The Federal Trade Commission (FTC) released a report to Congress on Thursday about using artificial intelligence (AI) to combat online harms, advising lawmakers to exercise “great caution” before relying on the technology as a solution. Instead, the FTC urged Congress to establish a legal framework to address these issues.
The report comes after Congress enacted legislation last year directing the FTC to explore ways that AI might be used to address specific online harms and to recommend policies, procedures, and legislation on the issue. Congress was especially concerned about “online fraud, impersonation scams, fake reviews and accounts, bots, media manipulation, illegal drug sales and other illegal activities, sexual exploitation, hate crimes, online harassment and cyberstalking, and misinformation campaigns aimed at influencing elections.”
The FTC expressed concern about the various harms AI can cause, such as inaccuracy, bias, and discrimination, and about its tendency to encourage invasive commercial surveillance. The Commission notes that the use of AI, especially by big tech, can be problematic because of these issues.
“Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content,” Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, said in a press release. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology—which can be both helpful and dangerous—will take these problems off our hands.”
The FTC advises against adopting AI as a policy solution for these online problems, warning that its use could lead to further harm. The report points to several AI-related problems, such as inherent design flaws and inaccuracy, bias and discrimination, and incentives for commercial surveillance.
Specifically, the agency asserts that AI tools are “blunt instruments with built-in imprecision and inaccuracy,” so their ability to detect harmful content will be limited by these flaws. Additionally, the FTC states that an AI tool can reflect “the biases of its developers that lead to faulty and potentially illegal outcomes.” Corporations may also be incentivized to engage in commercial surveillance and data extraction, because AI tools require large amounts of data to build and operate.
The FTC recommends that lawmakers set up a legal framework to mitigate the harm caused by AI, since it is already being used by big tech.
The FTC voted 4-1 at an open meeting to send the report to Congress. Chair Lina M. Khan and Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya issued separate statements; Commissioner Christine S. Wilson issued a concurring statement, and Commissioner Noah Joshua Phillips issued a dissenting statement.
“As AI tools continue to become more widely adopted across the economy and across contexts, deepening our agency’s expertise in this area will be critical,” FTC Chair Khan said in a statement. “Understanding how these tools can be used and misused, including through unlawful business practices that the FTC is charged with prohibiting, is of paramount importance.”
Khan added, “I especially appreciated the observation that while almost all of the harms that Congress listed far preceded the internet and are not themselves the product of AI, newer technologies do appear to play a key role in amplifying and exacerbating many of these harms, including sometimes by design. This in particular is a key area where we should deepen our understanding, including of the business models that can incentivize these practices.”
Meanwhile, in a dissenting statement, Commissioner Phillips stated, “I do not believe we conducted the requisite study, and I do not think the report on AI issued by the Commission (‘AI Report’ or ‘Report’) takes sufficient care to answer the questions Congress asked. The Report gives short shrift to how and why AI is being used to combat the online harms identified by Congress. Instead, the Report reads as a general indictment of the technology itself. I respectfully dissent.”