Published Date: 9/4/2025
Congress and state attorneys general are moving swiftly to crack down on AI-generated sexual imagery, even as Washington has yet to enact comprehensive laws to curb political deepfakes before the 2026 elections.
On May 19, President Donald Trump signed the Take It Down Act, marking the first federal statute directly targeting the non-consensual posting and distribution of intimate imagery, including AI-generated “deepfakes.” The law requires covered platforms to remove flagged images within 48 hours of a valid notice and grants the Federal Trade Commission (FTC) authority to enforce those notice-and-removal requirements. It also introduces new criminal penalties for individuals who publish such content without consent. Platforms have one year to implement the mandated reporting and removal systems, while the criminal provisions are already in effect.
The bill’s sponsor, Republican Sen. Ted Cruz, emphasized the practical benefits for victims. “The Take It Down Act gives victims of revenge and deepfake pornography, many of whom are young girls, the ability to fight back,” he said. After the bill’s signing, Cruz praised the “bravery and dedication of Elliston Berry,” a Texas student whose case helped galvanize action. Berry, who attended the signing, described the emotional toll of having explicit fakes of her spread among peers. “I had PSAT testing the next day, the last thing I need was to wake up and find out that someone made fake nudes of me,” she told CBS News, adding, “I can’t go back and redo what he did, but instead, I can prevent this from happening to other people.”
States have also been proactive in enacting their own guardrails. Maryland’s SB 360, effective July 1, broadened its “revenge porn” statute to include computer-generated depictions and strengthened civil remedies, with criminal penalties reaching up to two years in prison and $5,000 in fines. Texas enacted Senate Bill 20, the Stopping AI-Generated Child Pornography Act, which created new offenses for “obscene visual material” depicting minors, whether real, animated, or AI-generated. The bill took effect on September 1.
Washington state expanded its framework as well, criminalizing the willful distribution of forged digital likenesses and reinforcing protections against fabricated intimate images. State attorneys general are also applying coordinated pressure on the tech ecosystem that enables sexual deepfakes. In late August, a bipartisan coalition announced letters to major search engines and payment platforms, urging them to take concrete steps to restrict “nudify” and “undress” tools and to cut off monetization of businesses that sell them.
California Attorney General Rob Bonta said the aim is to push companies that “are indirectly part of the ecosystem” to become “part of the solution,” noting that such images have been used to “bully, harass, and exploit people all over the world.” Massachusetts Attorney General Andrea Joy Campbell co-led a bipartisan coalition of 47 attorneys general in calling on major search engines and payment platforms to take stronger action against the increasing spread of computer-generated deepfake nonconsensual intimate imagery.
In a letter to search engines, the coalition outlined the failures of these companies to limit the creation of deepfakes and called for stronger safeguards, such as warnings and redirecting users away from harmful content, to better protect the public. In a separate letter to payment platforms, the coalition urged these companies to identify services that generate deepfake pornography and cut off their payment authorization.
By contrast, the policy response to political deepfakes remains piecemeal. Congress has proposed the REAL Political Advertisements Act and the Protect Elections from Deceptive AI Act, which would, respectively, require disclosures for AI-generated content in campaign ads or ban materially deceptive AI-generated content about candidates, but neither has become law. In the absence of legislation, federal action has come through agencies. In February 2024, the Federal Communications Commission (FCC) clarified that AI-generated voice calls fall under the Telephone Consumer Protection Act’s restrictions on “artificial or prerecorded voice,” making such robocalls unlawful without the required consent.
The move followed an episode in which calls mimicking President Joe Biden’s voice targeted New Hampshire voters. In August 2024, the FCC opened a rulemaking to require on-air and written disclosures for AI-generated content in political ads on broadcast, cable, and satellite. The proposals are still pending. States are testing the limits of what can be policed around election speech and are running into First Amendment challenges. California’s attempts have been repeatedly enjoined. In October 2024, a federal judge blocked AB 2839, which created private lawsuits over election deepfakes, calling it a “blunt tool” that risks suppressing protected expression.
In August, a federal judge likewise struck down AB 2655, known as the Defending Democracy from Deepfake Deception Act, which required large platforms to block or label certain “materially deceptive” election content around voting periods. X Corp., among others, challenged the law, and the court found it likely unconstitutional. Minnesota’s election deepfake statute is also in court, with litigation brought by X Corp. arguing that the law criminalizes protected political speech and impermissibly burdens platforms.
Despite these challenges, 28 states had enacted laws covering political deepfakes as of July 10. A separate federal debate centers on likeness and voice rights. The bipartisan NO FAKES Act would give individuals federal control over “digital replicas,” set up a DMCA-style takedown process, and create a private right of action with carve-outs for news and parody. Entertainment and music groups have rallied behind the bill, while tech platforms like YouTube signaled support this spring. Critics, including civil liberties advocates and some academics, warn that sweeping protections could chill lawful parody and remix culture.
Outside politics, the fraud front is widening. On September 3, the American Bankers Association (ABA) Foundation and the FBI released a public-facing infographic explaining how manipulated images, video, and audio are fueling impostor schemes and other consumer scams. The ABA highlighted FBI figures, noting that since 2020, consumers have filed more than 4.2 million fraud reports totaling over $50.5 billion in losses, with a growing slice involving deepfake-enabled deception.
“Deepfakes are becoming increasingly sophisticated and harder to detect,” said Sam Kunjukunju, the ABA Foundation’s vice president of consumer education. “This infographic provides practical tips to help consumers recognize red flags and protect themselves.” Jose Perez, the FBI’s Criminal Investigative Division assistant director, said educating the public is essential “so they can spot deepfakes before they do any harm.” The ABA Foundation will revive its BanksNeverAskThat and PracticeSafeChecks campaigns in October and continues its Safe Banking for Seniors program for elder-fraud prevention.
The uneven legal landscape reflects a broader trend. Since 2019, states have passed deepfake laws at a rapid pace. By late July, 47 states had at least one deepfake law on the books across various categories. Many of these measures focus on sexual imagery, reflecting a political consensus that intimate abuse is both pervasive and addressable without entangling core political speech.
The Take It Down Act imposes a national baseline against non-consensual intimate imagery, with both a criminal backstop and civil enforcement by the FTC, while leaving political content in constitutional crosswinds. Under the new statute, covered platforms must build 48-hour takedown pipelines by May 19, 2026, and are shielded when they remove reported content in good faith. Individuals who knowingly publish intimate fakes face criminal exposure.
For election deepfakes, voters are still largely reliant on a shifting patchwork of state rules, defamation law, platform policy, and, if the FCC finishes its rulemaking, broadcast-ad disclosures that wouldn’t reach most online channels. Advocates who championed the Take It Down Act, some of whom simultaneously warned about its potential for misuse, see it as a long-overdue baseline against intimate-image abuse, even as they press for careful tailoring elsewhere.
Civil liberties groups have urged caution about any regime that pushes platforms toward heavy-handed removal, especially in the political realm. The strain between these positions helps explain why Congress moved quickly on sexual deepfakes but not on campaign speech. Absent further federal action, next year is likely to bring more state experimentation, more court fights, and growing pressure on platforms to draw their own lines before the 2026 election cycle kicks into high gear.
Q: What is the Take It Down Act?
A: The Take It Down Act is a federal statute signed into law on May 19, 2025, that targets the non-consensual posting and distribution of intimate imagery, including AI-generated deepfakes. It requires covered platforms to remove flagged images within 48 hours of a valid notice and introduces new criminal penalties for individuals who publish such content without consent.
Q: How are states addressing AI-generated sexual deepfakes?
A: States are enacting their own laws to combat AI-generated sexual deepfakes. For example, Maryland’s SB 360 broadened its “revenge porn” statute to include computer-generated depictions, and Texas enacted the Stopping AI-Generated Child Pornography Act, which created new offenses for obscene visual material depicting minors.
Q: What actions are state attorneys general taking against tech companies?
A: State attorneys general are applying coordinated pressure on tech companies to restrict tools that enable the creation of deepfakes and to cut off monetization for businesses that sell them. They have sent letters to major search engines and payment platforms urging them to take stronger action.
Q: What is the status of legislation addressing political deepfakes?
A: The policy response to political deepfakes remains piecemeal. Congress has proposed the REAL Political Advertisements Act and the Protect Elections from Deceptive AI Act, but neither has become law. Some states are testing the limits of what can be policed around election speech, but they are running into First Amendment challenges.
Q: How are deepfakes being used in fraud schemes?
A: Deepfakes are being used in fraud schemes, such as impostor scams and consumer fraud. The American Bankers Association (ABA) Foundation and the FBI have released an infographic to educate the public about recognizing and protecting themselves from deepfake-enabled deception.