AI Deepfakes Are Already in the Midterms. Nobody’s Stopping Them.
The big picture: Campaigns across the country are using AI-generated deepfakes, manipulated audio, and fabricated imagery to reach voters ahead of the midterms. AI companies have pledged $265 million in super PAC spending to oppose regulation. And the president himself uses the technology regularly, setting a tone that gives every other campaign permission to do the same.
Why it matters: Voters are being shown fabricated videos and audio of candidates with minimal or no disclosure. The technology is cheap and fast, and the rules against misusing it are essentially unenforceable. States that have tried to regulate political deepfakes could see their efforts rolled back by Trump’s new national AI framework.
The cases:
Texas: The National Republican Senatorial Committee posted an AI video of Democratic Senate nominee James Talarico reading his old tweets on camera. The tweets were real. The video was entirely fabricated. The only disclosure: a tiny watermark. Also in Texas, Republican Ken Paxton posted a fake video of John Cornyn dancing with Jasmine Crockett.
Massachusetts: A Republican candidate used AI to generate Governor Maura Healey’s voice saying things she never said. The campaign called it “parody” and “a creative and fun way to educate voters.” In a state race, a lawmaker posted an AI-generated fake newspaper front page showing his opponent holding hands with Zohran Mamdani.
The defense: Campaigns are calling these ads “visualizations,” “parodies,” and “modern tools.” DSPolitical’s CEO disagrees: “When you’re trying to be deceitful or have something that never existed, that’s a big issue.”
The Trump effect: The president uses AI regularly in political content. NYT tech reporter Tiffany Hsu: “If the president sets the political tone, then candidates could be less cautious. Their calculus might be that the public is becoming increasingly desensitized to AI’s reality distortion effect.”
The money: AI companies have pledged $265 million for midterm super PACs. In New York, millions are being spent attacking state lawmaker Alex Bores for sponsoring an AI safety bill. The Wall Street Journal described his race as “a test case” for the industry to send a message that regulating AI will be punished.
The regulation gap: Some states require AI disclosure in political ads. But Trump’s new national AI framework could limit states’ ability to regulate. Many experts say national standards will be weaker than what states have already built.
By the numbers:
$265 million — AI company pledges for midterm super PACs
0 — verbal disclaimers in the Talarico deepfake
1 — barely visible watermark disclosing AI use
50 — states, any of which could see existing AI regulations overridden by a national framework
The bottom line: The tools are cheap. Enforcement is nonexistent. The president does it. And the companies making the tools are spending a quarter of a billion dollars to keep it that way. Between now and November, every political ad you see deserves the same question: Is this real?