The 2022 Los Angeles mayoral race served as a frantic testing ground for a technology that has now moved from the fringes of internet subcultures into the heart of American political strategy. While the public focused on the massive spending of billionaire developer Rick Caruso and the grassroots mobilization of Karen Bass, a quieter, more insidious force began to reshape the narrative: the deliberate use of AI-generated content to blur the lines between reality and fabrication. This wasn’t just a localized scandal. It was a warning shot.
The controversy centered on digital advertisements and social media assets that utilized synthetic media to place candidates in compromising, albeit fictional, scenarios. Unlike traditional attack ads that rely on out-of-context quotes or grainy slow-motion footage, these AI-driven assets created entirely new "realities." One specific instance involved an ad that appeared to show a candidate expressing views diametrically opposed to their public platform, using a voice and likeness so convincingly rendered that even seasoned political consultants struggled to identify the seams. This is no longer about "photoshopping" a flyer; it is about the wholesale manufacturing of human behavior.
The Mechanics of Deception
Political campaigns have always been in the business of persuasion, but the shift toward generative AI changes the fundamental physics of the industry. In previous cycles, creating a high-quality attack ad required a production crew, a studio, and a significant budget. Today, a single operative with a mid-range GPU and access to open-source diffusion models can produce a high-definition video of an opponent in minutes.
The process typically involves "training" a model on hours of public footage—speeches, debates, and press conferences. The AI learns the subtle cadence of a politician’s voice, the specific way their jaw moves when they say certain vowels, and the micro-expressions they make when under pressure. Once the model is refined, it can be scripted to say anything. In the L.A. race, the speed at which these assets were deployed meant that by the time a campaign could issue a formal denial, the video had already racked up hundreds of thousands of views on platforms like TikTok and X (formerly Twitter).
The Cost of Digital Truth
The financial barrier to entry has evaporated. A decade ago, a "dirty tricks" operation might cost a PAC upwards of $50,000 for a single television spot. Now, the cost is essentially the price of a monthly subscription to an AI platform.
| Resource | Traditional Production (Estimated) | AI-Generated Production (Estimated) |
|---|---|---|
| Equipment | $10,000 - $50,000 (Cameras, Lighting) | $1,500 (PC with high-end GPU) |
| Labor | $5,000 - $20,000 (Editors, Actors) | $0 - $500 (Single Prompt Engineer) |
| Time to Market | 1 - 2 Weeks | 2 - 4 Hours |
This efficiency creates a volume problem. Campaigns are now flooded with "chaff"—a constant stream of low-quality but high-impact misinformation that forces the opposition to play a permanent game of whack-a-mole. Every minute spent debunking a fake video is a minute lost talking about housing policy or crime statistics.
Why Los Angeles Was the Perfect Target
L.A. is a city of optics. As the global hub of the entertainment industry, the electorate is perhaps more attuned to high production values than any other city in the world. However, that same proximity to Hollywood creates a dangerous paradox: voters are used to seeing "the impossible" on screen, making them potentially more susceptible to highly polished fakes that look like the evening news.
The race featured a stark divide between the established political class and an outsider with deep pockets. This dynamic created an environment where "disruption" was not just a buzzword but a tactical necessity. When the stakes are this high—controlling a city with a GDP larger than that of many nations—the moral qualms about using synthetic media tend to vanish in favor of electoral viability.
The specific ad that sparked the firestorm utilized a technique known as lip-syncing synthesis. The creators took an existing video of a candidate and digitally mapped a new mouth onto their face, synchronized with a cloned voice. This allowed the attackers to keep the candidate's real eyes and body language, which are often the hardest parts for AI to get right. It was a hybrid lie—half-truth, half-algorithm.
The Legal Vacuum
The most jarring aspect of the L.A. controversy was the total lack of recourse. Current federal laws, specifically Section 230 of the Communications Decency Act, largely shield social media platforms from liability for the content users post. Meanwhile, the Federal Election Commission (FEC) has been agonizingly slow to regulate AI in political advertising.
In California, state-level attempts to curb deepfakes—such as AB 730—prohibit the distribution of deceptive audio or video of candidates within 60 days of an election. But there is a catch. The law includes exceptions for "satire" and "parody." Any clever strategist can simply slap a tiny, barely visible disclaimer on a video or claim the content was intended as a joke to bypass the restriction. The legal system is effectively bringing a knife to a laser fight.
The Psychology of the First Impression
Neurological studies suggest that the human brain processes visual information significantly faster than text. More importantly, we tend to retain the emotional impact of a visual even after we are told it is fake. This is the continued influence effect. Even if a voter sees a "fact-check" five minutes after watching a fake video of a candidate taking a bribe, the visceral feeling of distrust often lingers.
The L.A. mayoral race proved that you don't need to convince everyone. You only need to sow enough doubt to depress turnout or shift a few thousand undecided voters in key precincts. In a tight race, a 1% shift caused by a well-timed deepfake is the difference between victory and a concession speech.
The Rise of the "Liar's Dividend"
Perhaps the most damaging fallout of the L.A. incident isn't the fakes themselves, but the cover they provide for actual misconduct. This is known as the Liar’s Dividend. Now that the public knows AI can be used to manufacture scandals, any candidate caught on a real, legitimate recording doing something wrong can simply claim, "That’s a deepfake."
We saw the early stages of this defense during the 2022 cycle. When genuine but unflattering audio surfaced, the immediate response from certain camps was to question the authenticity of the file. This creates a reality where nothing can be proven. If everything could be fake, then nothing is definitively true. This erosion of a shared factual reality is the ultimate goal of many who deploy these tools.
Platforms and the Burden of Verification
The tech giants—Meta, Google, and TikTok—found themselves in a defensive crouch during the L.A. race. Their automated detection systems are designed to catch copyrighted music or explicit nudity, not the subtle nuances of a politically motivated deepfake.
While some platforms have since moved toward requiring labels for AI-generated content, these labels are often ignored or cropped out by users who reshare the content. Furthermore, the detection technology is always one step behind the generation technology. It is a mathematical arms race. If an AI detector looks for specific patterns in pixels, the next generation of AI generators will simply be trained to avoid those patterns.
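The "patterns in pixels" idea can be made concrete with a toy statistic. Early GAN and diffusion generators often left telltale high-frequency artifacts, so one naive detector simply measures how much of an image's spectral energy sits far from the center of its Fourier transform. The sketch below is purely illustrative — real forensic detectors are learned models, and any fixed statistic like this one is exactly what the next generator gets trained to evade:

```python
import numpy as np

def high_freq_energy_ratio(frame: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy beyond a normalized radius `cutoff`.

    `frame` is a 2-D grayscale image as floats. A detector built on a
    statistic like this would flag frames whose ratio looks anomalous.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the spectrum's center, normalized to ~[0, 1.4].
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    total = spectrum.sum()
    return float(spectrum[r > cutoff].sum() / total) if total > 0 else 0.0

# Smooth gradients concentrate energy at low frequencies;
# noise-like artifacts spread it toward high frequencies.
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = np.random.default_rng(0).random((64, 64))
assert high_freq_energy_ratio(noisy) > high_freq_energy_ratio(smooth)
```

The arms-race dynamic follows directly: once a statistic like this is public, a generator can add a loss term penalizing exactly that high-frequency signature, and the detector goes blind.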
The New Campaign Infrastructure
Wait-and-see is no longer an option for political consultants. The L.A. race forced a change in how "war rooms" are built. We are seeing the emergence of the Digital Forensic Officer—a staffer whose entire job is to monitor social media for synthetic attacks and use cryptographic tools to verify the campaign's own authentic footage.
Techniques like C2PA (Coalition for Content Provenance and Authenticity) are being discussed as a potential solution. This involves "digital watermarking" at the point of capture. When a camera records a candidate, it attaches a secure metadata tag that proves the footage hasn't been altered. However, this requires industry-wide adoption, from camera manufacturers like Sony and Canon to social media distributors. Until then, we are living in a period of digital anarchy.
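The capture-time provenance idea can be sketched in a few lines. This is a toy stand-in, not the C2PA standard itself — real C2PA manifests use certificate-based (COSE) signatures embedded in the media file, whereas here a simple HMAC over a content hash plays the role of the capture device's signature, and the device name and key are invented for illustration:

```python
import hashlib
import hmac
import json

def make_manifest(footage: bytes, signing_key: bytes) -> dict:
    """Produce a provenance claim for footage at the point of capture."""
    digest = hashlib.sha256(footage).hexdigest()
    # The claim binds the content hash to the (hypothetical) capture device.
    claim = json.dumps({"sha256": digest, "device": "camera-01"}, sort_keys=True)
    sig = hmac.new(signing_key, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify_manifest(footage: bytes, manifest: dict, signing_key: bytes) -> bool:
    """Check that the manifest is genuine and the footage is unaltered."""
    expected = hmac.new(signing_key, manifest["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was forged or tampered with
    claim = json.loads(manifest["claim"])
    return claim["sha256"] == hashlib.sha256(footage).hexdigest()

key = b"capture-device-secret"
clip = b"raw video bytes from the press conference"
manifest = make_manifest(clip, key)
assert verify_manifest(clip, manifest, key)             # untouched footage passes
assert not verify_manifest(clip + b"!", manifest, key)  # any edit breaks the chain
```

The adoption problem is visible even in this sketch: verification only means something if the signing key lives in trusted camera hardware and every platform in the distribution chain preserves and checks the manifest.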
The Strategy of Saturation
Strategic advisors are now moving toward a "saturation" model. If you cannot stop the deepfakes, you drown them out with your own AI-generated positive content. This leads to a future where the entire political discourse is a dialogue between two sets of algorithms, while human voters watch from the sidelines, increasingly alienated from the process.
The Los Angeles mayoral race was not an isolated incident of "dirty politics." It was the debut of a new era of cognitive warfare. The tools used were primitive compared to what is available today, and yet they managed to destabilize the conversation in one of the most sophisticated cities on earth. As we move into larger, national cycles, the lessons of L.A. shouldn't just be studied; they should be feared.
The real threat to democracy isn't a robot that thinks like a human. It's a human that can use a robot to make us stop believing in anything we see or hear. Campaigns that fail to build a "firewall of trust" through direct, unmediated contact with voters will find themselves powerless against the algorithmic tide. The only way to beat a deepfake is to have a reputation that a computer can't mimic.
Reach out to your local election officials and demand transparency on how they plan to handle synthetic media before the next ballot is cast.