Crackdown on AI Deception: Lawmakers Demand FEC Regulation of Misleading Campaign Ads

More than 50 congressional lawmakers and 30 organizations have rallied behind Public Citizen, a consumer rights group, in demanding that the Federal Election Commission (FEC) regulate the use of deceptive artificial intelligence (AI) in political campaigns. The call for action comes amid escalating concerns that AI could disrupt U.S. elections through the spread of deepfake images and audio recordings.

The push for regulation rests on the Federal Election Campaign Act’s prohibition of “fraudulent misrepresentation,” which Public Citizen argues should extend to AI-generated content that falsely depicts federal candidates. Widely available AI tools now allow virtually anyone to fabricate hyper-realistic content, increasing the risk of spreading false information and eroding public trust in elections.

Craig Holman from Public Citizen emphasized the stakes, stating, “If voters lose trust in elections, they lose trust in the pillar of democracy.”

Underscoring the urgency of the issue, 51 Senate and House Democrats have backed Public Citizen’s petition and are also pushing for mandatory disclaimers on AI-generated political ads. The Partnership on AI, a coalition of advocacy groups and tech companies including OpenAI and Meta, has echoed the call for swift action, citing how easily and quickly synthetic media can now be created.

Recent political campaigns have already begun experimenting with AI, raising red flags. Former President Donald Trump’s campaign released a deepfake video clip, and Ron DeSantis’ campaign used AI-generated images in an ad attacking Trump. These episodes demonstrate how readily AI can be used to manipulate and mislead, underscoring the need for regulatory intervention.

Public Citizen’s petition also points to the Chicago mayoral race, where a deepfake audio recording nearly derailed a candidate’s campaign, as a cautionary tale. FEC Chair Dara Lindenbaum acknowledged the difficulty of drafting AI regulations in time for the 2024 presidential election, suggesting that campaigns could instead seek advisory opinions from the commission.

Despite the clear danger posed by deceptive AI in political ads, there is debate over the FEC’s regulatory scope. The law firm Holtzman Vogel and the Antonin Scalia Law School Administrative Law Clinic question whether the FEC’s authority extends to this kind of fraud. The Democratic National Committee, however, contends that the FEC has both the authority and the responsibility to act against the threat.

The battle against AI deception in political media is not solely the FEC’s to fight; Congress also plays a crucial role. Senator Amy Klobuchar and Representative Yvette Clarke have introduced the REAL Political Ads Act, which aims to require transparency about the use of AI-generated content in political ads. Public opinion, moreover, strongly favors government measures to curb AI deception in political campaigns.

In a digital age where AI’s potential for misinformation is immense, the call for FEC action marks a critical step toward safeguarding the integrity of U.S. elections and preserving the foundation of democracy. As the political landscape evolves, the case for robust, proactive measures against AI deception only grows stronger.