The Information
Art by Clark Miller
The Big Read

5 Predictions About How AI Will (and Won’t) Affect the 2024 U.S. Election

Doomers fear AI could unleash a torrent of deepfakes and other misinformation during this election. But two tech veterans of political campaigns predict the worst AI fears won’t come to pass this year—and the technology may even help democracy by reaching more voters.

By Alexandra Lindsay and Greg Dale

Want to feel old? It was more than five years ago that director Jordan Peele teamed up with BuzzFeed to create a viral deepfake video of Barack Obama uttering a series of improbable lines, a clip meant as a public service announcement about how the technology could be used to manipulate public opinion. “It may sound basic, but how we move forward in the age of information is going to be the difference between whether we survive or become some kind of fucked-up dystopia,” Peele said, ventriloquizing Obama.

Now, on the precipice of the 2024 election season, that dystopia is just around the corner, courtesy of artificial intelligence—or so say some of the doomsayers. Last week, Fortune magazine quoted Oren Etzioni, an AI expert and professor emeritus at the University of Washington, who imagines a coming flood of AI-fabricated content showing President Joe Biden being rushed to the hospital or depicting a run on banks. “I am completely terrified,” Etzioni said.

And in June, former Google chair Eric Schmidt told CNBC that AI-generated misinformation in this year’s election was one of the biggest short-term dangers from the technology. “The 2024 elections are going to be a mess because social media is not protecting us from false generated AI,” Schmidt said.

We’re not so sure about that. As technologists with years of experience in the political trenches, we have collectively worked on digital strategy for hundreds of campaigns at the federal, state and local level, in addition to national voter mobilization campaigns.

From our perspective, the impact of AI on this election is likely to be more nuanced than many people predict (including the American public: A recent poll showed that 58% of American adults are concerned about the use of AI increasing the spread of false information during the 2024 presidential election).

Don’t get us wrong—there are real reasons to worry about how rapid advances in AI technology will impact elections. We’ve just emerged from several election cycles in which everyone from Russian agents to presidential candidates themselves spread disinformation widely via social networks. In this new era, foreign adversaries, campaign managers and meme lords will increasingly explore and exploit generative AI’s newfound abilities to make an impact on the political scene. But the AI election apocalypse isn’t likely to happen this year.

Here are our predictions for how AI will (and will not) impact the 2024 U.S. election cycle:


1. AI-generated audio will cause far more disruption than photos and videos.

Consider a hypothetical scenario: The day before a hotly contested GOP primary for a local sheriff’s race, an AI robocall, supposedly from Donald Trump, endorses one candidate and labels the other a RINO—a Republican in name only—carrying the former to a narrow win. It’s plausible enough. Trump does record real endorsement calls, so the local newspaper might not fact-check this one (if the local newspaper even exists anymore). But this recording is fabricated; the candidate has no such endorsement from the real Trump.

We predict that there will be at least one high-profile incident in which deepfake audio impacts a local race—and many more incidents that fly under the radar.

Audio deepfakes can be extremely convincing and are vastly easier to produce than realistic synthetic video. The distribution channels for audio—particularly robocalls—are a longtime target for election meddling, and fakes are difficult to detect. Unlike social networks that monitor their users’ posts, phone carriers aren’t able to monitor and block illegal political robocall campaigns in real time due to long-standing telecommunication laws.

Audio deepfakes are already causing trouble in international elections. In 2023, audio deepfakes of well-known politicians and journalists were deployed to deceive voters in the U.K. and Slovakia. Assessing the influence of individual pieces of content on election results is always challenging, but thanks to delays in analyzing and responding to the deepfakes, the statements debunking them didn’t get nearly the attention the original recordings did. Deployed in a city or state legislative election, with little-known candidates and next to no media oversight, such fakes could have an even more significant impact.

Conversely, experts have been sounding the alarm about image and video deepfakes and their impact on elections for several election cycles, but there’s no indication they’ve had more influence yet than the kind of misleading media—memes depicting candidates as superheroes, for example—that campaigns and their allies have long created using traditional editing tools such as Photoshop. This holds true even in recent global elections in which cutting-edge generative image tools like Dall-E have been available.


2. No large-scale, personalized AI-generated persuasion will happen.

Another, sneakier way in which AI could influence elections, some technologists and lawmakers predict, is by supercharging political persuasion—the act of changing voters’ minds about an issue or candidate through arguments tailored to their personal concerns.

OpenAI CEO Sam Altman has repeatedly discussed his concerns about the impact of AI-powered “personalized 1:1 persuasion” on elections. The U.K. government has highlighted the same threat, U.S. Rep. Ted Lieu is talking about it and reporters are breathlessly covering it.

People in that camp worry about a future in which networks of conversational AI bots message voters after ingesting their social media posts on everything from their grievances to their pet causes, adjusting responses based on what they’re most likely to care about.

Imagine that in October 2024, a new mom whose three-month-old requires special care gets a text message from someone purporting to be another mom in her town, detailing an imaginary Trump plan to extend paid family leave for parents like her. But the “mom” who sent the text is actually a conversational AI bot that has read the posts the real mom has made in the past on social media about paid family leave. The text chat convinces her to give Trump a shot.

On a large scale—in which tens of thousands of others are also convinced to shift their votes through personalized messages—such a scenario could make today’s concerns about personalized ads for products seem quaint. It would be even more frightening if a foreign adversary—Russian intelligence, say—harnessed these tools.

However, we predict personalized AI persuasion won’t be a factor in the 2024 election.

For starters, the research is clear: Real-world personalized political persuasion is extremely difficult to carry out. The general public overestimates the effectiveness of targeted political persuasion messages; in practice, the number of persuadable voters is low, and persuading that small fraction of the electorate to change their minds is difficult.

Second, it’s not cheap or easy to gather granular, individualized data on what voters might respond to or to predict what messages they’d find persuasive. While campaigns and bad actors can build databases of attributes like age, ethnicity and whether someone voted in the last election, it’s hard for them to know which households just had a new addition to the family, who was a Biden 2020 crossover voter or how that voter would respond to an extension of paid family leave. Even powerful AI systems would be limited to the data they are able to gather. Much of this individual data lives within the walled gardens of social media companies, which are hostile to crawling and scraping, including by outside AI companies hungry for data to train their models.

Third, a far-reaching AI-powered persuasion campaign would also require getting unparalleled access to the big tech platforms. U.S.-based social media companies scrutinize political ads more than they have in the past. Although political advertisers on Meta Platforms apps can target lists of individuals based on their characteristics just as commercial advertisers can, the company doesn’t allow them to run the hyperpersonalized ads AI critics fear. On YouTube, political advertisers face similar restrictions. And on traditional television, it’s not even technically possible. Social media companies also don’t allow unsolicited one-on-one direct messages from campaigns. Even though layoffs have recently diminished trust and safety teams at many large platforms, significant barriers remain to creating endless fake profiles on those platforms for bot armies.

Meanwhile, other channels for one-on-one communication, like text messaging, have additional hurdles in place. U.S. wireless carriers now require heightened identity checks for political campaigns, allowing the carriers to block bad actors before they can send any meaningful volume of texts. Major email providers have gotten much better at relegating suspicious emails to spam or promotion inboxes, which limits the reach of persuasion efforts. Regulatory hurdles in states like California make it illegal to use a bot for political purposes without clear disclosure.


3. Congress and the Federal Election Commission will prevent AI-generated political ads from taking off.

Despite bipartisan angst about AI and near-weekly hearings and forums on the topic, a sclerotic Congress has yet to pass meaningful AI regulation, and agencies like the FEC feel the pressure to take action but haven’t done more than seek comment.

However, we predict that before this year’s elections, Congress and the FEC will overcome gridlock to institute the first AI regulations restricting or labeling AI-generated political advertising by campaigns and political action committees.

Campaigns have plenty of good reasons to use AI tools for advertising: they hold the promise of lower production costs and more rapid development cycles, allowing campaigns to iterate on content quickly. The same tools also make it easy to deceptively alter the image of a campaign’s own candidate or an opponent. However, states including Texas, Michigan, Minnesota and Wisconsin have already passed or are in the process of passing restrictions on AI-generated political ads. In 2024, we’ll likely see similar federal rules that require disclaimers for any ads that use AI-generated media and that prohibit the use of such media in the 90-day window prior to the general election.

Unlike proposals for much broader regulation of AI, these restrictions are likely to pass because there’s virtually no opposition to them. AI companies can write them into their terms of service with no damage to their business models or brands (they may even boost their brands in the process). For their part, members of Congress can regulate something they know well—political campaigns!—without the risk of embarrassing themselves. And they’ll finally be perceived as taking action on AI.

Even when these bills pass, there will still be some campaigns that thumb their noses at the new rules and have little regard for the consequences, but they’ll be few and far between. The average candidate on both sides of the aisle has no appetite for extra legal or financial headaches, and a slightly more memeable ad isn’t worth running afoul of regulators.

Combined with restrictions on AI-generated content by the big tech platforms—Google and YouTube, for example, began requiring disclosures on any political ads with AI-altered audio or imagery in November—ads with synthetic media simply won’t be a major factor in the election. Of course, these ad restrictions won’t do anything to stop unofficial online warriors from using generative AI to pump meme images and videos into news feeds that benefit their favored candidates. But the impact of those feeds will be muted compared to the predictable distribution and large audience of actual ads.


4. Campaigns communicating in five-plus languages will be the new normal.

In the U.S., 29.6 million people have low proficiency in English and speak a different language at home. Historically, those people have been difficult populations for campaigns to engage with. Millions miss out on voting altogether because of the language barrier. AI developments, though, are making multilingual outreach much more feasible.

We predict that in 2024, most campaigns will use AI-enabled translation tools to reach voters in their primary language—and that’s good news for democracy.

Most campaigns consist of very small teams with limited financial resources—a candidate running for state legislative office, for instance, typically has a campaign manager and a fundraising head. Campaigns operating in multicultural districts sometimes get lucky with a multilingual volunteer or team member who can help handle translation tasks that reach core constituencies. But that usually only provides the bandwidth to create high-priority pieces of content such as a website landing page. It’s unusual for anyone but the largest, most well-funded campaigns to have multilingual staff who can create social media posts, respond to text messages and reliably translate mailers.

Now a single volunteer can translate a campaign message quickly and at little or no cost, using Google Translate or via translation application programming interfaces embedded in their marketing tools. Because these tools can perform impressively accurate idiomatic translations, it’s no longer a problem to reach voters who speak long-tail U.S. languages like Armenian, Amharic and Hmong. Politicians and candidates are already experimenting with such tools, and usage will escalate in the fall as campaigns reach out about voter registration, mail-in ballots and polling locations.
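As an illustration, the translate-once-reuse-everywhere workflow described above can be sketched in a few lines of Python. Note the assumptions: the voter records, language codes and the `translate()` helper here are all invented for the sketch; a real campaign tool would replace the stub with a call to an actual translation API such as Google Cloud Translation.

```python
# Illustrative sketch of multilingual voter outreach. The translate()
# function is a STUB standing in for a real translation API; everything
# below (message, voter file, language codes) is hypothetical.

MESSAGE = "Polls are open from 7 a.m. to 8 p.m. on Election Day."

# Hypothetical per-voter language preferences from a campaign's voter file.
VOTERS = [
    {"name": "Ana", "language": "hy"},   # Armenian
    {"name": "Abel", "language": "am"},  # Amharic
    {"name": "Mai", "language": "hmn"},  # Hmong
    {"name": "Sam", "language": "en"},
]

def translate(text: str, target: str) -> str:
    """Stub: a real implementation would call a translation service here."""
    if target == "en":
        return text
    return f"[{target}] {text}"  # placeholder output for the sketch

def build_outreach(message: str, voters: list) -> dict:
    """Translate the message once per language, then reuse the result."""
    cache = {}
    for voter in voters:
        lang = voter["language"]
        if lang not in cache:
            cache[lang] = translate(message, lang)
    return cache

texts = build_outreach(MESSAGE, VOTERS)
for lang, text in sorted(texts.items()):
    print(f"{lang}: {text}")
```

Caching one translation per language, rather than translating per voter, is what keeps the cost near zero even for a large contact list.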


5. TikTok’s algorithm will be the Cambridge Analytica of the 2024 U.S. election cycle.

Despite all the hype and hand-wringing about generative AI, the predictive AI deployed in more traditional applications—for example, the recommendation engines that power TikTok and Instagram’s Reels—will have more of an impact on the election. TikTok in particular has exploded in use and importance since the last big U.S. election cycle, surging to more than 150 million users in the U.S. as of last March. A third of young adults in the U.S. regularly get their news from the platform.

TikTok’s rapid growth, ties to China and spotty record on controlling the spread of misinformation generate suspicion on both sides of the aisle from those nervous about election interference.

No matter which party loses the presidential election, we predict it will reserve special blame for TikTok’s algorithm—and no one will be able to prove or disprove the claim for years.

Unlike the classic friend-following feed model on Facebook or Twitter, TikTok’s algorithm tests the popularity of content with a subset of its users and then spreads it quickly beyond those users if the initial test subjects find it engaging. This speed makes TikTok a high-octane vector for highly engaging video clips, including a lot of dubious or false content. In 2022, journalists found that election misinformation spread quickly on TikTok before the company took it down. The algorithm’s dynamics haven’t changed much since then.
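The test-then-amplify dynamic described above can be modeled as a toy simulation. To be clear, the seed-audience size, engagement threshold and tier multiplier below are invented for illustration; they are not TikTok’s actual parameters, only a sketch of why an engagement-gated feed spreads a compelling clip so much faster than a dull one.

```python
import random

# Toy model of an engagement-gated feed: show a clip to a small seed
# audience first, and widen distribution only if the measured engagement
# rate clears a threshold. All numbers are illustrative assumptions.

SEED_SIZE = 500        # initial test audience
THRESHOLD = 0.10       # engagement rate needed to advance a tier
TIER_MULTIPLIER = 10   # how much each tier widens distribution
MAX_TIERS = 4

def simulate_distribution(engage_prob: float, rng: random.Random) -> int:
    """Return total views a clip receives under tiered testing."""
    audience = SEED_SIZE
    total_views = 0
    for _ in range(MAX_TIERS):
        # Each audience member independently engages with some probability.
        engaged = sum(rng.random() < engage_prob for _ in range(audience))
        total_views += audience
        if engaged / audience < THRESHOLD:
            break  # clip failed the test; stop amplifying
        audience *= TIER_MULTIPLIER
    return total_views

rng = random.Random(0)
dull = simulate_distribution(engage_prob=0.02, rng=rng)    # boring clip
viral = simulate_distribution(engage_prob=0.30, rng=rng)   # engaging clip
print(dull, viral)
```

In this toy model the dull clip never leaves the seed audience, while the engaging one passes every tier and reaches an audience hundreds of times larger, regardless of whether its content is true.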

TikTok also has an immature trust and safety apparatus compared with older platforms. In 2022, researchers at Global Witness submitted ads with disinformation to various social media platforms. They found that TikTok accepted far more of the ads than Facebook and YouTube did. This is particularly concerning because advertisements are supposed to be subject to stringent approval processes by the big tech platforms, unlike ordinary videos that users share.

The existing mistrust of TikTok in media and political circles, combined with high-profile incidents that amplify pieces of political disinformation to a massive audience, will make TikTok’s algorithm too tempting a scapegoat for candidates and partisan media to pass up. As with the Cambridge Analytica scandal, it will take years for researchers combing through the data to know the extent of TikTok’s actual impact—at which point the elected president will already be well into their term.

Alexandra Lindsay is co-author of AI Political Pulse, a newsletter dedicated to the politics and policy of artificial intelligence. She is the board chair at Close the Gap California, a nonprofit that recruits women to run for the California State Legislature. She formerly served as Head of Product and Operations at Tech for Campaigns, a nonprofit focused on bringing advanced digital marketing and data science to politics.

Greg Dale is co-author of AI Political Pulse, a newsletter dedicated to the politics and policy of artificial intelligence, and is a marketing and product consultant. He formerly served as CEO of Tech for Campaigns, working with hundreds of Democratic campaigns and independent expenditures at all levels of the ballot.
