Here is how abruptly artificial intelligence became an issue in U.S. elections: when Vice President Kamala Harris brought it up while debating Donald Trump on September 10, it was the first time a candidate had mentioned AI in a general election presidential debate. Yet the technology has so thoroughly suffused our personal lives and the global economy that barely anyone batted an eye.
At that debate, Harris said the country must invest “in American-based technology so that we win the race on AI, on quantum computing.” Trump, too, has described a vision for U.S. leadership in the AI industry. The next president, no matter their party, is almost certain to continue using export controls on chips and related hardware to seek an economic and technical advantage over China. But when it comes to articulating the risks of AI, there is a substantial difference between the candidates. Harris has been outspoken on the issue and has set out specific policy steps to protect vulnerable people from potential AI harms. In contrast, Trump has expressed vague resignation, calling AI “maybe the most dangerous thing out there” because it lacks a “real solution,” as he put it in a Fox Business Network interview earlier this year.
“We can anticipate that a second Trump administration would not actively attempt to secure people’s civil rights and liberties or elevate consumer rights,” says Alondra Nelson, a social scientist at the Institute for Advanced Study, who was the acting director of the Biden administration’s White House Office of Science and Technology Policy until 2023. “There would be less support for workers who are trying to mitigate big tech’s surveillance, its curtailment of workplace safety and its amassing and consolidation of more power.”
Whoever wins in November will do so at a critical time to shape AI policy. No U.S. state or federal safety law has yet addressed the most powerful forms of this technology. One high-profile bill that would have established guardrails for any AI developer doing business in California, SB 1047, was vetoed by Governor Gavin Newsom in late September. Members of Congress have not managed to pass comprehensive legislation either, though not for lack of ideas. After the public debut of OpenAI’s ChatGPT in 2022, proposals for House and Senate bills on the use or governance of AI increased exponentially, according to the Brookings Institution, a nonpartisan think tank in Washington, D.C. The White House will face mounting pressure to respond to this technology and to urge Congress to act.
AI Executive Orders
To see what each candidate might do in the AI arena if elected president, let’s look at what they have done so far. AI has been governed at the national level mainly by three executive orders—two issued by the Trump administration and one by the Biden-Harris administration, in which the vice president has taken an active role in AI policy.
Trump issued his first AI-related order, Maintaining American Leadership in Artificial Intelligence, in February 2019. It emphasized the right to privacy and other values that the technology should reflect, and it launched the American AI Initiative, which called for prioritizing AI research and development in industry and academia. His second order, issued in 2020, outlined principles for federal AI use, such as transparency and lawfulness. Government agencies should have ways to “supersede, disengage, or deactivate” AI, the order said, if an application’s performance becomes “inconsistent” with its intended use.
“The Trump administration’s approach was fairly well received,” says Valerie Wirtschafter, an AI policy expert at the Brookings Institution, but “there was some criticism about the limited attention to risks.” If the American AI Initiative fell short, observers said at the time, it was largely in its lack of specifics, particularly funding, that could have translated policy goals into an on-the-ground reality.
The contrast between Trump’s two executive orders and the one issued under Biden is partly a result of philosophy and partly a result of timing. The Biden-Harris administration’s order came after the public launch of ChatGPT and similar large language and generative models, and after AI-made text and deepfakes (realistic but fabricated pictures or videos) started flooding the Internet. Issued in October 2023, the order outlined policies for government use of AI and authorized a test of the National AI Research Resource, a conceptual program to support Americans who study or develop the technology. The Harris-Walz campaign recently pledged to transform that two-year pilot program into a permanent national infrastructure for AI.
But the order also defined AI’s potential dangers to privacy and consumers, building on the Blueprint for an AI Bill of Rights developed by Nelson and others at the Office of Science and Technology Policy: AI models must be safe and effective; algorithms should not discriminate; you should be able to opt out in favor of a human alternative; your data needs to be safeguarded; and when you’re interacting with an automated system, you should be notified.
“The Trump executive orders were more focused on how we can foster innovation,” Wirtschafter says, whereas “the Biden administration really did tackle some of those risks.” Biden’s 2023 order establishes a reporting system through the Department of Commerce that requires U.S. companies to provide ongoing updates on their development of the most powerful AI systems. Known as “foundation,” or “frontier,” models, such systems generally train themselves through self-supervision and have at least tens of billions of parameters. (If an AI model is likened to a brain, parameters are analogous to its neural connections or synapses.)
Risks and AI Memes
In November 2023 Harris led the U.S. delegation to a first-of-its-kind global AI safety summit in the U.K., where she announced the creation of the U.S. AI Safety Institute within the National Institute of Standards and Technology. In her speech, Harris framed the risks AI poses to individuals, such as deepfakes and disinformation, as existential problems for their targets—broadening the definition of “existential” beyond, say, human extinction or other doomsday AI scenarios. The 2024 Democratic platform, released this summer, also notes the “extraordinary potential for both promise and peril” of AI, contrasting applications such as weather prediction with voice cloning and fraud.
Yet “one of the biggest” risks of AI, according to Trump’s pick for vice president, J. D. Vance, is an “insane political bias” against conservatism in models such as ChatGPT and Google’s Gemini. That’s overblown and inaccurate. Several studies—including a 2024 report in PLOS ONE that quizzed the AI systems Vance mentioned and 22 others—have found that most large language models reply to political questions with centrist or moderately left-leaning answers, not radical views.
The GOP platform, echoing Vance, suggests AI development ought to be “rooted in Free Speech and Human Flourishing.” It also alleges that the Biden-Harris executive order is “dangerous” because it “hinders AI Innovation.” Trump, too, has vowed to repeal the order. Other conservative critics have faulted it for relying on the Defense Production Act’s emergency powers to require reports to the Bureau of Industry and Security. If the order were revoked, the removal of this government oversight for frontier AI models would be the “biggest consequence,” Wirtschafter says.
There is another difference in the candidates’ approaches to AI. Harris will not use AI-generated images or text in campaign material, her campaign confirmed to Scientific American. Trump, by contrast, has amplified AI-made content on X (formerly Twitter) as well as on Truth Social, the social media platform he owns. These posts have included fabricated images falsely suggesting that Taylor Swift and her fans supported Trump. Swift, possibly the most famous target of sexually explicit deepfakes, wrote on Instagram in September that the false endorsement “really conjured up my fears around AI, and the dangers of spreading misinformation.” In the post’s next paragraph, Swift endorsed Harris.
Nonconsensual AI images are a problem for more than just the world’s biggest pop stars. About 15 percent of U.S. high schoolers say they have heard of a sexually explicit deepfake involving someone associated with their school, according to a recent report from the nonprofit Center for Democracy & Technology. The same report estimated that two in five U.S. high school students were aware of deepfakes of any kind being shared at school.
“This is going to be a suite of technologies that touches every facet of life, in some sense,” Wirtschafter says. AI existed long before large language models, she points out, but ChatGPT was a jolt that woke people up to the technology. Now, eyes opened, voters must decide which president is best equipped to grapple with the problems of an AI era.