Nobel Prize in Physics Awarded for Breakthroughs in Machine Learning

The 2024 Nobel Prize in Physics went to John Hopfield and Geoffrey Hinton for developing techniques that laid the foundation for revolutionary advances in artificial intelligence


The Nobel Committee for Physics has announced that John Hopfield and Geoffrey Hinton won this year’s Nobel Prize in Physics “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”


The human brain, with its billions of interconnected neurons giving rise to consciousness, is generally considered the most powerful and flexible computer in the known universe. Yet for decades scientists have been seeking to change that via machine-learning approaches that emulate the brain’s adaptive computational prowess. The 2024 Nobel Prize in Physics was awarded on Tuesday to U.S. scientist John Hopfield and British-Canadian scientist Geoffrey Hinton, each of whom used the tools of physics to develop artificial neural networks that laid the foundations for many of today’s most advanced artificial intelligence applications.

Reached via telephone while in California, Hinton told the Royal Swedish Academy of Sciences that he was “flabbergasted” to learn he’d received the award. After decades of effort to advance AI, he is now one of the most prominent advocates for better safeguards. Last year he stepped down from an influential position at Google to speak more freely about the technology’s risks. “[AI] will be comparable with the industrial revolution,” he said during his telephone interview with the academy. “But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us, and it’s going to be wonderful in many respects.... But we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control.”

Artificial neural networks seek to emulate the brain’s cognitive function by using nodes with different values as stand-ins for neurons. These nodes form networks of connections, akin to the brain’s natural neural synapses, which can be made stronger or weaker through training on any arbitrary dataset. This adaptive response allows the artificial neural network to better recognize patterns within data and make subsequent predictions for the future—that is, to learn without being explicitly programmed.


“This Nobel recognizes physics inspired by biology and the broader field of biological physics,” says Ajay Gopinathan, a professor and biophysicist at the University of California, Merced. “And here this interface has led to some truly transformative advances in our understanding of these fields, as well as applications in computer science and AI.”

In the early 1980s Hopfield, now a professor emeritus at Princeton University, and his colleagues devised and refined an artificial neural network—the so-called Hopfield network—inspired by the physics of atomic spin. The method proved to be transformative for storing, retrieving and reconstructing patterns in a manner thought to mimic that of the human brain.

A Hopfield network’s operations can be imagined as balls rolling across a landscape of hills and valleys, where connections between nodes form topographic contours; the network is trained by finding values for those connections that minimize their energy differences. Describing the process in a 1987 edition of Scientific American, Hopfield and his co-author explained that the network “computes by following a path that decreases the computational energy until the path reaches the bottom of a valley, just as a raindrop moves downhill to minimize its gravitational potential energy.” The technique proved broadly applicable to a host of optimization problems—mathematical quandaries in which one ideal solution is selected from a very large number of possibilities.
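The rolling-ball picture can be made concrete in a few lines of code. The sketch below is a minimal Hopfield network, assuming the standard Hebbian storage rule and asynchronous updates; the eight-node pattern is a toy example, not one from the article:

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian rule: connection strengths are averaged outer products
    # of the stored +1/-1 patterns, with no self-connections.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def energy(W, state):
    # The "computational energy" the network descends during recall.
    return -0.5 * state @ W @ state

def recall(W, state, sweeps=5):
    # Asynchronous updates: each node flips toward lower energy, so
    # the state rolls downhill to the bottom of a valley.
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one pattern, then recover it from a corrupted copy.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train_hopfield(stored)
noisy = stored[0].copy()
noisy[:2] *= -1            # corrupt two of the eight nodes
restored = recall(W, noisy)
```

Here the corrupted pattern sits partway up a valley wall whose floor is the stored pattern, so recall rolls it back down to the memory.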

Hinton, now a professor emeritus at the University of Toronto, worked with his colleagues to advance Hopfield’s approach, making it the basis for a more sophisticated artificial neural network called the Boltzmann machine, which leveraged feedback between multiple node layers to infer statistical distributions of patterns from training data. Crucially, this more advanced artificial neural network could use “hidden” layers of nodes to catch and correct computational errors without prohibitive computational costs. Hinton’s method excels at pattern recognition and can be used, for example, to classify images or create novel elaborations of an observed pattern.
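Unlike a Hopfield network, a Boltzmann machine is stochastic: nodes switch on with a probability rather than deterministically. The sketch below shows the feedback loop between a visible layer and a hidden layer in a restricted variant; the layer sizes, random weights and training pattern are illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy restricted Boltzmann machine: a visible layer and a "hidden"
# layer of nodes, coupled by a weight matrix W (sizes are arbitrary).
n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))

def sample_hidden(v):
    # Probability that each hidden node turns on, given a visible pattern.
    p = sigmoid(v @ W)
    return (rng.random(n_hidden) < p).astype(float), p

def sample_visible(h):
    # Feedback step: regenerate a visible pattern from the hidden layer.
    p = sigmoid(h @ W.T)
    return (rng.random(n_visible) < p).astype(float), p

# One round of the feedback loop: clamp a training pattern, infer the
# hidden states, then reconstruct. Comparing data with reconstruction
# is what drives learning toward the data's statistical distribution.
v0 = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
h0, h_prob = sample_hidden(v0)
v1, v_prob = sample_visible(h0)
```

In training, the weights would be nudged so that reconstructions like `v1` come to resemble the training patterns, letting the hidden layer capture their statistics.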

Hinton summarized many of the approach’s core ideas and possible applications in a 1992 article for Scientific American, in which he predicted that biologically inspired machine learning would eventually lead to “many new applications of artificial neural networks.” Today the technique has helped fuel the ongoing explosion of progress in AI that is transforming myriad sectors of our society.

“Artificial neural networks mimic biological neurons in the sense that they take in pieces of information (analogs to chemical signals for a biological neuron), compute a weighted sum of these pieces of information (factoring in the significance of the inputs in the ‘decision-making’ process) and produce an output (an analog to a neuron firing or at rest),” says Jerome Delhommelle, an associate professor and machine-learning expert at the University of Massachusetts Lowell. “Machine-learning models can learn intricate interdependencies from data, make predictions on the ideal composition of materials for a given functionality and even discover as-yet-unknown governing equations in complex systems. Machine learning is poised to make great contributions to physics.”
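Delhommelle’s description maps directly onto a few lines of code. In this sketch the inputs, weights and bias are hypothetical numbers chosen purely for illustration:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals (each weight encodes the
    # significance of that input), then a squashing function: an
    # output near 1 is the analog of a neuron firing, near 0 at rest.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic activation

output = neuron(inputs=[0.5, 0.8, 0.2], weights=[1.2, -0.7, 0.3], bias=0.1)
```

Training a network amounts to adjusting those weights and biases across many such nodes until the outputs match the patterns in the data.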

Ellen Moons, a professor at Karlstad University in Sweden and chair of the Nobel Committee for Physics, described the promise and peril of these developments in remarks at a press conference in Stockholm on Tuesday. “The laureates’ discoveries and inventions form the building blocks of machine learning that can aid humans in making faster and more reliable decisions—for instance, when diagnosing medical conditions. However, while machine learning has enormous benefits, its rapid development has also raised concerns about our future. Collectively, humans carry the responsibility for using this new technology in a safe and ethical way for the greatest benefits of humankind.”

Lee Billings is a science journalist specializing in astronomy, physics, planetary science, and spaceflight, and is a senior editor at Scientific American. He is the author of a critically acclaimed book, Five Billion Years of Solitude: The Search for Life Among the Stars, which in 2014 won a Science Communication Award from the American Institute of Physics. In addition to his work for Scientific American, Billings's writing has appeared in the New York Times, the Wall Street Journal, the Boston Globe, Wired, New Scientist, Popular Science, and many other publications. A dynamic public speaker, Billings has given invited talks for NASA's Jet Propulsion Laboratory and Google, and has served as M.C. for events held by National Geographic, the Breakthrough Prize Foundation, Pioneer Works, and various other organizations.

Billings joined Scientific American in 2014, and previously worked as a staff editor at SEED magazine. He holds a B.A. in journalism from the University of Minnesota.