One of the Biggest Problems in Biology Has Finally Been Solved

Google DeepMind CEO Demis Hassabis explains how the company's AlphaFold AI program predicted the 3-D structures of nearly every known protein

Art concept of Demis Hassabis's portrait. Credit: Joe Anderson

There’s an age-old adage in biology: structure determines function. To understand the function of the myriad proteins that perform vital jobs in a healthy body—or malfunction in a diseased one—scientists have to first determine these proteins’ molecular structure. But this is no easy feat: protein molecules consist of long, twisty chains of up to thousands of amino acids, chemical compounds that can interact with one another in many ways to take on an enormous number of possible three-dimensional shapes. Figuring out a single protein’s structure, or solving the “protein-folding problem,” can take years of finicky experiments.

But last year an artificial-intelligence program called AlphaFold, developed by the Alphabet-owned company DeepMind, predicted the 3-D structures of almost every known protein—about 200 million in all. DeepMind CEO Demis Hassabis and senior staff research scientist John Jumper were jointly awarded one of this year’s $3-million Breakthrough Prizes in Life Sciences for the achievement, which opens the door for applications that range from expanding our understanding of basic molecular biology to accelerating drug development.

DeepMind developed AlphaFold soon after its AlphaGo AI made headlines in 2016 by beating world Go champion Lee Sedol at the game. But the goal was always to develop AI that could tackle important problems in science, Hassabis says. DeepMind has made the predicted structures freely available in a public database, covering proteins from nearly every species for which amino acid sequences exist.


Scientific American spoke with Hassabis about developing AlphaFold, some of its most exciting potential applications and the ethical considerations of highly sophisticated AI.

[An edited transcript of the interview follows.]

Why did you decide to create AlphaFold, and how did you get to the point where it can now fold practically every known protein?

We pretty much started the project roughly the day after we came back from the AlphaGo match in Seoul, where we beat Lee Sedol, the world [Go] champion. I was talking to Dave Silver, the project lead on AlphaGo, and we were discussing “What’s the next big project that DeepMind should do?” I was feeling like it was time to tackle something really hard in science because we had just solved more or less the pinnacle of games AI. I wanted to finally apply the AI to real-world domains. That’s always been the mission of DeepMind: to develop general-purpose algorithms that could be applied across many, many problems. We started off with games because it was more efficient to develop AI and test things out in games for various reasons. But ultimately that was never the end goal. The end goal was to develop things like AlphaFold.

It’s been a mammoth project—about five or six years’ worth of work before CASP14 [the 14th Critical Assessment of Structure Prediction, a protein-folding competition]. We had an earlier version at the CASP13 competition, and that was AlphaFold 1. That was state of the art, you know, a good deal better than anyone had done before, and I think it was one of the first times that machine learning had been used as the core component of a system to try to crack this problem. That gave us the confidence to push it even further. We had to reengineer things for AlphaFold 2 and put a whole bunch of new ideas in there and also bring onto the team some more specialists—biologists and chemists and biophysicists who worked in protein folding—and combine them with our engineering and machine-learning team.

I’ve been working on and thinking about general AI for my entire career, even back at university. I tend to note down scientific problems I think one day could be amenable to the types of algorithms we build, and protein folding was right up there for me always, since the 1990s. I’ve had many, many biologist friends who used to go on about this to me all the time.

Were you surprised that AlphaFold was so successful?

Yeah, it was surprising, actually. It’s definitely been the hardest thing we’ve done, and I would also say the most complex system we’ve ever built. The Nature paper that describes all the methods, with the supplementary information and technical details, is 60 pages long. There are 32 different component algorithms, and each of them is needed. It’s a pretty complicated architecture, and it needed a lot of innovation. That’s why it took so long. It was important to have all these different inputs from different backgrounds and disciplines. And I think something we do uniquely well at DeepMind is mix that together—not just machine learning and engineering.

But there was a difficult period after AlphaFold 1. We first tried to push AlphaFold 1 to the maximum. And we realized about six months after CASP13 that it was not going to reach the atomic accuracy we wanted to actually solve the problem and be useful to experimentalists and biologists. So I made the decision that we needed to go back to the drawing board and take the knowledge we had acquired, including where it worked and where it didn’t work, and then see if we could go back to almost a brainstorming stage with that experience and that knowledge and come up with a whole bunch of new ideas and new architectures. We did that, and ultimately that worked.

But for about six months to a year after that reset, things got worse, not better. The AlphaFold 2 system, the early one, was much worse than AlphaFold 1. It can be very scary during the period where you seem to be going backward in terms of accuracy. Fortunately, that’s where our experience in games and all the other AI systems we built before came into play. I’d seen us go through that valley of death and then get out the other side.

Can you explain, on a very simple level, how AlphaFold works?

It’s a pretty complicated thing, and there’s a lot we don’t know for sure. It’s clear that AlphaFold 2 is learning something implicit about the structure of chemistry and physics. It sort of knows what things might be plausible. It’s learned that through seeing real protein structures, the ones that we know of. But one of the innovations we had was to do something called self-distillation, which is to get an early version of AlphaFold 2 to predict lots of structures—and to predict the confidence level in those predictions. The predictions it was most confident about then went back into the training data.
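
In machine-learning terms, self-distillation means training a model partly on its own confident predictions. The Python sketch below shows the shape of that loop; train_model, predict_with_confidence and the 0.9 cutoff are illustrative assumptions, not DeepMind's actual pipeline.

```python
# Minimal sketch of a self-distillation loop (illustrative only:
# train_model and predict_with_confidence are hypothetical stand-ins
# for the real AlphaFold training and inference code).

CONFIDENCE_CUTOFF = 0.9  # assumed threshold; the real cutoff is a design choice


def self_distill(known_structures, unlabeled_sequences,
                 train_model, predict_with_confidence):
    # 1. Train an initial model on experimentally solved structures.
    model = train_model(known_structures)

    # 2. Predict structures for sequences with no solved structure,
    #    keeping only the predictions the model itself rates highly.
    pseudo_labels = []
    for seq in unlabeled_sequences:
        structure, confidence = predict_with_confidence(model, seq)
        if confidence >= CONFIDENCE_CUTOFF:
            pseudo_labels.append((seq, structure))

    # 3. Retrain on the real structures plus the model's own
    #    confident predictions.
    return train_model(known_structures + pseudo_labels)
```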

One of the things we built in was this understanding of chemical bond angles, as well as evolutionary history, using a process called multisequence alignment. These bring in some constraints, which help to narrow the search space of possible protein structures. The search space is too huge to solve it by brute force. But obviously real-world physics solves this somehow because proteins fold up in nanoseconds or milliseconds. Effectively, we’re trying to reverse engineer that process by learning from the output examples. I think AlphaFold has captured something quite deep about the physics and the chemistry of molecules.
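
One intuition for why evolutionary history constrains structure: positions that mutate together across species tend to sit close together in the folded protein, so co-varying columns of a multisequence alignment hint at spatial contacts. The toy Python sketch below illustrates the idea with a made-up five-sequence alignment and mutual information as the co-variation signal; it is not how AlphaFold itself processes alignments.

```python
from collections import Counter
from itertools import combinations
from math import log2

# Tiny made-up alignment of one protein across five species
# (rows = sequences, columns = positions).
msa = [
    "MKVLA",
    "MRVLA",
    "MKILA",
    "MRILA",
    "MKVLG",
]


def mutual_information(col_i, col_j):
    """Co-variation between two alignment columns, in bits."""
    n = len(col_i)
    pi = Counter(col_i)
    pj = Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    return sum(
        (c / n) * log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
        for (a, b), c in pij.items()
    )


columns = list(zip(*msa))  # transpose: sequences -> positions
for i, j in combinations(range(len(columns)), 2):
    mi = mutual_information(columns[i], columns[j])
    if mi > 0:
        print(f"positions {i} and {j} co-vary "
              f"(MI = {mi:.2f} bits) -> possible 3-D contact")
```

Real pipelines use thousands of aligned sequences and statistical corrections for noise and shared ancestry, but the principle is the same: co-variation narrows the space of plausible folds.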

The fascinating thing about AI in general is that it’s kind of a black box. But ultimately it seems like it’s learning actual rules about the natural world.

Yeah, it’s almost learning about it in an intuitive sense. I think we’ll have more and more researchers looking at protein areas that AlphaFold is not good at predicting and asking, “Are they actually disordered in biology when the protein doesn’t have a clear shape, when it’s not interacting with something?” About 30 percent of proteins [from organisms with a nucleus] are thought to be disordered. A lot of those kinds of proteins are implicated in disease, such as neurodegeneration, because they might get tangled. And you can see how they might do so because they’re just sort of floppy strings rather than forming structures.

The other extremely important thing we did with AlphaFold 2, which we don’t normally do with machine-learning systems, was to output a confidence measure on every single amino acid, because we wanted a biologist to be able to know which parts of the prediction they could rely on without needing to understand anything about the machine learning.
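
That per-residue score (called pLDDT, on a 0 to 100 scale) ships with every model in the public AlphaFold database, stored in the B-factor column of the coordinate file. Here is a short sketch of how a biologist might flag low-confidence regions using Biopython; the filename is a hypothetical accession, and the 70-point cutoff follows the database's own "low confidence" band.

```python
# Flag low-confidence residues in an AlphaFold model.
# pLDDT (0-100) is stored in the B-factor column of AlphaFold PDB files.
# Requires: pip install biopython. The filename below is hypothetical.
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
structure = parser.get_structure("model", "AF-P12345-F1-model_v4.pdb")

for residue in structure.get_residues():
    # Every atom in a residue carries the same pLDDT,
    # so reading it from the first atom is enough.
    plddt = next(residue.get_atoms()).get_bfactor()
    if plddt < 70:  # below 70 is "low confidence" in AlphaFold's scheme
        print(f"{residue.get_resname()} {residue.get_id()[1]}: "
              f"pLDDT {plddt:.1f} (treat with caution)")
```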

What are some of AlphaFold’s most exciting applications?

We have a lot of nice case studies from partners—early adopters—that have had a year to work with AlphaFold. They’re doing an incredibly diverse set of things, from addressing antibiotic resistance to tackling plastic pollution by designing plastic-eating enzymes. I’ve been talking to [CRISPR gene-editing pioneer] Jennifer Doudna about alfalfa crop sustainability—her team is trying to engineer crops to be a bit more sustainable in the face of climate change.

There’s also lots of very cool fundamental research being done with it. There was an entire special issue of Science on how scientists solved the structure of the nuclear pore complex. This assembly of proteins, which spans the nuclear envelope of eukaryotic cells, is one of the biggest protein complexes in the body. Several groups solved it at the same time from the cryo-EM [cryogenic electron microscopy] data—but they all needed AlphaFold predictions to augment those data in some places. So a combination of experimental structural data with AlphaFold turns out to be a boon to structural biologists, which we weren’t necessarily expecting.

And then in practical terms, almost every pharma company we’ve talked to is using AlphaFold. We’ll probably never know what the full impacts are because obviously they keep that proprietary. But I like to think we’ve helped accelerate drug development and cures for diseases by a few years.

There’s been a lot of hype around AI and everything it can do, especially for science and medicine. But AlphaFold seems to have a clear benefit.

I mean, it’s for you to decide. But I would say I’ve had a lot of people tell me that it’s the most concrete, useful case of AI doing something in science. I like the fact that we’re delivering on the promise of AI. I mean, you could say “hype,” but we try and let our work speak for itself.

I remember when we started in 2010, hardly anyone was working on AI. And 12 years later it seems like everyone and their dog is talking about it. And in most cases, as I’m sure you have to sift through all the time, they don’t know what AI even is, or they’re misusing the term, or what’s going on is not that impressive. But I think AlphaFold is a very good proof of concept or role model of what could happen. And I think we’re going to see much more of that in the next decade—of AI helping to genuinely accelerate some scientific breakthroughs—and we hope to be part of a lot more. We think it’s just the beginning.

AI has been in the news a lot lately, whether for producing intelligent language or creating digital art. As AI becomes a bigger part of our lives, how should we think about its consequences?

We at DeepMind have our own internal versions of large language models and text-to-image systems, and we’ll probably be releasing some of them at some point [in 2023]. It’s really interesting seeing the explosion of developments. AlphaFold, obviously, is huge in the scientific community. But with language and image AIs, it’s starting to break through into the mainstream. Everyone, of course, knows about language and can appreciate images—you don’t have to have any scientific expertise.

But I think we should always be thinking about the ethical issues, and that’s one reason we haven’t released our language-based AI yet. We’re trying to be responsible about really checking what these models can do—how they can go off the rails, what happens if they’re toxic, all of these things that are currently top of mind. It’s our view that some of these systems are not ready to release to the general public, at least not unrestricted. But at some point, that’s going to happen. We have this phrase at DeepMind of “pioneering responsibly.” And for me, that’s about applying the scientific method to analyzing and building these systems. I think often, especially in Silicon Valley, there’s this sort of hacker mentality that “we’ll just hack it and put it out there and then see what happens.” I think that’s exactly the wrong approach for technologies as impactful and potentially powerful as AI.

I’ve worked on AI my entire life because I think it’s going to be the most beneficial thing ever for humanity, for things like curing diseases, helping with climate change, all of this stuff. But it’s a dual-use technology: it depends on how, as a society, we decide to deploy it—and what we use it for.

And I think we’re seeing the consequences of that with social media.

How is AI being used—or misused—in social media?

It’s not proper AI; it’s more statistical algorithms. But we’ve seen the unintended consequences for democracies. Probably the people who created the social media platforms did not have bad intent when they started; it just kind of got out of hand. But we want to make sure that we think about those knock-on effects early, before going, “Oh, oops, this happened,” and then trying to bolt the barn door after the horses have left. We shouldn’t do that with powerful technologies. We can take some inspiration and advice from, for example, CRISPR and other technologies. I think there are things we can learn from other scientific communities that have tackled these questions.

Tanya Lewis is a senior editor covering health and medicine at Scientific American. She writes and edits stories for the website and print magazine on topics ranging from COVID to organ transplants. She also appears on Scientific American's podcast Science, Quickly and writes Scientific American's weekly Health & Biology newsletter. She has held a number of positions over her seven years at Scientific American, including health editor, assistant news editor and associate editor at Scientific American Mind. Previously, she wrote for outlets including Insider, Wired and Science News. She has a degree in biomedical engineering from Brown University and one in science communication from the University of California, Santa Cruz. Follow her on Bluesky @tanyalewis.bsky.social

This article was originally published with the title “The AI Biologist” in Scientific American Magazine Vol. 328 No. 2 (February 2023), p. 28.
doi:10.1038/scientificamerican0223-28