When I started writing about science decades ago, artificial intelligence seemed ascendant. IEEE Spectrum, the technology magazine for which I worked, produced a special issue on how AI would transform the world. I edited an article in which computer scientist Frederick Hayes-Roth predicted that AI would soon replace experts in law, medicine, finance and other professions.
That was in 1984. Not long afterward, the exuberance gave way to a slump known as an “AI winter,” when disillusionment set in and funding declined. Years later, doing research for my book The Undiscovered Mind, I tracked Hayes-Roth down to ask how he thought his predictions had held up. He laughed and replied, “You’ve got a mean streak.”
AI had not lived up to expectations, he acknowledged. Our minds are hard to replicate, because we are “very, very complicated systems that are both evolved and adapted through learning to deal well and differentially with dozens of variables at one time.” Algorithms that can perform a specialized task, like playing chess, cannot be easily adapted for other purposes. “It is an example of what is called nonrecurrent engineering,” Hayes-Roth explained.
That was 1998. Today, according to some measures, AI is booming once again. Programs such as voice and face recognition are embedded in cell phones, televisions, cars and countless other consumer products. Clever algorithms help me choose a Christmas present for my girlfriend, find my daughter’s building in Brooklyn and gather information for columns like this one. Venture-capital investments in AI doubled between 2017 and 2018 to $40 billion, according to WIRED. A PwC study estimates that by 2030 AI will boost global economic output by more than $15 trillion, “more than the current output of China and India combined.”
In fact, some observers fear that AI is moving too fast. New York Times columnist Farhad Manjoo calls an AI-based reading and writing program, GPT-3, “amazing, spooky, humbling and more than a little terrifying.” Someday, he frets, he might be “put out to pasture by a machine.” Neuroscientist Christof Koch has suggested that we might need computer chips implanted in our brains to help us keep up with intelligent machines.
Elon Musk made headlines in 2018 when he warned that “superintelligent” AI, much smarter than we are, represents “the single biggest existential crisis that we face.” (Really? Worse than climate change? Nuclear weapons? Psychopathic politicians? I suspect that Musk, who has invested in AI, is trying to promote the technology with his over-the-top fearmongering.)
Experts are pushing back against the hype, pointing out that many alleged advances in AI are based on flimsy evidence. Last January, for example, a team from Google Health claimed in Nature that their AI program had outperformed humans in diagnosing breast cancer. In October, a group led by Benjamin Haibe-Kains, a computational genomics researcher, criticized the Google Health paper, arguing that the “lack of details of the methods and algorithm code undermines its scientific value.”
Haibe-Kains complained to Technology Review that the Google Health report is “more an advertisement for cool technology” than a legitimate, reproducible scientific study. The same is true of other reported advances, he said. Indeed, artificial intelligence, like biomedicine and other fields, has become mired in a replication crisis. Researchers make dramatic claims that cannot be tested, because the underlying algorithms, especially those developed in industry, are not disclosed. One recent review found that only 15 percent of AI studies shared their code.
There are also signs that investments in AI are not paying off. Technology analyst Jeffrey Funk recently examined 40 start-up companies developing AI for health care, manufacturing, energy, finance, cybersecurity, transportation and other industries. Many of them were not “nearly as valuable to society as all the hype would suggest,” Funk reports in IEEE Spectrum. Advances in AI “are unlikely to be nearly as disruptive—for companies, for workers, or for the economy as a whole—as many observers have been arguing.”
Science reports that “core progress in AI has stalled in some fields,” such as information retrieval and product recommendation. A study of algorithms used to improve the performance of neural networks found “no clear evidence of performance improvements over a 10-year period.”
The longstanding goal of “general” artificial intelligence, possessing the broad knowledge and learning capacity to solve a variety of real-world problems, as humans do, remains elusive. “We have machines that learn in a very narrow way,” Yoshua Bengio, a pioneer in the AI approach called deep learning, recently complained in WIRED. “They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes.”
Writing in The Gradient, an online magazine devoted to tech, AI entrepreneur and writer Gary Marcus accuses AI leaders as well as the media of exaggerating the field’s progress. AI-based autonomous cars, fake news detectors, diagnostic programs and chatbots have all been oversold, Marcus contends. He warns that “if and when the public, governments, and investment community recognize that they have been sold an unrealistic picture of AI’s strengths and weaknesses that doesn't match reality, a new AI winter may commence.”
Another AI veteran and writer, Erik Larson, questions the “myth” that one day AI will inevitably equal or surpass human intelligence. In The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, scheduled to be released by Harvard University Press in April, Larson argues that “success with narrow applications gets us not one step closer to general intelligence.”
Larson says “the actual science of AI (as opposed to the pseudoscience of Hollywood and science fiction novelists) has uncovered a very large mystery at the heart of intelligence, which no one currently has a clue how to solve. Put bluntly: all evidence suggests that human and machine intelligence are radically different. And yet the myth of inevitability persists.”
When I first started writing about science, I believed the myth of AI. One day, surely, researchers would achieve the goal of a flexible, supersmart, all-purpose artificial intelligence, like HAL. Given rapid advances in computer hardware and software, it was only a matter of time. And who was I to doubt authorities like Marvin Minsky?
Gradually, I became an AI doubter, as I realized that our minds, in spite of enormous advances in neuroscience, genetics, cognitive science and, yes, artificial intelligence, remain as mysterious as ever. Here’s the paradox: machines are becoming undeniably smarter (and humans, it seems lately, more stupid), and yet machines will never equal, let alone surpass, our intelligence. They will always remain mere machines. That’s my guess, and my hope.
Further Reading:
How Would AI Cover an AI Conference?
Do We Need Brain Implants to Keep Up with Robots?
The Many Minds of Marvin Minsky (R.I.P.)
The Singularity and the Neural Code