Brain-to-Speech Tech Good Enough for Everyday Use Debuts in a Man with ALS

A highly robust brain-computer interface boasts low error rates and a durability that allows a user to talk all day long

Casey Harrell, who has lost his ability to speak due to ALS, using the BrainGate2 brain-computer interface while seated in his mobility device.

University of California Regents

By July 2023, Casey Harrell, then age 45, had lost the ability to speak to his then four-year-old daughter. The neurodegenerative disorder amyotrophic lateral sclerosis (ALS) had gradually paralyzed him in the five years since his symptoms began. As the effects spread to the lips, tongue and jaw, his speech devolved into indistinct sounds that his daughter could not understand.

But a month after a surgery in which Harrell had four 3-by-3 millimeter arrays of electrodes implanted in his brain that July, he was suddenly able to tell his little girl whatever he wanted. The electrodes picked up the chatter of neurons responsible for articulating word sounds, or phonemes, while other parts of a novel brain-computer interface (BCI) translated that chatter into clear synthetic speech.

“She hadn’t had the ability to communicate very much with me for about two years. Now that is very different,” Harrell says, speaking through the device a year after the surgery. “I can help her mother to parent her. I can have a deeper relationship with her and tell her what I am thinking.”

His face contorts with emotion, and after a pause, he adds, “I can simply tell her how much I love her.”

Neuroscientist Sergey Stavisky and neurosurgeon David Brandman, both at the University of California, Davis, and their team described the new BCI on August 14 in the New England Journal of Medicine. Harrell isn’t the first person with paralysis to talk with his thoughts. But his BCI is easier to use and far less error-prone than similar devices that were announced a year ago. The improvements are such that Harrell can use the new BCI regularly to chat with colleagues, friends and family.

“It marks a landmark in the field of speech BCIs,” says Christian Herff, a computational neuroscientist at Maastricht University in the Netherlands, who was not involved in the study. “It has achieved a level of quality that is now of actual use for patients.” The device predicts the wrong word less than 3 percent of the time, an error rate on par with nondisabled speakers reading a paragraph aloud. “We can basically call it perfect,” Herff says.

The system also maintains its performance throughout hours of use. “We did a lot of engineering and a lot of testing and a lot of small innovations to make this work all day reliably,” says the study’s lead author Nicholas Card, a postdoctoral fellow in Stavisky’s and Brandman’s lab. Harrell estimates he employs the BCI for 70 hours per week. “I’m what they call a power user,” he says.

Harrell is also the only user so far. But the success of the experiment cracks open the door to more widespread availability of neuroprostheses for speech difficulties resulting from paralysis, says Edward Chang, a neurosurgeon at the University of California, San Francisco, who is a pioneer in the field of speech neuroprostheses but was not involved in the new work. “It’s one important step forward toward making this a clinical reality,” a goal that seemed like science fiction five to 10 years ago, he adds.

Significant practical hurdles still stand in the way of speech BCIs becoming a realistic medical option. And questions remain about what the implant’s long-term viability is and how well the results will translate to individuals with more advanced paralysis. From Harrell’s perspective, however, there is more than enough upside to support wider use. When asked how the technology had benefited him, he flashes an impish smile and quips, “How long do you have?”

Harrell’s BCI is part of a large, ongoing clinical trial run by a consortium called BrainGate. Since 2004 individuals in the trial have, one by one, tested out the latest iteration of the technology, which is broadly aimed at restoring or replacing lost functions—the ability to type or drink from a cup—in people paralyzed by accidents, strokes or conditions such as ALS.

Participants have chips like Harrell’s embedded in the outer layer of their brain. The type of chip used in the trial, called a Utah Array, connects directly with brain tissue and reads the signals of individual neurons or small groups of neurons—typically, those in the motor cortex, a part of the brain that directs body movements. Machine-learning algorithms analyze the signals and translate them into the movement of, say, a cursor or a robotic arm. The basic setup is common to all BCIs. In another trial, for example, one man with such an implant used his thoughts to move a robotic arm to shake the hand of then U.S. president Barack Obama in 2016.

Neuralink, the company founded by Elon Musk, has developed a brain chip with 1,024 electrodes, compared with up to about 100 in a Utah Array (64 in the case of Harrell’s arrays). The Neuralink chip similarly makes contact with individual neurons, though its larger number of electrodes likely provides richer input to a decoder. A man named Noland Arbaugh, whose limbs were paralyzed in a swimming accident, has been using the chip to write e-mails, surf the web and play video games with signals from his brain, though the device does not produce speech. (Arbaugh can speak.)

The first brain-to-speech decoder to work in a person with speech paralysis surfaced in 2021, offering a vocabulary of 50 words. Then in August 2023 a woman with ALS whose speech, like Harrell’s, had become unintelligible gained access to a 125,000-word vocabulary using a BCI that records brain activity from Utah Arrays as part of the BrainGate trial. Harrell’s BCI offers a similar vocabulary, which is more than twice that of an average college-educated adult. Using a different system developed by Chang’s team, a woman who had been severely paralyzed by a stroke could direct an avatar to voice her words. Her device, which also debuted in August 2023, had a potential vocabulary of 30,000 terms.

Both of the speech neuroprostheses described last year had an error rate of around 25 percent, however, which limits their usefulness. “When you’re getting one of every four words wrong, a sentence quickly becomes difficult to understand,” Card says.

One reason Harrell’s device may be more accurate is that it has more electrodes. It draws information from four electrode arrays, for a total of 256 electrodes. That is twice as many as in the speech BCI used by the woman with ALS described in 2023, which employs the same chip technology. Machine-learning advances also play a role. The algorithms used to translate Harrell’s speech continuously recalibrate so that the decoder’s performance does not decline over the course of the day. “A big challenge in general with BCIs is that the signals that we’re recording can change on the order of minutes to hours,” says Jennifer Collinger, a neural engineer and associate professor at the University of Pittsburgh, who was not involved in the new study. Being able to update the system to account for those instabilities, she says, is “a really important design priority.”

Another priority for the team that developed Harrell’s BCI was a design that would enable a quick break-in period for a new user. “You have to move quickly to help these people,” Brandman says. To speed things up after the implant, the researchers tested their decoder on a biologically plausible computer model of how the brain might encode speech, broken down into its component frequencies. So when the team turned on the BCI for the first time, it began translating Harrell’s speech within half an hour. “The system works from day one,” Collinger says.

The words Harrell expresses with the device are spelled out on a screen before they are said out loud. When Harrell saw a word he wanted to say appear for the first time, he cried with joy, as did members of his family. On the second day of testing, Harrell spoke to his daughter, who happened to be dressed as a cheetah. “I’m looking for a cheetah,” her dad told her.

To use the BCI, Harrell doesn’t just think about what he wants to say. He attempts to speak the words, and that attempted movement activates the arrays, which read from the part of his motor cortex that commands mouth and jaw muscles. (Harrell’s lips tend to move as he operates the device.) The output from the arrays is sent to the decoding software, which matches the signals to phonemes, combines the phoneme sequences into words and, from there, builds sentences.
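The system’s actual decoder is far more sophisticated, but the final phoneme-to-word step described above can be sketched in a few lines of Python. Everything here is invented for illustration: the mini lexicon, the simulated classifier scores and the function names are assumptions, not the study’s software.

```python
# Toy sketch of a phoneme-based decoding step: pick the most probable
# phoneme from each frame of (simulated) neural-classifier scores,
# then look the phoneme sequence up in a small pronunciation lexicon.

PHONEME_LEXICON = {  # hypothetical mini-dictionary (ARPAbet-style symbols)
    ("HH", "AY"): "hi",
    ("K", "AE", "T"): "cat",
    ("CH", "IY", "T", "AH"): "cheetah",
}

def decode_phonemes(neural_frames):
    """Return the highest-scoring phoneme in each frame.

    Each frame is a dict mapping candidate phonemes to scores, standing
    in for a classifier's output over electrode-array recordings."""
    return tuple(max(frame, key=frame.get) for frame in neural_frames)

def phonemes_to_word(phonemes):
    """Match a decoded phoneme sequence to a dictionary word."""
    return PHONEME_LEXICON.get(phonemes, "<unknown>")

# Simulated classifier scores for the word "cheetah"
frames = [
    {"CH": 0.9, "SH": 0.1},
    {"IY": 0.8, "IH": 0.2},
    {"T": 0.7, "D": 0.3},
    {"AH": 0.6, "AA": 0.4},
]
print(phonemes_to_word(decode_phonemes(frames)))  # prints "cheetah"
```

A real decoder would score whole sequences against a large vocabulary and a language model rather than matching exact phoneme strings, which is part of how errors stay below 3 percent.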

Over the months that followed his first use of the system, Harrell spoke with it daily, both repeating sentences the researchers gave him and speaking spontaneously. On the second day of testing, the vocabulary available to Harrell expanded to 125,000 words, and from there, the system’s accuracy improved. In addition, the synthetic voice was made to match Harrell’s before his illness. When Harrell speaks now, it sounds like him.

Harrell is still working as an environmental activist more than five years into the disease. And he says the BCI is one reason why. It is many times faster than the communication methods he was using before, which included a head-controlled mouse and the help of someone who could interpret his vocalizations. “The participant is actually using it at home for communication,” says Nick Ramsey, a cognitive neuroscientist at the University Medical Center Utrecht in the Netherlands, who was not involved in the research. “For speech decoding, that has not been shown before.”

Harrell is so happy with his device that he is eager for others to have one. “I want people who are suffering now to have the option to have the technology now because I think that it is good enough now,” he says. “If it is good enough for us to have this conversation without any more help, I think it can help people today.”

There are obstacles to that goal, however. Unlike the Neuralink BCI, the U.C. Davis technology is not wireless. Two ports on Harrell’s head sprout cables that transmit data from the arrays in his brain to four computers on carts—which also makes the system far from portable. And the ports require care because they are potential sites for infection. For now, a sizable team is also required for each BCI recipient, including a neurosurgeon who knows how to insert the arrays.

Another question is whether Harrell’s system will work for people whose speech paralysis is more advanced or does not result from ALS. Harrell still has some ability to move his mouth and make sounds. “How much residual function someone has may be very important for the function of this,” Chang says. “You can’t extrapolate this necessarily to everyone who is paralyzed, especially people who have more severe paralysis.”

There is also a debate among BCI researchers about the electrode array technology used to detect brain signals. Some are wary of arrays that are embedded in brain tissue. The brain often reacts to the foreign material by building scar tissue around it, reducing the quality of the signal—and parts of the array may also degrade over time. “The brain doesn’t like needles being stuck into it,” Ramsey says. In some people, Utah Arrays have lasted as long as six years, but in others, their output decays much faster, and replacing them is risky. “Imagine you had one of these placed, and six months later, it’s not working well,” Chang warns.

For his team’s speech BCI, Chang used a less invasive technology: small disks called electrocorticography (ECoG) arrays that rest on the brain’s surface without penetrating the tissue. Unlike Utah Arrays, ECoG arrays do not read signals from single neurons but detect fuzzier patterns that reflect the output of thousands of brain cells. The less precise input is thought to limit their capabilities, yet they enabled the decoding of speech in Chang’s experiment. They were also recently used in a BCI that restored walking in a paralyzed man.

And in a separate paper in the same issue of the New England Journal of Medicine, Ramsey and his colleagues report that a small ECoG array implanted in a woman with ALS who was almost totally paralyzed worked for more than seven years, allowing her to click through menus on a computer, switch on a television and call a caregiver. She depended fully on the device to communicate for the last four years, until a loss of brain tissue from her condition rendered her unable to control the BCI. “That’s showing that with different types of electrodes, you can have a system that keeps on working for many years,” Ramsey says.

After a year of use, Harrell has seen no decline in performance either. And the U.C. Davis team plans to implant the arrays in several more participants in the coming months to years. In the meantime, the researchers are adding bells and whistles to Harrell’s device, such as prosody—inflections in pitch and rhythm—and the ability to sing.

One feature Harrell already has is the ability to send text to his computer to write e-mails, including a few he sent to the author of this article. That exchange was, on its surface, unremarkable. He introduced himself, suggested times for his interview and expressed enthusiasm about the technology. His signature, however, showed there was nothing ordinary about these messages whatsoever. It read, “Sent from my 🧠.”