When it comes to talking, our brain does the heavy lifting. It subconsciously directs the complex coordination of lips, tongue, throat, and jaw we need to pronounce words. And it keeps directing, even in people whose paralysis leaves them unable to turn those commands into speech.

Now, scientists have harnessed this phenomenon to create brain implants that transform this neural activity into text with unprecedented speed and accuracy. In two new studies—both reported today in Nature—the devices enabled two people to “speak” for the first time in more than a decade. The implants produced speech from brain activity with about 75% accuracy and at a speed nearly half that of natural language—results far better than with any previous technology.

“It’s a game changer for the population that doesn’t have better options at this point,” says Vikash Gilja, an electrical engineer at the University of California (UC), San Diego, who was not involved in the studies. “We’re within striking range” of turning the technology into commercially viable medical devices, he adds.

Most previous attempts to develop brain-computer interfaces for speech have piggybacked off electrodes implanted in the brain to monitor seizures in people with epilepsy. The resulting speech was slow and error-prone, but the technology was promising enough for researchers to begin clinical trials in people unable to speak.

One group, led by neurosurgeon Edward Chang of UC San Francisco, published its first experiments with a paralyzed participant in 2021, reporting that he could produce sentences at a rate of up to 18 words per minute. Since then, the group has been improving the technology, doubling the number of electrodes in the implant and improving the algorithm used to predict words from brain signals.

In the new study, the team tested its system in a woman named Ann; 18 years ago, a stroke disrupted her brain’s ability to convey motor signals to the rest of her body.

The researchers placed arrays of electrodes, about the size of a lighter but paper-thin, over the surface of brain areas that control the muscles involved in speech. For 2 weeks, the scientists asked Ann to say words displayed on a screen while their algorithm learned to recognize which of her neural signals corresponded to 39 different phonemes—the sounds that make up words. The algorithm then tried to predict likely next words in sentences, much like ChatGPT does.
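
In code, that two-stage recipe—classify phonemes from windows of neural activity, then let a language-model prior pick the most plausible word—might look something like the minimal sketch below. Everything in it (the linear softmax classifier, the toy lexicon format, all function names) is an illustrative assumption, not the study's actual implementation:

```python
import numpy as np

# Minimal sketch, not the study's code: a linear softmax classifier stands in
# for the trained network, and a simple per-word prior stands in for the
# sentence-level language model.

N_PHONEMES = 39                                   # the 39 phoneme classes
PHONEMES = [f"ph{i}" for i in range(N_PHONEMES)]  # placeholder phoneme labels

def phoneme_probs(neural_window: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Map one window of neural features to a probability distribution
    over the 39 phonemes (softmax over a linear projection)."""
    logits = weights @ neural_window
    exp = np.exp(logits - logits.max())           # numerically stable softmax
    return exp / exp.sum()

def decode_word(prob_seq, lexicon, lm_logprob):
    """Score each candidate word by how well its phoneme sequence matches
    the classifier outputs, plus a language-model prior -- the
    'predict the likely next word' step described above."""
    best_word, best_score = None, -np.inf
    for word, phoneme_seq in lexicon.items():
        if len(phoneme_seq) != len(prob_seq):
            continue                              # toy alignment: equal lengths only
        match = sum(np.log(p[PHONEMES.index(ph)] + 1e-9)
                    for p, ph in zip(prob_seq, phoneme_seq))
        score = match + lm_logprob(word)
        if score > best_score:
            best_word, best_score = word, score
    return best_word
```

In the real systems, a trained neural network replaces the linear classifier, and the language model rescores whole sentences rather than single words.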

Given a choice of 1024 words, the algorithm was 95% accurate at matching Ann’s neural activity to the word she was most likely trying to pronounce, the team reports. The researchers predict that giving the program a broader vocabulary of 39,000 words to choose from would produce an accuracy rate of 72%. The algorithm was even able to use Ann’s neural signals to accurately determine words it hadn’t been specifically trained to recognize.

The translation was also much faster than with previous systems, reaching 78 words per minute. Natural speech runs at more than 150 words per minute, but a partially paralyzed person using small muscle movements to pick out letters might produce only a few words per minute.

Chang’s team also used neural recordings to predict Ann’s intended facial expressions and to control an avatar, which spoke in a voice synthesized from recordings of her speaking decades ago. Such an avatar might fit right in on a Zoom call, allowing a person to more naturally express their thoughts in near–real time, Chang says. In an interview with the researchers, Ann said she would like to become a counselor and that an avatar could help her put her clients at ease.

In a second study, neural prosthetics researcher Frank Willett of Stanford University and colleagues achieved very similar results using a different type of electrode array. This one, much smaller than the other implant, pokes farther into the brain to measure the firing of individual neurons at close range and high resolution. The researchers tested their system with Pat Bennett, who has had the neurodegenerative disease amyotrophic lateral sclerosis for 11 years and has lost the ability to control her facial muscles. They achieved 91% accuracy when Bennett attempted to read from a set of 50 words chosen to help express needs such as “thirsty” and “family.” When the word bank was expanded to 125,000 words, accuracy dipped to 76.2%.

Willett says it’s encouraging that the women’s brains didn’t lose the ability to generate speech signals after so many years. “It’s amazing that you still see this neural representation still preserved.” Implanting the devices in more patients will help the researchers determine how much the layout of this speech-controlling brain region differs from person to person, and how much the algorithm and implant need to be personalized.

Each system has pros and cons, Gilja says. The surface electrodes used by Chang’s group pick up less information from individual neurons, so the system relies more heavily on algorithms that predict entire sentences. That might mean longer lag times than with the more sensitive, penetrating electrodes Willett’s team uses, which allow the prediction of individual words in real time. But the needlelike electrodes may drift slightly over time and record from different neurons, necessitating updates to the algorithm, Willett notes.
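
To see why that drift forces maintenance, consider a hypothetical recalibration loop; the sketch below is an assumption for illustration, not taken from either paper. When accuracy on a short labeled calibration block falls below a floor, the decoder's feature-to-phoneme mapping is simply refit on the fresh data:

```python
import numpy as np

# Hypothetical recalibration loop illustrating the drift problem Willett
# notes; neither paper publishes this code. If shifted electrodes start
# recording from different neurons, old decoder weights stop fitting and
# must be refit on a freshly labeled calibration block.

def fit_decoder(features: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Least-squares linear map from neural features (n x d) to one-hot
    phoneme targets (n x k); a stand-in for retraining the real model."""
    return np.linalg.lstsq(features, targets, rcond=None)[0]

def maybe_recalibrate(weights, calib_features, calib_targets, floor=0.80):
    """Refit the decoder whenever accuracy on a calibration block drops
    below a chosen floor -- the 'updates to the algorithm' drift demands."""
    preds = (calib_features @ weights).argmax(axis=1)
    truth = calib_targets.argmax(axis=1)
    if (preds == truth).mean() < floor:
        weights = fit_decoder(calib_features, calib_targets)
    return weights
```

A production system would retrain the full network rather than a linear map, but the maintenance burden is the same idea.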

The two findings represent “a big leap,” says Alexander Huth, a neuroscientist and computer scientist at the University of Texas at Austin who was not involved with either study. The implanted systems are far more accurate than a nonimplanted device he developed earlier this year to decipher word meanings from higher-level brain activity instead of motor signals, he notes, although the latter may work better in people with damage to the brain regions that control movement.

People who would eventually use these technologies need to be part of the research process to guide how best to meet their needs, notes Melanie Fried-Oken, a speech-language pathologist at Oregon Health & Science University who serves as a consultant to the research consortium behind the technology Willett’s team used. Those who are unable to speak need to be able not only to relay basic needs, she says, but also to have meaningful, personal conversations, which might require more speed or accuracy.

Right now, these systems rely on wires that pass through the skull and connect to a processor, and a group of technicians must monitor and tweak the setup. Researchers are now working on ways for implants to transmit wirelessly to external devices. In the future, companies may be able to develop portable systems that synthesize speech and that users can control on their own, Gilja says. “That’s the future I’m excited for.”