Sunday, August 18, 2024

Another advance in brain-to-machine communication

A SciAm article discusses what appears to be a significant advance in turning brain activity into coherent speech:
Brain-to-Speech Tech Good Enough for Everyday Use Debuts in a Man with ALS

A highly robust brain-computer interface boasts low error rates and a durability that allows a user to talk all day long
By July 2023, Casey Harrell, then age 45, had lost the ability to speak to his then four-year-old daughter. The neurodegenerative disorder amyotrophic lateral sclerosis (ALS) had gradually paralyzed him in the five years since his symptoms began. As the effects spread to the lips, tongue and jaw, his speech devolved into indistinct sounds that his daughter could not understand.


But a month after a surgery in which Harrell had four 3-by-3 millimeter arrays of electrodes implanted in his brain that July, he was suddenly able to tell his little girl whatever he wanted. The electrodes picked up the chatter of neurons responsible for articulating word sounds, or phonemes, while other parts of a novel brain-computer interface (BCI) translated that chatter into clear synthetic speech.
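To get a concrete sense of what "translating that chatter" involves, here is a minimal, purely illustrative Python sketch of a phoneme-decoding pipeline: binned neural features go into a classifier that outputs per-frame phoneme probabilities, which are collapsed into a phoneme sequence that a language model would then turn into words. All function names, array shapes, and the toy random-weight classifier are assumptions for illustration only; the study's actual system uses trained neural-network decoders and a large-vocabulary language model.

# A minimal, illustrative sketch (not the study's actual model): binned neural
# features -> per-frame phoneme probabilities -> collapsed phoneme sequence.
# All names, shapes, and the toy random-weight classifier here are hypothetical.
import numpy as np

N_ELECTRODES = 256   # the implant records from 256 intracortical electrodes (per the paper)
N_PHONEMES = 40      # hypothetical phoneme class count for this toy example
rng = np.random.default_rng(0)

def extract_features(spike_counts: np.ndarray) -> np.ndarray:
    """Z-score each electrode's binned firing rate across time frames (toy stand-in)."""
    mean = spike_counts.mean(axis=0, keepdims=True)
    std = spike_counts.std(axis=0, keepdims=True) + 1e-6
    return (spike_counts - mean) / std

def phoneme_probabilities(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy linear classifier + softmax; the real system trains a neural-network decoder."""
    logits = features @ weights                      # shape: (frames, N_PHONEMES)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

def collapse_frames(probs: np.ndarray) -> list[int]:
    """Take the most likely phoneme per frame and merge consecutive repeats."""
    best = probs.argmax(axis=1)
    return [int(p) for i, p in enumerate(best) if i == 0 or p != best[i - 1]]

# Simulated input only: 100 time frames of spike counts and random classifier weights.
spikes = rng.poisson(lam=3.0, size=(100, N_ELECTRODES)).astype(float)
weights = rng.normal(size=(N_ELECTRODES, N_PHONEMES))
phoneme_ids = collapse_frames(phoneme_probabilities(extract_features(spikes), weights))
print(f"Decoded {len(phoneme_ids)} phoneme tokens; a language model would then map these to words.")

In the real system the decoder is trained on the participant's own attempted-speech recordings, which is what the calibration sessions described in the paper's abstract (quoted below) provide.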
“She hadn’t had the ability to communicate very much with me for about two years. Now that is very different,” Harrell says, speaking through the device a year after the surgery. “I can help her mother to parent her. I can have a deeper relationship with her and tell her what I am thinking.”

His face contorts with emotion, and after a pause, he adds, “I can simply tell her how much I love her.”

Neuroscientist Sergey Stavisky and neurosurgeon David Brandman, both at the University of California, Davis, and their team described the new BCI on August 14 in the New England Journal of Medicine. Harrell isn’t the first person with paralysis to talk with his thoughts. But his BCI is easier to use and far less error-prone than similar devices that were announced a year ago. The improvements are such that Harrell can use the new BCI regularly to chat with colleagues, friends and family.

“It marks a landmark in the field of speech BCIs,” says Christian Herff, a computational neuroscientist at Maastricht University in the Netherlands, who was not involved in the study. “It has achieved a level of quality that is now of actual use for patients.” The device predicts the wrong word less than 3 percent of the time, an error rate on par with nondisabled speakers reading a paragraph aloud. “We can basically call it perfect,” Herff says.

After a year of use, Harrell has seen no decline in performance either. And the UC Davis team plans to implant the array in several more participants in the coming months to years. In the meantime, the researchers are adding bells and whistles to Harrell’s device, such as prosody—inflections in pitch and rhythm—and the ability to sing.

One feature Harrell already has is the ability to send text to his computer to write e-mails, including a few he sent to the author of this article. That exchange was, on its surface, unremarkable. He introduced himself, suggested times for his interview and expressed enthusiasm about the technology. His signature, however, showed there was nothing ordinary about these messages whatsoever. It read, “Sent from my 🧠.”
From the abstract of the team's paper in the New England Journal of Medicine:

Background: Brain–computer interfaces can enable communication for people with paralysis by transforming cortical activity associated with attempted speech into text on a computer screen. Communication with brain–computer interfaces has been restricted by extensive training requirements and limited accuracy. [training time has been a major impediment to widespread use of BCI tech]

Methods: A 45-year-old man with amyotrophic lateral sclerosis (ALS) with tetraparesis and severe dysarthria underwent surgical implantation of four microelectrode arrays into his left ventral precentral gyrus 5 years after the onset of the illness; these arrays recorded neural activity from 256 intracortical electrodes.

Results: On the first day of use (25 days after surgery), the neuroprosthesis achieved 99.6% accuracy with a 50-word vocabulary. Calibration of the neuroprosthesis required 30 minutes of cortical recordings while the participant attempted to speak, followed by subsequent processing. On the second day, after 1.4 additional hours of system training, the neuroprosthesis achieved 90.2% accuracy using a 125,000-word vocabulary. With further training data, the neuroprosthesis sustained 97.5% accuracy over a period of 8.4 months after surgical implantation, and the participant used it to communicate in self-paced conversations at a rate of approximately 32 words per minute for more than 248 cumulative hours.
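A note on those accuracy figures: word accuracy for speech decoders is conventionally reported as 100% minus the word error rate (WER), i.e., the word-level edit distance between what the decoder produced and what the speaker intended, divided by the length of the intended sentence. The excerpt does not spell out the study's exact scoring protocol, so the Python sketch below shows only the standard WER calculation, with a made-up sentence pair; it is not the authors' evaluation code.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein (edit) distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[-1][-1] / len(ref)

# Made-up sentence pair, just to show the arithmetic:
intended = "i can simply tell her how much i love her"
decoded  = "i can simply tell her how much i loved her"
wer = word_error_rate(intended, decoded)
print(f"WER = {wer:.1%}, word accuracy = {1 - wer:.1%}")   # WER = 10.0%, accuracy = 90.0%

By this convention, the reported 97.5% sustained accuracy corresponds to a word error rate of about 2.5 percent, consistent with the "less than 3 percent" figure quoted in the article above.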
