    Brain implants provide hope for those unable to speak

    Restoring the power of speech to those who have lost it through illness or accident is becoming an ever more plausible prospect, researchers say, based on encouraging results from two brain implant studies.

    Pat Bennett, 68, was a dynamic and sporty human resources senior executive before being diagnosed more than a decade ago with amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease, a neurological disorder caused by damage to the nerve cells that carry signals between the brain, the spinal cord and the rest of the body.

    The ailment, which attacks the neurons controlling movement, is neurodegenerative and progressively destroys a patient’s ability to move, eventually leading to paralysis.

    Pat started out experiencing difficulty in enunciating words, then eventually lost the ability to speak entirely.

    But important advances are being made in tackling such disorders through implants.

    The journal Nature reported Wednesday that researchers from Stanford University’s department of neuroscience implanted four small silicon arrays, each carrying 64 micro-electrodes, into Pat’s brain in March last year.

    Penetrating a mere 1.5 millimetres into the cerebral cortex, they record electrical signals produced by the areas of the brain that are linked to the production of language.

    The signals produced are conveyed outside the skull via a bundle of cables and processed by an algorithm.

    ‘Fluid conversation’

    Over four months the system “learned” to interpret the signals’ meanings by associating them with phonemes – units of sound that distinguish one word from another – and processing them with the help of a language model.
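
    In rough outline, that decoding pipeline has two stages: a classifier turns short windows of electrode activity into phoneme probabilities, and a language model then picks the most plausible words from those probabilities. The Python sketch below is purely illustrative; the phoneme inventory, vocabulary, random stand-in classifier and uniform language-model prior are all made-up assumptions, not the Stanford team’s actual code.

        import numpy as np

        # Hypothetical phoneme inventory and vocabulary, for illustration only.
        PHONEMES = ["HH", "EH", "L", "OW", "W", "ER", "D"]
        VOCAB = {"hello": ["HH", "EH", "L", "OW"], "world": ["W", "ER", "L", "D"]}

        def classify_phonemes(neural_features):
            """Stand-in for the trained decoder: map each time window of
            electrode features to a probability distribution over phonemes."""
            logits = neural_features @ np.random.randn(neural_features.shape[1], len(PHONEMES))
            exp = np.exp(logits - logits.max(axis=1, keepdims=True))
            return exp / exp.sum(axis=1, keepdims=True)

        def language_model_prior(word):
            """Stand-in for a language model: here, a uniform prior over the vocabulary."""
            return 1.0 / len(VOCAB)

        def decode_word(neural_features):
            """Score each vocabulary word by how well its phoneme sequence matches
            the decoder output, weighted by the language-model prior."""
            probs = classify_phonemes(neural_features)
            best_word, best_score = None, float("-inf")
            for word, phones in VOCAB.items():
                n = min(len(phones), probs.shape[0])
                match = np.mean([probs[i, PHONEMES.index(p)] for i, p in enumerate(phones[:n])])
                score = np.log(match + 1e-9) + np.log(language_model_prior(word))
                if score > best_score:
                    best_word, best_score = word, score
            return best_word

        # Example: random numbers standing in for features from 256 electrode channels.
        features = np.random.randn(4, 256)   # 4 time windows x 256 channels
        print(decode_word(features))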

    “With these new studies it is now possible to imagine a future where we can restore fluid conversation with someone with paralysis,” Frank Willett, Stanford professor and co-author of the study, told reporters.

    Using her brain-computer interface (BCI) machine, Bennett can speak via a screen at more than 60 words a minute.

    That is short of the 150 to 200 words per minute for a standard conversation, but still more than three times faster than the previous machine-aided mark from 2021, when the Stanford team took charge of her case.

    Moreover, the error rate for a 50-word vocabulary has dropped to below 10% from 20% previously.

    Avatar

    In a second test, Edward Chang, chair of neurological surgery at the University of California San Francisco, and his team used a device consisting of a thin strip of 253 electrodes placed on the surface of the cortex.

    Its performance proved comparable to that of the Stanford team’s system, achieving a median of 78 words per minute, five times faster than before.

    It was a major leap forward for the patient, paralysed since suffering a brainstem haemorrhage, who had previously been able to communicate at a maximum of only 14 words per minute using a technique that relied on interpreting head movements.

    In both tests, the error rate rises to around 25% when patients use a vocabulary extending to thousands of words.

    What sets Chang’s system apart is that it analyses signals emitted not only in brain areas directly linked to language but also more broadly across the sensorimotor cortex.

    That region covers the brain’s primary sensory and motor areas, which activate the facial and oral muscles to produce sounds.

    “About five to six years ago we really started to understand the electrical patterns that give rise to the movements of the lips, jaw and tongue that allow us to produce the specific sounds of each individual consonant and vowel, and words,” Chang said.

    His team’s brain-machine interface produces language in the form of text but also via a synthetic voice and an avatar that reproduces a patient’s facial expressions when they speak.

    “Speech isn’t just about communicating just words but also who we are — our voice and expression are also parts of our identity,” Chang said.

    The team is now seeking to come up with a wireless version of the device, which would have “profound implications” for a patient’s independence and social interactions, according to David Moses, a co-author of the study and adjunct professor of neurological surgery at the University of California San Francisco. – AFP Relaxnews



    Credit: The Star : Tech Feed
