Scientists use brain scans and AI to “decode” thoughts.

The fMRI scans allowed the scientists to figure out how meanings and expressions elicited responses in different regions of the brain.

Scientists said Monday they’ve found a way to use brain scans and artificial intelligence models to transcribe “the gist” of what people are thinking in a move toward mind-reading.

While the voice decoder’s main goal is to help people who have lost the ability to communicate, the US scientists acknowledged that the technology raises questions about “mental privacy”.

To allay such fears, they ran tests that showed their decoder could not be used on someone who had not allowed themselves to be trained on their brain activity for long hours in a functional magnetic resonance imaging (fMRI) scanner.

Previous research has shown that a brain implant can enable people who can no longer speak or type to spell words or even sentences.

These “brain-computer interfaces” focus on the part of the brain that controls the mouth when trying to form words.

Alexander Huth, a neuroscientist at the University of Texas at Austin and co-author of a new study, said his team’s speech decoder “works on a whole different level”.

“Our system really works at the level of ideas, of semantics, of meaning,” Huth said at an online press conference.

According to the study, published in the journal Nature Neuroscience, it is the first system capable of reconstructing continuous speech without an invasive brain implant.

“Deeper than Language”

For the study, three people spent a total of 16 hours in an fMRI machine listening to spoken stories, mostly podcasts like The New York Times’ Modern Love.

This allowed the researchers to figure out how words, phrases and meanings evoke responses in the regions of the brain known to process language.

They fed this data into a neural network language model using GPT-1, the predecessor of the AI technology later used in the hugely popular ChatGPT.

The model was trained to predict how each person’s brain would respond to the speech they heard, and then narrowed down candidate word sequences until it found the closest match.
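That predict-and-narrow loop resembles a beam search: a language model proposes possible continuations, an encoding model predicts the brain response each would produce, and only the candidates that best match the observed fMRI data are kept. The sketch below illustrates the idea only; all function names and the toy scoring are hypothetical stand-ins, not the study's actual code.

```python
# Illustrative sketch of the decoding loop described in the article.
# Every function here is a hypothetical stand-in: a real system would use
# a GPT-style language model and a trained fMRI encoding model.

def propose_continuations(candidate):
    """Stand-in for a language model proposing next words."""
    return [candidate + [w] for w in ("the", "car", "drove")]

def predicted_response(candidate):
    """Stand-in for an encoding model mapping words to brain features."""
    return [float(len(w)) for w in candidate]

def match_score(predicted, observed):
    """Higher is better: negative squared error against observed data."""
    return -sum((p - o) ** 2 for p, o in zip(predicted, observed))

def decode_step(candidates, observed, beam_width=2):
    """One beam-search step: expand all candidates, keep the best matches."""
    expanded = [c for cand in candidates for c in propose_continuations(cand)]
    expanded.sort(key=lambda c: match_score(predicted_response(c), observed),
                  reverse=True)
    return expanded[:beam_width]

# One decoding step against a toy "observed" response.
beam = decode_step([["she"]], observed=[3.0, 3.0])
print(beam)
```

Repeating this step over successive fMRI time windows would yield a running paraphrase of the stimulus, which is consistent with the decoder recovering the gist rather than exact words.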

To test the accuracy of the model, each participant then listened to a new story while in the fMRI machine.

The study’s first author, Jerry Tang, said the decoder could “recover the essence of what the user heard.”

For example, when the participant heard the phrase “I don’t have a driver’s license yet,” the model responded with “she hasn’t even started learning to drive.”

The researchers admitted that the decoder struggled with personal pronouns such as “I” or “she”.

But even when participants made up their own stories — or watched silent films — the decoder was still able to capture the “essentials,” they said.

This shows that “we are decoding something that is deeper than language and then turning it into language,” Huth said.

Because fMRI scanning is too slow to capture single words, it collects a “hodgepodge, an accumulation of information over a few seconds,” Huth said.

“That way we can see how the idea is developing, even if the exact words are lost.”

Ethical Warning

David Rodriguez-Arias Vailhen, a bioethics professor at Spain’s University of Granada who was not involved in the research, said it goes beyond what has been achieved by previous brain-computer interfaces.

This brings us closer to a future where machines can “read minds and transcribe thoughts,” he said, warning that this could potentially happen against people’s will, such as when they’re sleeping.

The researchers anticipated such concerns.

They ran tests that showed the decoder wouldn’t work on a person unless it had already been trained for their own specific brain activity.

The three participants were also able to defeat the decoder without any problems.

While listening to one of the podcasts, users were asked to count by sevens, name animals, or think of another story in their mind. All of these tactics “sabotaged” the decoder, the researchers said.

Next, the team hopes to speed up the process so they can decode the brain scans in real time.

They also called for regulations to protect intellectual privacy.

“Up until now, our mind has been the guardian of our privacy,” said bioethicist Rodriguez-Arias Vailhen.

“This discovery could be a first step towards endangering this freedom in the future.”

More information:
Semantic reconstruction of continuous speech from non-invasive brain recordings, Nature Neuroscience (2023). DOI: 10.1038/s41593-023-01304-9

© 2023 AFP

Citation: Scientists use brain scans and AI to ‘decode’ thoughts (2023, May 1), retrieved May 1, 2023 from https://techxplore.com/news/2023-05-scientists-brain-scans-ai-decode.html

This document is protected by copyright. Except for fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is for informational purposes only.
