AI is getting better at reading minds

Think of the words running through your head: the tacky joke you wisely kept to yourself at dinner; your unvoiced impression of your best friend’s new partner. Now imagine that someone could overhear you.

On Monday, scientists at the University of Texas at Austin took another step in that direction. In a study published in the journal Nature Neuroscience, researchers described an AI that could translate the private thoughts of human subjects by analyzing fMRI scans, which measure the flow of blood to different regions of the brain.

Researchers have already developed language-decoding methods to reconstruct the attempted speech of people who have lost the ability to talk, and to allow paralyzed people to write just by thinking about writing. But the new decoder is one of the first that does not rely on implants. In the study, it was able to turn a person’s imagined speech into actual speech and, when subjects were shown silent films, it could generate relatively accurate descriptions of what was happening on screen.

“It’s not just a language stimulus,” said Alexander Huth, a neuroscientist at the university who helped lead the research. “We’re getting at meaning, something about the idea of what is happening. And the fact that that’s possible is very exciting.”

The study centered on three participants, who came to Dr. Huth’s lab to spend 16 hours listening to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded blood oxygen levels in parts of their brains. The researchers then used a large language model to match patterns in the brain activity to the words and phrases the participants had heard.

Large language models such as OpenAI’s GPT-4 and Google’s Bard are trained on vast amounts of writing to predict the next word in a sentence or phrase. In the process, the models build maps that show how words relate to one another. A few years ago, Dr. Huth noticed that particular pieces of these maps, so-called context embeddings, which capture the semantic features, or meanings, of phrases, could be used to predict how the brain lights up in response to language.
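
To make the idea concrete, here is a minimal sketch of such an “encoding model”: a regularized linear regression that maps contextual embeddings of the words a listener heard onto the measured response at each voxel. The arrays below are random stand-ins, and ridge regression is just one reasonable choice; this illustrates the general approach, not the study’s actual pipeline.

```python
# Sketch of an "encoding model": contextual embeddings of heard words are
# regressed against the fMRI response at each voxel. All data here are
# random stand-ins for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 500   # fMRI volumes recorded while listening
n_embed_dims = 768   # dimensionality of the contextual embeddings
n_voxels = 2000      # voxels in the brain regions being modeled

# In a real analysis, `embeddings` would come from a language model applied
# to the podcast transcript, and `bold` from the scanner.
embeddings = rng.standard_normal((n_timepoints, n_embed_dims))
bold = rng.standard_normal((n_timepoints, n_voxels))

# Fit one regularized linear map from embedding space to voxel space.
encoding_model = Ridge(alpha=10.0)
encoding_model.fit(embeddings, bold)

# Given embeddings for new language, the model predicts how the brain
# should "light up" -- the forward direction described above.
new_embeddings = rng.standard_normal((10, n_embed_dims))
predicted_bold = encoding_model.predict(new_embeddings)
print(predicted_bold.shape)  # (10, 2000)
```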

Basically, said Shinji Nishimoto, a neuroscientist at Osaka University who was not involved in the research, “brain activity is a kind of coded signal, and language models provide ways to decode it.”

In their study, Dr. Huth and his colleagues effectively reversed the process, using another AI to translate the participants’ fMRI images into words and phrases. The researchers tested the decoder by having the participants listen to new recordings and then seeing how closely the translation matched the actual transcript.
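
As a rough illustration of how such a reversal can work, the sketch below proposes candidate word sequences, uses an encoding model to predict the brain activity each candidate would evoke, and keeps the candidate whose prediction best matches the measured scan. The helpers embed and propose_next_words are hypothetical stand-ins, and this greedy version is only meant to convey the idea, not to reproduce the authors’ actual decoder.

```python
# Illustrative decoding step: score candidate continuations by how well the
# encoding model's predicted activity matches the measured fMRI signal.
# `embed` and `propose_next_words` are hypothetical stand-ins.
import numpy as np

def embed(text: str, dim: int = 768) -> np.ndarray:
    """Hypothetical stand-in for a language-model contextual embedding."""
    rng = np.random.default_rng(sum(map(ord, text)))
    return rng.standard_normal(dim)

def propose_next_words(prefix: str) -> list[str]:
    """Hypothetical stand-in for a language model proposing continuations."""
    return ["window", "door", "darkness", "mattress"]

def decode_step(prefix: str, measured_bold: np.ndarray, weights: np.ndarray) -> str:
    """Extend `prefix` with the word whose predicted activity best fits the scan."""
    best_candidate, best_score = prefix, -np.inf
    for word in propose_next_words(prefix):
        candidate = f"{prefix} {word}".strip()
        predicted = embed(candidate) @ weights        # encoding model: text -> voxels
        score = np.corrcoef(predicted, measured_bold)[0, 1]
        if score > best_score:
            best_candidate, best_score = candidate, score
    return best_candidate

# Toy usage with random stand-ins for the fitted weights and the scan.
rng = np.random.default_rng(1)
weights = rng.standard_normal((768, 2000))            # embedding dims -> voxels
measured_bold = rng.standard_normal(2000)
print(decode_step("I pressed my face against the", measured_bold, weights))
```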

Almost every word in the decoded script was out of place, but the meaning of the passage was regularly preserved. In essence, the decoder was paraphrasing.

Original transcript: “I got up from the air mattress and pressed my face against the glass of the bedroom window, expecting to see eyes staring at me, only to find darkness instead.”

Decoded from brain activity: “I just kept walking to the window and opened the glass. I stood on my tiptoes and peered out. I saw nothing and looked up again, I saw nothing.”

While being scanned, participants were also asked to silently imagine telling a story; afterward, they repeated the story aloud, for reference. Here, too, the decoding model captured the gist of the unspoken version.

Participant’s version: “Listen for a message from my wife saying she’s changed her mind and will be back.”

Decoded version: “To see her for some reason I thought she would come up to me and say she misses me.”

Finally, subjects watched a short, silent animated film, again while undergoing an fMRI scan. By analyzing their brain activity, the language model was able to decode a rough summary of what they were watching, perhaps their internal description of it.

The result suggests that the AI decoder was capturing not just words but also meaning. “Language perception is an externally driven process, while imagination is an active internal process,” Dr. Nishimoto said. “And the authors showed that the brain uses common representations across these processes.”

Greta Tuckute, a neuroscientist at the Massachusetts Institute of Technology who was not involved in the research, said that is “the high-level question.”

“Can we decode meaning from the brain?” she continued. “In a way, they show that we can do it.”

The speech-decoding method had limitations, Dr. Huth and his colleagues noted. For one, fMRI scanners are bulky and expensive. Moreover, training the model is a long, tedious process, and to be effective it must be done on individuals. When the researchers tried to use a decoder trained on one person to read another person’s brain activity, it failed, suggesting that every brain has unique ways of representing meaning.

Participants were also able to shield their internal monologues, throwing off the decoder by thinking of other things. AI may be able to read our minds, but for now it will have to read them one at a time, and with our permission.
