Decoding AI’s mind-reading ability: When tech translates brain activity into words


Think of the words whirling around in your head: that tasteless joke you wisely kept to yourself at dinner; your unvoiced impression of your best friend’s new partner. Now imagine that someone could listen in.

On Monday, scientists from the University of Texas at Austin took another step in that direction. In a study published in the journal Nature Neuroscience, the researchers described an artificial intelligence that could translate the private thoughts of human subjects by analyzing fMRI scans, which measure the flow of blood to different regions of the brain.

Already, researchers have developed language-decoding methods to pick up the attempted speech of people who have lost the ability to speak, and to allow paralyzed people to write just by thinking about writing. But the new language decoder is one of the first that does not rely on implants. In the study, it was able to turn a person’s imagined speech into actual speech and, when subjects were shown silent films, it could generate relatively accurate descriptions of what was happening on screen.

“This isn’t just a language stimulus,” said Alexander Huth, a neuroscientist at the university who helped lead the research. “We’re getting at meaning, something about the idea of what’s happening. And the fact that that’s possible is very exciting.”

The study centered on three participants, who came to Huth’s lab for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded the blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns in the brain activity to the words and phrases that the participants had heard.

Large language models like OpenAI’s GPT-4 and Google’s Bard are trained on vast amounts of writing to predict the next word in a sentence or phrase. In the process, the models create maps indicating how words relate to one another. A few years ago, Huth noticed that particular pieces of these maps – so-called context embeddings, which capture the semantic features, or meanings, of phrases – could be used to predict how the brain lights up in response to language.
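To make that concrete, here is a minimal, hypothetical sketch of the kind of “encoding model” the passage describes: a ridge regression that maps contextual embeddings of the words a participant heard to the fMRI responses they evoke. The array shapes, the random stand-in data and the use of scikit-learn are illustrative assumptions, not the researchers’ actual code.

```python
# Sketch of an encoding model: semantic features of heard words -> voxel responses.
# All data here is random stand-in data; shapes and library choices are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_trs, n_features, n_voxels = 1000, 768, 5000           # fMRI time points, embedding dims, voxels
embeddings = rng.standard_normal((n_trs, n_features))   # contextual embeddings of the stimulus
bold = rng.standard_normal((n_trs, n_voxels))           # recorded blood-oxygen responses (stand-in)

# Fit one linear mapping from semantic features to every voxel's response.
encoding_model = Ridge(alpha=100.0)
encoding_model.fit(embeddings[:800], bold[:800])

# Predict brain activity for held-out story segments; correlating the predictions
# with the real responses is a standard way to check how well such a model works.
predicted = encoding_model.predict(embeddings[800:])
example_corrs = [np.corrcoef(predicted[:, v], bold[800:, v])[0, 1] for v in range(3)]
print("example voxel correlations:", np.round(example_corrs, 3))
```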

In a basic sense, said Shinji Nishimoto, a neuroscientist at Osaka University who was not involved in the research, “brain activity is a kind of encrypted signal, and language models provide ways to decipher it.” In their study, Huth and his colleagues effectively reversed the process, using another AI to translate the participants’ fMRI images into words and phrases. The researchers tested the decoder by having the participants listen to new recordings, then seeing how closely the translation matched the actual transcript.
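One way to run that mapping “in reverse”, and roughly the idea behind such decoders, is guess-and-check rather than direct inversion. The toy sketch below, with a made-up embed() helper, an invented candidate list and random stand-in data, illustrates the idea: predict the brain activity each candidate phrase should evoke, then keep the phrase whose prediction best matches the recorded scan.

```python
# Toy decoder: score candidate phrases by how well the brain activity they are
# predicted to evoke matches a recorded scan. Everything here (embed(), the
# candidates, the random data) is a hypothetical stand-in for illustration.
import numpy as np
from sklearn.linear_model import Ridge

N_FEATURES, N_VOXELS = 64, 200

def embed(phrase: str) -> np.ndarray:
    """Stand-in for a language model's contextual embedding of a phrase."""
    rng = np.random.default_rng(sum(ord(c) for c in phrase))
    return rng.standard_normal(N_FEATURES)

# Pretend this encoding model was already fit on training stories, as in the
# sketch above; a tiny random fit keeps the example self-contained and runnable.
rng = np.random.default_rng(1)
encoding_model = Ridge(alpha=10.0).fit(
    rng.standard_normal((500, N_FEATURES)), rng.standard_normal((500, N_VOXELS))
)

def score(candidate: str, observed_bold: np.ndarray) -> float:
    """Correlation between the activity predicted for a phrase and the recording."""
    predicted = encoding_model.predict(embed(candidate)[None, :])[0]
    return float(np.corrcoef(predicted, observed_bold)[0, 1])

observed_bold = rng.standard_normal(N_VOXELS)  # stand-in for one recorded response pattern
candidates = [
    "I pressed my face against the glass of the window",
    "the dog ran across the yard",
    "she said she had changed her mind and was coming back",
]
best_guess = max(candidates, key=lambda c: score(c, observed_bold))
print("best-matching candidate:", best_guess)
```

A real decoder would draw its candidates from a language model’s word-by-word predictions rather than a fixed list, which is part of why the output reads like a paraphrase rather than a transcript.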

Almost every word was out of place in the decoded script, but the meaning of the passage was often preserved. Essentially, the decoders were paraphrasing.

A still image from video provided by Jerry Tang and Alexander Huth. Scientists recorded MRI data from three participants as they listened to 16 hours of narrative stories to train the model to map between brain activity and semantic features that captured the meanings of certain phrases and the associated brain response. (Jerry Tang and Alexander Huth via The New York Times)

Original transcript: “I got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes staring back at me but instead only finding darkness.”

Decoded from brain activity: “I just continued to walk up to the window and open the glass I stood on my toes and peered out I didn’t see anything and looked up again I saw nothing.”

While in the scanner, the participants were also asked to silently imagine telling a story; afterward, they repeated the story aloud for reference. Here, too, the decoding model captured the gist of the unspoken version.

Participant’s version: “Look for a message from my wife saying that she had changed her mind and that she was coming back.”

Decoded version: “To see her for some reason I thought she would come to me and say she misses me.”

Finally, the subjects watched a brief, silent animated movie, again while undergoing an fMRI scan. By analyzing their brain activity, the language model could decode a rough synopsis of what they were viewing – perhaps their internal description of it.

The result suggests that the AI decoder was capturing not just words but also meaning. “Language perception is an externally driven process, while imagination is an active internal process,” Nishimoto said. “And the authors showed that the brain uses common representations across these processes.”

Greta Tuckute, a neuroscientist at the Massachusetts Institute of Technology who was not involved in the research, said that was “the high-level question.”

“Can we decode meaning from the brain?” she continued. “In some ways they show that, yes, we can.”

The language-decoding method had limitations, Huth and his colleagues noted. For one, fMRI scanners are bulky and expensive. Moreover, training the model is a long, tedious process, and to be effective it must be done separately for each individual. When the researchers tried to use a decoder trained on one person to read the brain activity of another, it failed, suggesting that every brain has unique ways of representing meaning.

Participants were also able to shield their internal monologues, throwing off the decoder by thinking of other things. AI might be able to read our minds, but for now it will have to read them one at a time, and with our permission.
