‘Science fiction into science fact’: Decoder can turn your thoughts … – KUT

A new technology developed by University of Texas at Austin scientists can turn a person's thoughts into readable text.

It's called a semantic decoder, and it uses artificial intelligence to interpret brain activity. The decoder could allow communication with someone who is mentally conscious but unable to speak, like someone who's had a stroke.

Alex Huth is an assistant professor of neuroscience and computer science at UT Austin. He worked with doctoral student Jerry Tang to develop this decoder. Huth joined Texas Standard to talk about their research, the technology and the legal and ethical concerns surrounding it. Listen to the story above or read the transcript below.

This transcript has been edited lightly for clarity:

Texas Standard: Give us an overview of this brain activity decoder you've created. What is it and how does it work exactly? This sounds almost like one of those science fiction devices.

Alex Huth: Yeah, every day we're moving a little bit of science fiction into science fact. So the way it works is we put a person in an MRI scanner, kind of like the one where you would get a medical MRI. But we're doing functional MRI, so we can record what's happening in the brain. We build up a mapping of where different ideas, different thoughts are represented in the brain over many hours of training. And then we can use our algorithms to read those ideas out into words. So we can decode words that somebody is hearing if they're listening to a podcast, for example, or even just thinking, if they're telling a story inside their head.
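To make that idea concrete, here is a minimal, purely illustrative Python sketch of decoding by candidate scoring: a language model proposes possible word sequences, and a per-subject encoding model, fit from hours of training scans, predicts the brain response each candidate should produce; the candidates whose predictions best match the measured activity are kept. The function names, the toy features and the random data below are assumptions for illustration, not the team's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions for illustration): a tiny vocabulary, random
# "semantic features" per word, and a random linear encoding model mapping
# features to a 100-voxel fMRI pattern. In the real system the encoding model
# is fit to many hours of one person's scans, and a neural language model
# proposes the candidate words.
VOCAB = ["the", "dog", "ran", "home", "quickly", "and", "stopped"]
WORD_FEATURES = {w: rng.standard_normal(10) for w in VOCAB}
VOXEL_WEIGHTS = rng.standard_normal((100, 10))

def predict_response(words):
    """Encoding model: predicted fMRI pattern for a candidate word sequence."""
    features = np.mean([WORD_FEATURES[w] for w in words], axis=0)
    return VOXEL_WEIGHTS @ features

def decode_step(beam, observed, beam_width=3):
    """Extend each candidate sequence by one word and keep the sequences whose
    predicted brain responses best match the observed scan."""
    candidates = [prefix + [w] for prefix in beam for w in VOCAB]
    scores = [-np.linalg.norm(predict_response(c) - observed) for c in candidates]
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
    return [c for _, c in ranked[:beam_width]]

# Toy usage: "decode" a few words against one simulated observed response.
observed = rng.standard_normal(100)
beam = [["the"]]
for _ in range(4):
    beam = decode_step(beam, observed)
print(" ".join(beam[0]))
```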

So does the person, the subject, do they have to think particularly hard about it for it to register? How does it work?

So it seems like they have to be very intentionally thinking something. So we were concerned about kind of the mental privacy implications…

I was going to say, you've seen the movie "Minority Report," right? I would think that there might be something like that.

So we're worried about this. So we did some tests to see if the person, if they weren't thinking thoughts very deliberately or if they were trying to think something else, could that upset the decoder? Could that make it not work? And it turns out it can. So the person has to be very deliberately, actively sort of thinking a sentence in their head, and then we can read that out. And of course, this only works on a person where we've trained this model with many hours of data.

You mentioned listening to podcasts, for example. This was part of your experimentation?

Oh, yes.

So this technology uses the kind of artificial intelligence language model that people may know from ChatGPT. How different is it from what has been achieved in the past?

So there's a couple of strands of work in this direction in the past. One, the most successful so far, has been using electrodes that are actually implanted in the brain. So this is after neurosurgery to implant electrodes, which is only done for people who are undergoing epilepsy surgery or have another medical issue. That's been very successful, but that gets at a kind of different level of language. So that looks at how a person is trying to move their mouth to speak, for example. Whereas our technology works at this much higher level. It's about, like, what are the ideas the person is thinking about or hearing about? And we can actually read that out, which means that we can do it non-invasively. So we don't need a person to have brain surgery. They can do it with an MRI, which anybody can do.

How nuanced is it?

It's okay. So it's not perfect by any stretch. It doesn't get the exact words that a person is, you know, hearing or trying to say. It gets the gist. So it gets kind of what are the main ideas. Sometimes it gets phrases correct, you know, specific sequences of words, but it's still kind of at the level of gist. This is something that we're working on improving.

So you talked about how this could help people whose brains are active but can't speak. Do you see any other applications for it?

Yeah. So potentially this could become something that consumers could use at home, certainly not with an MRI scanner, but with other neuroimaging technologies.

I imagine its only a matter of time.

Hopefully. It's kind of hard to say, you know, what kinds of technologies will be available for looking at brain signals in the future, but we're kind of hopeful that that's going to work. And if that's something that people could use at home, then they could potentially use it to interface with their computer. They could use it to search for things, to control their computer, without actively, you know, typing things in. We don't really know what the killer app for that would be yet, but I think there's a lot of possibilities.

You know, it's hard not to think about this and think about those who might use it for more nefarious purposes. We've heard the expression "thought police."

Absolutely.

What sort of ethical or legal safeguards are going to have to be built in now that we are, from the sound of it, at that threshold?

This is something we've thought about a lot in this work. So we certainly think we're at the point where considering codified legal safeguards is important, is something that we should be actively worrying about, that people should not have this done without their consent. We think this should not be used for law enforcement purposes because it's certainly not accurate enough for that. That hasn't stopped law enforcement in the past, for example, with the polygraph. We don't want to have that kind of situation repeated. We did some testing on kind of these privacy concerns to see how much control does the person have, and that's where we found, again, that if the person is trying to resist it, trying to make it not work, then they can do that. They can shut it off. And also, we had this lengthy training procedure where the person has to lie in the scanner for many, many hours and listen to podcasts. We tried to see, can we avoid that? Can we take the model that we trained on one person's brain and apply it to a new person so we wouldn't need to do this training on them? And that doesn't work at all, currently. So right now I think the concerns are not very acute, but it's on the horizon, which is why I think we should think about it.

What's the next frontier as you see it?

The biggest kind of avenue forward that we see with this is kind of bigger and better models, like bigger and better A.I. models. As these models get more effective, as we move from the kinds of things that we used in this work, which was GPT-1 from 2019, now there's much more effective models out there, and we think that those things will very much improve the system, make it more accurate, which we're excited about. But a lot of what we're doing is really, you know, what we try to study in my lab is how our brains process language, how we understand language. That's our scientific goal. So a lot of what we're doing is taking these same kinds of models and using them to try to figure out what the brain is doing, to try to say what are the different parts of the brain responding to, how is language and meaning…

More broadly, beyond the technology, in other words.

Right. So what are the scientific implications? So really most of our work is in that direction.
