Machines will read our minds

AI brain sensors will translate our thoughts into speech, text or even other languages

Yalda Mohsenzadeh is a professor of computer science at Western University.

(This illustration was created by Maclean's art director Anna Minzhulina using the generative AI image program Imagine. Minzhulina spent weeks feeding prompts into the program, inspired by the essay.)

The brain has always been a source of inspiration for artificial intelligence scientists: billions of neurons work together to enable us to think, see, hear and remember. Soon, AI will be able to do that too, by decoding the patterns of the mind.

Take, for example, the case of Ann Johnson, a Saskatchewan woman who had a brain-stem stroke at 30 years old, leaving her unable to speak. This year, as part of a clinical trial in California, she had more than 200 electrodes placed inside her head, in an area of the brain that produces speech. A port connected to a computer allowed an AI algorithm that uses a variety of deep-learning techniques to interpret her neural activity. From there, it produced speech: Ann was able to communicate clearly with her husband through an avatar that spoke as she was thinking. We knew the AI was correctly reading her thoughts because researchers tested its ability to replicate controlled information. They had a dataset of sentences that contained a vast range of sounds. They showed Ann these sentences and had her repeat them over and over in her mind, training the AI algorithm to recognize which brain signal corresponded to which sound.
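To make that training idea concrete, here is a toy sketch in Python of mapping recorded activity to sounds. Everything in it is an assumption for illustration: the data is synthetic, the channel and trial counts are invented, and a simple linear classifier stands in for the deep-learning models the trial actually used.

```python
# Toy sketch of the training step described above: pair windows of neural
# activity with the sound being attempted, then fit a classifier.
# Synthetic data stands in for real electrode recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

N_PHONEMES = 39          # the distinct sounds the system learns to recognize
N_FEATURES = 253         # e.g. one feature per electrode channel (hypothetical)
TRIALS_PER_PHONEME = 50  # repeated mental rehearsals of the training sentences

# Simulate training data: each phoneme produces a characteristic (noisy)
# pattern of activity across the channels.
prototypes = rng.normal(size=(N_PHONEMES, N_FEATURES))
X = np.vstack([
    prototypes[p] + 0.5 * rng.normal(size=(TRIALS_PER_PHONEME, N_FEATURES))
    for p in range(N_PHONEMES)
])
y = np.repeat(np.arange(N_PHONEMES), TRIALS_PER_PHONEME)

# Fit a simple linear decoder: brain-signal window in, phoneme out.
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# At test time, a new window of activity is decoded into the most likely sound.
new_window = prototypes[7] + 0.5 * rng.normal(size=N_FEATURES)
print("decoded phoneme id:", decoder.predict(new_window.reshape(1, -1))[0])
```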


After training the AI algorithm, the scientists tested it in real time. She thought it, and the avatar said it. Currently, this AI can process about 78 words a minute, and it's capable of more than a few simple words: it works with 39 distinct sounds that combine to form whatever words and sentences Ann wants.
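The step from sounds to sentences can be sketched the same way. Below, a tiny hypothetical lexicon turns a stream of decoded phonemes into words; real systems use large vocabularies and a language model to choose the most likely word, but the shape of the pipeline is similar.

```python
# Sketch of the real-time step: decoded phonemes stream in, and a small
# lexicon turns phoneme sequences into words for the avatar to speak.
# The lexicon and phoneme codes here are invented for illustration.
PHONEME_LEXICON = {
    ("HH", "AY"): "hi",
    ("Y", "EH", "S"): "yes",
    ("N", "OW"): "no",
}

def phonemes_to_words(stream):
    """Greedily match the incoming phoneme stream against the lexicon."""
    words, buffer = [], []
    for phoneme in stream:
        buffer.append(phoneme)
        word = PHONEME_LEXICON.get(tuple(buffer))
        if word:
            words.append(word)
            buffer = []
    return words

decoded_stream = ["Y", "EH", "S", "N", "OW"]
print(phonemes_to_words(decoded_stream))  # ['yes', 'no']
```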

Is this something we can roll out to all patients who can't speak? Not yet. While all people share some commonality in brain function and information processing, much brain activity is unique to each person, and it varies throughout the day. The other limitation of this work is that it must be done in a controlled environment, because the device is implanted directly into the patient's head.

In our lab, we show individuals videos or images while recording their brain activity, using wearable sensors on their scalps that are sensitive to tiny changes in electrical fields. We then use AI techniques to decode what video or image they saw. Essentially, we're asking: what are the dynamics of the brain processes that give rise to visual cognition? We've found we can successfully determine what the person was looking at, and thus identify the intricate neural dynamics that create our meaningful perception of the visual world. Yes, the data is noisier than what you get when you attach sensors directly to the brain. But as this technology develops, it brings us closer to understanding and translating what the brain is doing. That means people with severe paralysis, stroke damage or other conditions that affect their ability to talk may soon have a means to do so.
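A rough sketch of that kind of decoding test, again with invented numbers: simulate noisy scalp recordings for a handful of images, then check with cross-validation whether a classifier can tell which image was viewed. Above-chance accuracy is what "determining what the person was looking at" means in practice.

```python
# Minimal sketch of an image-decoding experiment under stated assumptions:
# scalp sensors are noisier than implanted electrodes, so we simulate weak
# class structure and ask whether a classifier still beats chance.
# Channel counts and noise level are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

N_IMAGES = 10      # distinct images shown to the participant
N_CHANNELS = 64    # typical scalp-sensor montage (assumption)
TRIALS = 40        # repeated presentations of each image

patterns = rng.normal(size=(N_IMAGES, N_CHANNELS))
X = np.vstack([
    patterns[i] + 2.0 * rng.normal(size=(TRIALS, N_CHANNELS))  # heavy noise
    for i in range(N_IMAGES)
])
y = np.repeat(np.arange(N_IMAGES), TRIALS)

# Above-chance accuracy (chance = 1/10 here) is the evidence that the
# recordings carry information about what the person was seeing.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / N_IMAGES:.2f})")
```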

What everyone wants to know, of course, is whether AI might be able to read our minds: could we control our computers with just a thought? I do not believe this is science fiction. It's not something that will only happen in 100 years; it could very likely happen in the next decade. But first, we need two key developments: better sensors to capture signals from the brain, and improved AI techniques that can read brain signals and decode the information they carry.

Once we have those, the applications will not only be medical, but commercial as well. For example, right now, if we want to Google something, we have to type it into our mobile phone or our laptop, or ask an AI assistant to find it for us. It would be amazing if you could think of a question and then, with a wireless device, transmit that question to the cloud, where AI would search for the answer and send it right back to your brain.


The field of AI and deep learning is evolving fast. New algorithms, methods and techniques are appearing all the time. One day, we might be able to translate automatically and respond to someone in their own language, or control a vehicle with just our thoughts. Of course, all of this is still theoretical. It will require the blending of sensors and the AI algorithms that already do language translation or drive autonomous cars. But it shows the exciting horizons this technology could bring.

There are challenges to consider with this type of research. For example, reading brain impulses could also help companies develop targeted advertising. And what would the companies that read our minds do with that information? We'd need to ensure privacy, data security and consent. It's similar to the ethical considerations we have with social media today. We don't want the wrong people reading our minds.

We reached out to Canada's top AI thinkers in fields like ethics, health and computer science and asked them to predict where AI will take us in the coming years, for better or worse. The results may sound like science fiction, but they're coming at you sooner than you think. To stay ahead of it all, read the other essays that make up our AI cover story, which was published in the November 2023 issue of Maclean's.
