Jack Gallant never set out to create a mind-reading machine. His focus was more prosaic. A computational neuroscientist at the University of California, Berkeley, Dr. Gallant worked for years to improve our understanding of how brains encode information — what regions become active, for example, when a person sees a plane or an apple or a dog — and how that activity represents the object being viewed.
By the late 2000s, scientists could determine what kind of thing a person might be looking at from the way the brain lit up — a human face, say, or a cat. But Dr. Gallant and his colleagues went further. They figured out how to use machine learning to decipher not just the class of thing, but which exact image a subject was viewing. (Which photo of a cat, out of three options, for instance.)
One day, Dr. Gallant and his postdocs got to talking. In the same way that you can turn a speaker into a microphone by hooking it up backward, they wondered if they could reverse engineer the algorithm they’d developed so they could visualize, solely from brain activity, what a person was seeing.
The first phase of the project was to train the AI. For hours, Dr. Gallant and his colleagues showed movie clips to volunteers lying in fMRI machines. By matching patterns of brain activation to the moving images that prompted them, the AI built a model of how the volunteers’ visual cortex, which parses information from the eyes, worked. Then came the next phase: translation. As they showed the volunteers new movie clips, they asked the model what, given everything it now knew about their brains, it thought they might be looking at.
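For the technically curious, here is a minimal sketch of that two-phase logic. It is not the lab’s actual software, and every number in it is invented; it only illustrates the idea of fitting an “encoding model” from stimuli to brain activity, then inverting it to ask which of a few candidate stimuli best explains a new recording.

```python
# Minimal sketch (not Dr. Gallant's actual code): fit an "encoding model"
# mapping stimulus features to brain activity, then decode by asking which
# candidate stimulus best explains the observed activity.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features, n_train = 200, 50, 400

# Synthetic stand-in for the training phase: movie-clip features and the
# fMRI responses they evoked in one viewer's visual cortex.
train_features = rng.normal(size=(n_train, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
train_activity = (train_features @ true_weights
                  + 0.5 * rng.normal(size=(n_train, n_voxels)))

# Fit one linear encoding model per voxel (ridge regression, closed form).
lam = 1.0
W = np.linalg.solve(
    train_features.T @ train_features + lam * np.eye(n_features),
    train_features.T @ train_activity,
)

# Translation phase: given new brain activity, score a handful of candidate
# clips by how well the activity each would predict matches what was recorded.
candidates = rng.normal(size=(3, n_features))   # e.g. three candidate clips
observed = candidates[1] @ true_weights + 0.5 * rng.normal(size=n_voxels)

errors = [np.sum((c @ W - observed) ** 2) for c in candidates]
print("model's guess:", int(np.argmin(errors)))  # ideally prints 1
```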
The experiment focused just on a subsection of the visual cortex. It didn’t capture what was happening elsewhere in the brain — how a person might feel about what she was seeing, for example, or what she might be fantasizing about as she watched. The endeavor was, in Dr. Gallant’s words, a primitive proof-of-concept.
And yet the results, published in 2011, are remarkable.
The reconstructed images move with a dreamlike fluidity. In their imperfection, they evoke expressionist art. (And a few reconstructed images seem downright wrong.) But where they succeed, they represent an astonishing achievement: a machine translating patterns of brain activity into a moving image understandable by other people — a machine that can read the brain.
Dr. Gallant was thrilled. Imagine the possibilities once better brain-reading technology became available. Imagine the people suffering from locked-in syndrome or Lou Gehrig’s disease, or those incapacitated by strokes, who could benefit from a machine that helped them interact with the world.
He was also scared because the experiment showed, in a concrete way, that humanity was at the dawn of a new era, one in which our thoughts could theoretically be snatched from our heads. What was going to happen, Dr. Gallant wondered, when you could read thoughts the thinker might not even be consciously aware of, when you could see people’s memories?
“That’s a real sobering thought that now you have to take seriously,” he told me recently. (...)
Dear Brain
Not many people will volunteer to be the first to undergo a novel kind of brain surgery, even if it holds the promise of restoring mobility to those who’ve been paralyzed. So when Robert Kirsch, the chairman of biomedical engineering at Case Western Reserve University, put out such a call nearly 10 years ago and one person both met the criteria and was willing, he knew he had a pioneer on his hands.
The man’s name was Bill Kochevar. He’d been paralyzed from the neck down in a biking accident years earlier. His motto, as he later explained it, was “somebody has to do the research.”
At that point, scientists had already invented gizmos that helped paralyzed patients leverage what mobility remained — lips, an eyelid — to control computers or move robotic arms. But Dr. Kirsch was after something different. He wanted to help Mr. Kochevar move his own limbs.
The first step was implanting two arrays of sensors over the part of the brain that would normally control Mr. Kochevar’s right arm. Electrodes that could receive signals from those arrays via a computer were implanted into his arm muscles. The implants, and the computer connected to them, would function as a kind of electronic spinal cord, bypassing his injury.
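A hedged sketch of that bypass loop follows. It is not the BrainGate or Case Western code; the channel counts, movement classes, and decoder are placeholders. It only illustrates the idea the paragraph describes: decode an intended movement from cortical signals, then translate it into stimulation for the arm muscles.

```python
# Minimal sketch of the "electronic spinal cord" idea (not the actual
# system): decode an intended movement from cortical firing rates, then
# translate it into stimulation levels for arm-muscle electrodes.
import numpy as np

rng = np.random.default_rng(1)

n_channels = 96    # recording electrodes in one array (assumed)
n_movements = 4    # e.g. reach up/down, open/close hand (assumed)
n_muscles = 6      # stimulated arm muscles (assumed)

# Placeholder decoder; in practice it would be fit to recorded neural data
# while the user imagines each movement.
decoder = rng.normal(size=(n_movements, n_channels))

# Fixed mapping from each intended movement to per-muscle stimulation levels.
stim_patterns = np.abs(rng.normal(size=(n_movements, n_muscles)))

def bypass_step(firing_rates: np.ndarray) -> np.ndarray:
    """One control-loop tick: brain signals in, muscle stimulation out."""
    scores = decoder @ firing_rates      # decode movement intent
    intent = int(np.argmax(scores))      # pick the most likely movement
    return stim_patterns[intent]         # stimulation sent to the electrodes

# Simulated firing rates from the implanted arrays at one moment in time.
print(bypass_step(rng.poisson(lam=5.0, size=n_channels).astype(float)))
```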
Once his arm muscles had been strengthened — achieved with a regimen of mild electrical stimulation while he slept — Mr. Kochevar, who at that point had been paralyzed for over a decade, was able to feed himself and drink water. He could even scratch his nose.
About two dozen people around the world who have lost the use of their limbs to accidents or neurological disease have had sensors implanted on their brains. Many, Mr. Kochevar included, participated in a United States government-funded program called BrainGate. The sensor arrays used in this research, smaller than a button, allow patients to move robotic arms or cursors on a screen just by thinking. But as far as Dr. Kirsch knows, Mr. Kochevar, who died in 2017 for reasons unrelated to the research, was the first paralyzed person to regain use of his limbs by way of this technology.
This fall, Dr. Kirsch and his colleagues will begin version 2.0 of the experiment. This time, they’ll implant six smaller arrays — more sensors will improve the quality of the signal. And instead of implanting electrodes directly in the volunteers’ muscles, they’ll insert them upstream, circling the nerves that move the muscles. In theory, Dr. Kirsch says, that will enable movement of the entire arm and hand. (...)
Zap That Urge
Not all the applications of brain-reading require something as complex as understanding speech, however. In some cases, scientists simply want to blunt urges.
When Casey Halpern, a neurosurgeon at Stanford, was in college, he had a friend who drank too much. Another was overweight but couldn’t stop eating. “Impulse control is such a pervasive problem,” he told me.
As a budding scientist, he learned about methods of deep brain stimulation used to treat Parkinson’s disease. A mild electric current applied to a part of the brain involved in movement could lessen tremors caused by the disease. Could he apply that technology to the problem of inadequate self-control? (...)
Dr. Halpern’s approach takes as fact something that he says many people have a hard time accepting: that the lack of impulse control that may underlie addictive behavior isn’t a choice, but results from a malfunction of the brain. “We have to accept that it’s a disease,” he says. “We often just judge people and assume it’s their own fault. That’s not what the current research is suggesting we should do.”
I must confess that of the numerous proposed applications of brain-machine interfacing I came across, Dr. Halpern’s was my favorite to extrapolate on. How many lives have been derailed by the inability to resist the temptation of that next pill or that next beer? What if Dr. Halpern’s solution were generalizable?
by Moises Velasquez-Manoff, NY Times | Read more:
Image: Derrick Schultz