Jacking into the Brain--Is the Brain the Ultimate Computer Interface?
How far can science advance brain-machine interface technology? Will we one day pipe the latest blog entry or NASCAR highlights directly into the human brain as if the organ were an outsize flash drive?
By Gary Stix
The cyberpunk science fiction that emerged in the 1980s routinely paraded “neural implants” for hooking a computing device directly to the brain: “I had hundreds of megabytes stashed in my head,” proclaimed the protagonist of “Johnny Mnemonic,” a William Gibson story that later became a wholly forgettable movie starring Keanu Reeves.
The genius of the then emergent genre (back in the days when a megabyte could still wow) was its juxtaposition of low-life retro culture with technology that seemed only barely beyond the capabilities of the deftest biomedical engineer. Although the implants could not have been replicated at the Massachusetts Institute of Technology or the California Institute of Technology, the best cyberpunk authors gave the impression that these inventions might yet materialize one day, perhaps even in the reader’s own lifetime.
In the past 10 years, however, more realistic approximations of technologies originally evoked in the cyberpunk literature have made their appearance. A person with electrodes implanted inside his brain has used neural signals alone to control a prosthetic arm, a prelude to allowing a human to bypass limbs immobilized by amyotrophic lateral sclerosis or stroke. Researchers are also investigating how to send electrical messages in the other direction, providing feedback that enables a primate to sense what a robotic arm is touching.
But how far can we go in fashioning replacement parts for the brain and the rest of the nervous system? Besides controlling a computer cursor or robot arm, will the technology somehow actually enable the brain’s roughly 100 billion neurons to function as a clandestine repository for pilfered industrial espionage data or another plot element borrowed from Gibson?
Will Human Become Machine?
Today’s Hollywood scriptwriters and futurists, less skilled heirs of the original cyberpunk tradition, have embraced these neurotechnologies. The Singularity Is Near, scheduled for release next year, is a film based on the ideas of computer scientist Ray Kurzweil, who has posited that humans will eventually achieve a form of immortality by transferring a digital blueprint of their brain into a computer or robot.
Yet the dream of eternity as a Max Headroom–like avatar trapped inside a television set (or as a copy-and-paste job into the latest humanoid bot) remains only slightly less distant than when René Descartes ruminated on mind-body dualism in the 17th century. The wholesale transfer of self—a machine-based facsimile of the perception of the ruddy hues of a sunrise, the constantly shifting internal emotional palette and the rest of the mix that combines to evoke the uniquely subjective sense of the world that constitutes the essence of conscious life—is still nothing more than a prop for fiction writers.
Hoopla over thought-controlled prostheses, moreover, obscures the lack of knowledge of the underlying mechanisms of neural functioning needed to feed information into the brain to re-create a real-life cyberpunk experience. “We know very little about brain circuits for higher cognition,” says Richard A. Andersen, a neuroscientist at Caltech.
What, then, might realistically be achieved by interactions between brains and machines? Do the advances from the first EEG experiment to brain-controlled arms and cursors suggest an inevitable, deterministic progression, if not toward a Kurzweilian singularity, then perhaps toward the possibility of inputting at least some high-level cognitive information into the brain? Could we perhaps download War and Peace or, with a nod to The Matrix, a manual of how to fly a helicopter? How about inscribing the sentence “See Spot run” into the memory of someone who is unconscious of the transfer? How about just the word “see”?
These questions are not entirely academic, although some wags might muse that it would be easier just to buy a pair of reading glasses and do things the old-fashioned way. Even if a pipeline to the cortex remains forever a figment of science fiction, an understanding of how photons, sound waves, scent molecules and pressure on the skin get translated into lasting memories will be more than mere cyberpunk entertainment. A neural prosthesis built from knowledge of these underlying processes could help stroke victims or Alzheimer’s patients form new memories.
Primitive means of jacking in already reside inside the skulls of thousands of people. Deaf or profoundly hearing-impaired individuals carry cochlear implants that stimulate the auditory nerve with sounds picked up by a microphone—a device that neuroscientist Michael S. Gazzaniga of the University of California, Santa Barbara, has characterized as the first successful neuroprosthesis in humans. Arrays of electrodes that serve as artificial retinas are in the laboratory. If they work, they might be tweaked to give humans night vision.
The more ambitious goal of linking Amazon.com directly to the hippocampus, a neural structure involved with forming memories, requires technology that has yet to be invented. The bill of particulars would include ways of establishing reliable connections between neurons and the extracranial world—and a means to translate a digital version of War and Peace into the language that neurons use to communicate with one another. An inkling of how this might be done can be sought by examining leading work on brain-machine interfaces.
Your Brain on Text
Jacking text into the brain requires consideration of whether to insert electrodes directly into tissue, an impediment that might make neural implants impractical for anyone but the disabled. As has been known for nearly a century, the brain’s electrical activity can be detected without cracking bone. What looks like a swimming cap studded with electrodes can transmit signals from a paralyzed patient, thereby enabling typing of letters on a screen or actual surfing of the Web. Niels Birbaumer of the University of Tübingen in Germany, a leading developer of the technology, asserts that trial-and-error stimulation of the cortex using a magnetic signal from outside the skull, along with the electrode cap to record which neurons are activated, might be able to locate the words “see” or “run.” Once mapped, these areas could be fired up again to evoke those memories—at least in theory.
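To make the typing step concrete, consider a toy sketch in Python (an illustration added here, not a description of Birbaumer’s actual system): assume some classifier has already reduced each stretch of EEG into a simple yes-or-no signal; a handful of such signals can then whittle the alphabet down to a single letter.

```python
import string

def select_letter(decisions) -> str:
    """Narrow the 26 letters down to one, halving the candidate set on each yes/no signal."""
    letters = list(string.ascii_uppercase)
    for keep_first_half in decisions:
        half = (len(letters) + 1) // 2
        # Keep the highlighted half on a "yes" signal, the other half on a "no".
        letters = letters[:half] if keep_first_half else letters[half:]
        if len(letters) == 1:
            break
    return letters[0]

# Simulated run: five yes/no brain signals single out one of 26 letters.
print(select_letter([True, False, True, True, False]))  # prints "I"
```

Five binary decisions suffice because two to the fifth power is 32, more than the 26 letters of the alphabet; the hard part, of course, is the classifier this sketch simply assumes.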
Some neurotechnologists think that if particular words reside in specific spots in the brain (which is debatable), finding those spots would probably require greater precision than is afforded by a wired swim cap. One of the ongoing experiments with invasive implants could possibly lead to the needed fine-level targeting. Philip R. Kennedy of Neural Signals and his colleagues designed a device that records the output of neurons. The hookup lets a stroke victim send a signal, through thought alone, to a computer that interprets it as, say, a vowel, which can then be vocalized by a speech synthesizer, a step toward forming whole words. This type of brain-machine interface might also eventually be used for activating individual neurons.
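The decoding step can be pictured, in grossly simplified form, as matching a pattern of firing rates against stored templates. The firing rates, templates and vowels in the sketch below are entirely hypothetical; the real system relies on far richer signals and far more sophisticated methods.

```python
import numpy as np

# Hypothetical template firing rates (in Hz) for three recorded units,
# averaged over earlier trials in which the user attempted each vowel.
vowel_templates = {
    "a": np.array([12.0, 30.0, 8.0]),
    "i": np.array([25.0, 10.0, 15.0]),
    "u": np.array([8.0, 14.0, 28.0]),
}

def decode_vowel(firing_rates):
    """Return the vowel whose stored template lies closest to the observed rates."""
    return min(vowel_templates,
               key=lambda v: float(np.linalg.norm(firing_rates - vowel_templates[v])))

print(decode_vowel(np.array([24.0, 11.0, 16.0])))  # closest to the "i" template
```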
Still more precise hookups might be furnished by nanoscale fibers, measuring 100 nanometers or less in diameter, which could easily tap into single neurons because of their dimensions and their electrical and mechanical properties. Jun Li of Kansas State University and his colleagues have crafted a brushlike structure in which nanofiber bristles serve as electrodes for stimulating or receiving neural signals. Li foresees it as a way to stimulate neurons to allay Parkinson’s disease or depression, to control a prosthetic arm or even to flex astronauts’ muscles during long spaceflights to prevent the inevitable muscle wasting that occurs in zero gravity.
Learning the Language
Fulfilling the fantasy of inputting a calculus text—or even plugging in Traveler’s French before going on vacation—would require far deeper insight into the brain signals that encode language and other neural representations.
Unraveling the neural code is one of the most imposing challenges in neuroscience—and, to misappropriate Freud, would likely pave a royal road to an understanding of consciousness. Theorists have advanced many differing ideas to explain how the billions of neurons and trillions of synapses that connect them can ping meaningful messages to one another. The oldest is that the code corresponds to the rate of firing of the voltage spikes generated by a neuron.
Whereas the rate code may suffice for some stimuli, it might not be enough for booting a Marcel Proust or a Richard Feynman, supplying a mental screen capture of a madeleine cake or the conceptual abstraction of a textbook of differential equations. More recent work has focused on the precise timing of the intervals between each spike (temporal codes) and the constantly changing patterns of how neurons fire together (population codes).
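The distinction can be made concrete with a simulated spike train. The numbers in the sketch below are arbitrary; the point is only that the same spikes yield one summary under a rate code and quite another under a temporal code.

```python
import numpy as np

rng = np.random.default_rng(0)

# A simulated spike train: 25 spike times scattered over a one-second window.
spike_times = np.sort(rng.uniform(0.0, 1.0, size=25))

# Rate code: only the number of spikes per unit time carries the message.
firing_rate = spike_times.size / 1.0            # spikes per second

# Temporal code: the precise intervals between successive spikes matter too.
# (A population code would add a third dimension: which neurons fire together.)
inter_spike_intervals = np.diff(spike_times)    # seconds between spikes

print(f"rate code: {firing_rate:.0f} Hz")
print("first inter-spike intervals:", np.round(inter_spike_intervals[:5], 3))
```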
Some help toward downloading to the brain might come from a decade-long endeavor to build an artificial hippocampus for people with memory deficits, an effort that may have the corollary benefit of giving researchers insights into the coding process. A collaboration between the University of Southern California and Wake Forest University has worked to fashion a replacement body part for this memory-forming brain structure. The hippocampus, seated deep within the brain’s temporal lobe, sustains damage in stroke or Alzheimer’s. An electronic bypass of a damaged hippocampus could restore the ability to create new memories. The project, funded by the National Science Foundation and the Defense Advanced Research Projects Agency, might eventually go further, enhancing normal memory or helping to deduce the particular codes needed for high-level cognition.
The two groups—led by Theodore W. Berger at U.S.C. and Samuel Deadwyler at Wake Forest—are preparing a technical paper showing that an artificial hippocampus took over from the biological organ the task of consolidating a rat’s memory of pressing a lever to receive a drop of water. Normally the hippocampus emits signals that are relayed to cortical areas responsible for storing the long-term memory of an experience. For the experiment, a chemical temporarily incapacitated the hippocampus. When the rat pressed the correct bar, electrical inputs from sensory and other areas of the cortex were channeled through a microchip, which, the scientists say, dispatched the same signals the hippocampus would have sent. A demonstration that an artificial device mimicked hippocampal output would mark a step toward deducing the underlying code that could be used to create a memory in the motor cortex—and perhaps one day to unravel ciphers for even higher-level behaviors.
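In outline, the chip’s task is an input-output problem: learn, from recordings made while the hippocampus was intact, what output follows a given pattern of cortical input, then supply that output itself. The sketch below is a deliberately crude stand-in, a plain linear least-squares fit on made-up numbers; the researchers’ actual devices use far more elaborate nonlinear, multi-input, multi-output models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data recorded while the hippocampus was still working:
# rows are trials, columns are spike counts on input (cortical) and output channels.
cortical_input = rng.poisson(5.0, size=(200, 16)).astype(float)
hidden_mapping = rng.normal(0.0, 0.3, size=(16, 8))
hippocampal_output = cortical_input @ hidden_mapping + rng.normal(0.0, 0.5, size=(200, 8))

# Fit the input-to-output mapping by ordinary least squares.
weights, *_ = np.linalg.lstsq(cortical_input, hippocampal_output, rcond=None)

# With the real structure offline, the chip would substitute its prediction
# of the missing output for each fresh pattern of cortical input.
new_input = rng.poisson(5.0, size=(1, 16)).astype(float)
predicted_output = new_input @ weights
print(np.round(predicted_output, 2))
```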
If the codes for the sentence “See Spot run”—or perhaps an entire technical manual—could be ascertained, it might, in theory, be possible to input them directly to an electrode array in the hippocampus (or cortical areas), evoking the scene in The Matrix in which instructions for flying a helicopter are downloaded by cell phone. Artificial hippocampus research postulates a scenario only slightly more prosaic. “The kinds of examples [the U.S. Department of Defense] likes to typically use are coded information for flying an F-15,” says Berger.
The seeming simplicity of the model of neural input envisaged by artificial hippocampus-related studies may raise more questions than it answers. Would such an implant overwrite existing memories? Would the code for the sentence “See Spot run” be the same for me as it is for you or, for that matter, a native Kurdish speaker? Would the hippocampal codes merge cleanly with other circuitry that provides the appropriate context, a semantic framework, for the sentence? Would “See Spot run” be misinterpreted as a laundry mishap instead of a trotting dog?
Some neuroscientists think the language of the brain may not be deciphered until understanding moves beyond the reading of mere voltage spikes. “Just getting a lot of signals and trying to understand what these signals mean and correlating them with particular behavior is not going to solve it,” notes Henry Markram, director of neuroscience and technology at the Swiss Federal Institute of Technology in Lausanne. A given input into a neuron or groups of neurons can produce a particular output—conversion of sensory inputs to long-term memory by the hippocampus, for instance—through many different pathways. “As long as there are lots of different ways to do it, you’re not even close,” he says.
The Blue Brain Project, which Markram heads, is an attempt that began in 2005 to use supercomputer-based simulations to reverse-engineer the brain at the molecular and cellular levels—modeling first the simpler rat organ and then the human version to unravel the underlying function of neural processes. The latter task awaits a computer that boasts a more than 1,000-fold improvement over the processing power of current supercomputers. The actual code, when it does emerge, may be structured very differently from what appears in today’s textbooks. “I think there will be a conceptual breakthrough that will have significant implications for how we think of reality,” Markram says. “It will be quite a profound thing. That’s probably why it’s such an intractable problem.”
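A single simulated cell conveys the flavor, if none of the scale, of such modeling. The leaky integrate-and-fire neuron below is a textbook toy, orders of magnitude simpler than the detailed compartmental models the project assembles by the thousands, and every parameter value is arbitrary.

```python
# Toy leaky integrate-and-fire neuron; parameters chosen only so the cell fires a few times.
dt, t_max = 0.1, 100.0                                        # time step and duration, ms
tau, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -65.0    # ms, mV, mV, mV
drive = 20.0                                                  # constant input drive, mV

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    # The membrane potential leaks toward rest while the input pushes it upward.
    v += dt / tau * (v_rest - v + drive)
    if v >= v_thresh:                       # threshold crossing: record a spike, reset
        spike_times.append(round(step * dt, 1))
        v = v_reset

print(f"{len(spike_times)} spikes in {t_max:.0f} ms; first at {spike_times[0]} ms")
```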
The challenge involved in figuring out how to move information into the brain suggests a practical foreseeable limit for how far neurotechnology might be advanced. The task of forming the multitude of connections that make a memory is vastly different from magnetizing a set of bits on a hard disk. “Complex information like the contents of a book would require the interactions of a very large number of brain cells over a very large area of the nervous system,” observes neuroscientist John P. Donoghue of Brown University. “Therefore, you couldn’t address all of them, getting them to store in their connections the correct kind of information. So I would say based on current knowledge, it’s not possible.”
Writing to the brain may remain a dream lost in cyberspace. But the seeming impossibility does not make Donoghue less sanguine about ultimate expectations for feeding information the other way and developing brain-controlled prostheses for the severely disabled. He has been a leader in studies to implant an array of multiple electrodes into the brain that can furnish a direct line from the cortex to a prosthetic arm or even a wheelchair.
Donoghue predicts that in the next five years brain-machine interfaces will let a paralyzed person pick up a cup and take a drink of water and that, in some distant future, these systems might be further refined so that a person with an upper spinal cord injury might accomplish the unthinkable, perhaps even playing a game of basketball with prosthetics that would make a reality of The Six Million Dollar Man, the 1970s television series.
Even without an information pipeline into the brain, disabled patients and basic researchers might still reap the benefits of lesser substitutes. Gert Pfurtscheller of the Graz University of Technology in Austria and his colleagues reported last year on a patient with a spinal cord injury who was able, merely by thinking, to traverse a virtual environment, moving from one end of a simulated street to the other.
Duke University’s Miguel A. L. Nicolelis, another pioneer in brain-machine interfaces, has begun to explore how monkeys connected to brain-controlled prosthetic devices begin to develop a kinesthetic awareness, a sense of movement and touch, that is completely separate from sensory inputs into their biological bodies. “There’s some physiological evidence that during the experiment they feel more connected to the robots than to their own bodies,” he says.
The most important consequences of these investigations may be something other than neural implants and robotic arms. An understanding of central nervous system development acquired by the Blue Brain Project or another simulation may let educators understand the best ways to teach children and determine at what point a given pedagogical technique should be applied. “You can build an educational development program that is engineered to, in the shortest possible time, allow you to acquire certain capabilities,” Markram says. If he is right, research on neural implants and brain simulations will produce more meaningful practical benefits than dreams of the brain as a flash drive drawn from 20th-century science-fiction literature.
Note: This article was originally published with the title, "Jacking Into the Brain".