A Brain Made of Memristors

In 1997, then world chess champion Garry Kasparov faced off against the IBM supercomputer Deep Blue. Kasparov had defeated the computer five times before, and he was ready to show the world once again that intelligent thought was still the sole province of human beings. But this match was different. After one win each and three drawn games, Deep Blue crushed Kasparov in the sixth game. The Artificial Intelligence (AI) community declared victory: finally, man’s champion had been outmatched by the intelligence of a machine.

But the victory was declared prematurely. Deep Blue defeated Kasparov, but it was not intelligent; it simply calculated combinations of chess moves very efficiently, a “brute force” strategy. IBM retired the Deep Blue project after the final game against Kasparov, and the promise of a general-purpose machine with human-like intelligence has yet to be realized.

The reason is simple: building intelligence comparable to that of a human brain in silicon requires a deep understanding of how the interactions of billions of neurons and synapses give rise to human intelligence, as well as the ability to replicate those interactions in sophisticated, powerful, and expensive computer hardware. Researchers in computer science and neuroscience have been steadily working to uncover the core design principles underlying intelligent behavior, and inventing key technologies needed to build machines that emulate it. Now, with a recent discovery at Hewlett-Packard Labs, the field is poised to make a massive leap forward by being able to finally build large, brain-like systems running on inexpensive and widely available hardware.

Until recently, supercomputers have been the go-to machines for simulating intelligence. In 2005, Henry Markram announced that his team of neuroscientists and computer scientists at the Ecole Polytechnique Fédérale de Lausanne, Switzerland, would use an IBM supercomputer to simulate one square centimeter of cerebral cortex. In November 2009, IBM’s Dharmendra Modha claimed that his group had used a similar machine to simulate a “cat-sized brain”. But even though these supercomputers are fast enough to accurately simulate aspects of large neural systems, the result is not automatically intelligent. For that, we must build autonomous systems capable of learning intelligent behaviors, an achievement of the entire ensemble of simulated neurons working together, and a task that biological organisms have evolved to master. Most basic research has instead focused on smaller, more manageable problems, such as characterizing the microscopic structure of synapses, the fundamental communication and memory elements of the brain, or using mathematical models to capture the coarse dynamics of the large populations of neurons that make up the visual areas devoted to object recognition.

This research has already yielded significant real-world applications. High-level understanding of human cognition is influencing everything from elementary school education to training procedures for medical imaging technicians. And artificial intelligence programs of all kinds are already a critical part of daily life: simply using email, for example, would be unimaginable without the AI filters dutifully blocking most spam day after day.

Yet these are still only partial attempts at building truly general-purpose, intelligent systems. AI spam filters or chess players are highly specialized solutions to restricted, clearly defined problems. Biological intelligence, in contrast, uses general-purpose “wetware” to solve many different tasks with remarkable flexibility. A hungry mouse, for example, internally generates a “hunger drive” that triggers exploratory behavior. The mouse may follow familiar, memorized routes that it has learned are safe, but at the same time it must integrate signals from different senses as it encounters various objects in the environment. The mouse can recognize an object, such as a mousetrap, and avoid it, even though it has never seen that object from that particular angle before. Upon reaching and consuming food, the mouse quickly disengages from its current plan and switches to its next priority. Even this apparently simple behavior, controlled by a relatively small brain, involves the activity of networks of millions of neurons and billions of synapses, distributed across many brain areas, working together.

Far from being chaotic, this structure comprises multiple levels of organization, from molecules all the way up to assemblies of whole brain regions. These many kinds of structure demand analysis at many levels of depth, and suggest that it will not be possible to build a functional neuromorphic entity without replicating its complex behavior in simulations.

While the biological brain has evolved remarkably compact, low-power circuitry, to date neuroscientists have had to simulate these massive dynamical systems on conventional computers. The recent rapid improvement of computing technology has made this effort easier and more affordable, even for low-budget research labs. The fastest computers in the world are already large enough to simulate biological-scale neural systems, as Modha’s and Markram’s work demonstrates. There is still a fundamental problem with this approach, however: conventional computers do not work at all like biological brains. Data storage in a computer is physically separated from where the data is processed, whereas every synapse in the brain is both an element of computation and an element of memory. Such wetware can be emulated on a digital computer, of course, but at a massive penalty in efficiency. For every byte of computation, a conventional computer must fetch a byte from memory, send it across a communication bus to the processor, move it into a register, perform the computation, and then reverse the whole process to store the result back to memory. A biological system, in contrast, performs computation in the same location as the memory and need not waste energy shuttling data around. This means a modern supercomputer big enough to simulate a human brain at close to real time would need a dedicated power plant, while an actual human brain runs on about as much power as a standard light bulb.
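
To get a feel for the size of that gap, here is a deliberately crude back-of-envelope sketch. Every constant in it is an assumed order of magnitude, not a measured figure:

```python
# Back-of-envelope sketch of the von Neumann penalty described above.
# All constants are assumed orders of magnitude, not measured figures.
NEURONS = 1e11               # neurons in a human brain (approx.)
SYNAPSES_PER_NEURON = 1e4    # average fan-in per neuron
MEAN_RATE_HZ = 1.0           # assumed average firing rate (spikes/s)

events_per_second = NEURONS * SYNAPSES_PER_NEURON * MEAN_RATE_HZ

# Assume each simulated synaptic event costs one round trip to
# off-chip memory at roughly a nanojoule (order of magnitude).
JOULES_PER_MEMORY_ROUND_TRIP = 1e-9

watts = events_per_second * JOULES_PER_MEMORY_ROUND_TRIP
print(f"~{watts / 1e6:.0f} MW just to shuttle synaptic state around,")
print("versus roughly 20 W for the biological original.")
```

Under these assumptions the data shuttling alone costs on the order of a megawatt, about fifty thousand light bulbs’ worth of power for a single simulated brain.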

Given the incredible inefficiency of implementing biological computation on conventional silicon hardware, progress has been understandably slow. There is still much debate about how basic aspects of biological brains work and how they might be recreated in hardware, so it is difficult for research institutions to justify allocating the kind of resources necessary to attempt construction of biological-scale artificial neural systems. Within the United States, several groups are currently working to implement high-density, bio-inspired chips with practical applications. Kwabena Boahen at Stanford University is developing a silicon chip that can simulate the dynamics and learning of several hundred thousand neurons and a few billion synapses. One goal of this research is to build artificial retinas to be used as medical implants for the blind. The fourth-generation Johns Hopkins University system (IFAT 4G), designed by Ralph Etienne-Cummings, will consist of over 60,000 neurons with 120 million synaptic connections. An earlier version of the chip has been used to implement a visual cortex model for object recognition.

The European Union is also strategically investing in neuromorphic chips as a future key technology. The Fast Analog Computing with Emergent Transient States (FACETS) project is a large EU initiative in which more than a hundred computer scientists, engineers, and neuroscientists are building a novel chip that exploits concepts observed experimentally in biological nervous systems. This non-von Neumann hardware implements state-of-the-art knowledge gathered by neuroscience, including plasticity mechanisms and a complex neuron model with up to 16,000 synaptic inputs per neuron. The FACETS system, assembling up to 200,000 neurons and 50,000,000 synapses on a single wafer, is not yet designed for a particular application, but it is the first of its kind to provide scientists with a research infrastructure for experimenting with large-scale artificial neural systems at speeds 10,000 to 100,000 times faster than real time. This will allow FACETS researchers to simulate the lifespan of a system as big as a mouse brain in a few seconds, gaining tremendous insight into the computing principles of the brain, improving our understanding of mental disorders, and helping to develop more targeted drugs.

In May 2008, the US Defense Advanced Research Projects Agency (DARPA), with a track record of promoting high-risk, high-reward projects (such as the precursor of the Internet), jump-started the process via the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) initiative. The goal of this research program is to create electronic neuromorphic machine technology that is scalable to biological levels. SyNAPSE is a complex, multi-faceted project, but it traces its roots to two fundamental problems. First, classical AI algorithms generally perform poorly in the complex, real-world environments where biological agents thrive. Second, traditional microprocessors are extremely inefficient at executing highly distributed, data-intensive algorithms, whereas biological computation is highly distributed and deeply data-intensive. SyNAPSE seeks to develop a new generation of nanotechnology for the efficient implementation of algorithms that more closely mimic biological intelligence. DARPA has awarded funds to three prime contractors: HP, HRL, and IBM.

Before the launch of the SyNAPSE project, HP made a key advance towards the creation of compact, low-power hardware that could support biological computation: the discovery of the memristor. The concept of the memristor was not new; it had been predicted by a symmetry argument by Leon Chua in 1971. Chua noticed that the three passive circuit elements, the resistor, inductor, and capacitor, ought to be part of a family of four. This fourth device, which Chua called the memristor, would behave like a resistor whose conductance changes as a function of its internal state and the voltage applied. In other words, it would behave like a memory.
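
Chua’s symmetry argument can be written down compactly. The four fundamental circuit variables are charge q, flux φ, voltage v, and current i. Two of their pairings are definitions (i = dq/dt and v = dφ/dt), three are the classical elements, and the memristor supplies the one missing link:

```latex
\begin{aligned}
  \text{resistor:}  \quad & dv       = R\,di \\
  \text{capacitor:} \quad & dq       = C\,dv \\
  \text{inductor:}  \quad & d\varphi = L\,di \\
  \text{memristor:} \quad & d\varphi = M\,dq
\end{aligned}
\qquad\Longrightarrow\qquad
v(t) = M\bigl(q(t)\bigr)\,i(t)
```

Because q(t) is just the time integral of the current, the memristance M(q(t)) depends on the entire history of current through the device, and that history dependence is precisely what makes it a memory.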

Chua’s work was by and large ignored, though, until Greg Snider at HP Labs realized that a strange nanoscale device he was working on exhibited the behavior Chua had predicted. This behavior, called a pinched hysteresis loop, had shown up periodically in the nanotechnology literature going back many years. Snider, however, was the first to connect the data to Chua’s theory. The discovery was crucial for the future of neuromorphic technology, because memristors are the first memory technology with power efficiency and density high enough to rival biological computation.
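
A pinched hysteresis loop is easy to reproduce in simulation. Below is a minimal sketch of the linear ion-drift memristor model published by the HP group (Strukov et al., Nature, 2008); the parameter values are illustrative assumptions chosen to make the loop visible, not measurements from an actual device:

```python
# Minimal sketch of the linear ion-drift memristor model
# (after Strukov et al., Nature, 2008). Parameters are illustrative.
import numpy as np
import matplotlib.pyplot as plt

R_ON, R_OFF = 100.0, 2000.0   # bounding resistances (ohms)
K = 1.0e4                     # drift constant mu*R_ON/D^2 (1/(A*s))
dt = 1.0e-5                   # integration step (s)

t = np.arange(0.0, 0.2, dt)
v = 1.5 * np.sin(2 * np.pi * 10.0 * t)   # 10 Hz sinusoidal drive (V)
w = 0.1                                  # normalized dopant state in [0, 1]
i = np.empty_like(v)

for n, vn in enumerate(v):
    R = R_ON * w + R_OFF * (1.0 - w)           # state-dependent resistance
    i[n] = vn / R                              # Ohm's law at this instant
    w = min(max(w + K * i[n] * dt, 0.0), 1.0)  # dopant front drifts

plt.plot(v, i)             # the current-voltage curve traces a loop
plt.xlabel("voltage (V)")  # "pinched" at the origin, because the
plt.ylabel("current (A)")  # current is zero whenever the voltage is
plt.show()
```

The loop is “pinched” because the device is still a resistor at every instant: whenever the voltage is zero, the current is zero too, but the slope of the curve depends on where the internal state has been driven.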

The new HP memristor-based neuromorphic chip is a critical step because it brings data close to computation, much as biological systems do. The architecture is closer to a conventional massively-multicore processor than to the neuromorphic processors developed by other groups, but with a very high density memristive memory layered directly on top. Each core has direct access to its own large bank of memory, which dramatically cuts wire length and thus power consumption.

This architecture is possible because memristors are passive components, compatible with conventional manufacturing processes, and extremely small. Passivity is what enables the minuscule power consumption: memristors store information through physical changes to the device that require no power to maintain, so power is needed only when a memristor must be updated to a new value. Memory with this property is called “non-volatile”. Flash memory circuits are also non-volatile, but they are vastly larger and use significantly more power. Because memristive memory is compatible with standard manufacturing processes, it can be layered directly on top of a conventional processor. This yields another massive drop in power consumption, because data need not be shuttled over long distances. Finally, because a memristive memory cell consists of nothing more than a single memristor sandwiched between two nanowires, memristive memories can be manufactured at extremely high density. The SyNAPSE project aims to support the manufacture of billions of synapses per square centimeter, and trillions or more should become possible as the technology improves and more layers of memristors can be deposited.
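
The density claim follows from simple geometry: in a crossbar, one memristor sits at every crossing of two perpendicular nanowires. A quick sketch, using a hypothetical 50 nm wire pitch (an assumption for illustration, not an HP specification):

```python
# Rough density estimate for a memristive crossbar: one memristor
# sits at each crossing of two perpendicular nanowires. The 50 nm
# pitch is a hypothetical value for illustration, not an HP spec.
PITCH_NM = 50.0          # assumed center-to-center nanowire pitch
NM_PER_CM = 1e7          # 1 cm = 10^7 nm

wires_per_cm = NM_PER_CM / PITCH_NM   # parallel wires per centimeter
cells_per_cm2 = wires_per_cm ** 2     # crossings = memory cells

print(f"{cells_per_cm2:.1e} synapses per cm^2 per layer")
# Prints 4.0e+10: tens of billions per layer, so stacking a few
# dozen layers would push the total into the trillions.
```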

Taken together, these factors wipe away one of the fundamental limiting factors of prior generations of neuromorphic hardware: the lack of scalability. With a conventional hardware manufacturing process, the amount of surface area dedicated to simulating a neuron is roughly the same as the surface area needed to simulate a synapse. In biological systems, however, synapses outnumber neurons by an average of ten thousand to one, with ratios ranging from a few to one all the way up to a hundred thousand to one. Memristors are so small compared with conventional components that the surface area required to simulate a synapse drops to a small fraction of the area needed to simulate a neuron, a ratio much closer to biology. This cuts power by dramatically reducing switching and signaling overhead, enabling much higher scalability and computing density. A memristor-based device capable of simulating about the same number of neurons and synapses as the brain of a large mammal would take up the volume of a shoebox and consume about 1 kW, roughly the power draw of a common espresso machine.

Even though the HP technology represents a huge advance in hardware compatible with neural computing, building intelligent software to run on these devices remains a major challenge for the SyNAPSE program. HP is working with a team of researchers in the Department of Cognitive and Neural Systems at Boston University (BU) to solve it. The BU team is uniquely positioned to offer such a solution because of its multidisciplinary approach to building intelligent machines, a project that requires expertise from fields such as neuroscience, mathematics, engineering, psychology, and robotics. The Center of Excellence for Learning in Education, Science, and Technology (CELEST), a National Science Foundation (NSF) Science of Learning Center founded in 2004, of which the BU team is an integral part, is attempting just that. CELEST’s research develops a new paradigm for simultaneously studying and understanding brains and behavior, and for applying insights from computational modeling to the construction of intelligent machines.

The innovative memristor-based device, manufactured at HP and equipped with neural models designed, developed, and implemented at BU, will dramatically lower the barrier to studying the brain and simulating large-scale brain-inspired computing systems, radically accelerating progress in basic neuroscience research and its spin-off technological applications. The new technology is also fundamentally inexpensive and small, so small that the equivalent of a fairly large network of cortical cells could be deployed in a cell phone. A major step will be to simulate the behavior of a fairly complex brain powering an artificial organism, or animat, in a virtual environment. Boston University and HP are currently designing the perceptual, navigation, and emotional systems that will emulate some basic rodent behavior on the hardware. The simulated nervous system, initially implemented on racks of conventional computers and then transferred to a number of smaller chips, will allow the animat to learn, via plastic changes in the synaptic connections among its neurons, how to interact intelligently with its environment: searching for food, following learned paths, avoiding punishment and predators, and later competing with other animats for resources.
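
The phrase “plastic changes in synaptic connections” has a simple computational core. The sketch below shows the classic Hebbian update rule, a deliberately minimal stand-in for the richer learning rules the BU models actually use (which the article does not detail):

```python
# A deliberately minimal sketch of synaptic plasticity: the classic
# Hebbian rule ("neurons that fire together wire together"). A
# stand-in for the richer learning rules in the BU models.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(8, 8))   # synaptic weight matrix
LEARNING_RATE = 0.01

def hebbian_step(W, pre, post):
    """Strengthen each synapse in proportion to the product of its
    pre- and post-synaptic activity, then keep weights bounded."""
    W = W + LEARNING_RATE * np.outer(post, pre)
    return np.clip(W, -1.0, 1.0)

pre = rng.random(8)             # presynaptic activity pattern
post = W @ pre                  # postsynaptic response
W = hebbian_step(W, pre, post)  # the memory now lives in the weights
```

In a memristive implementation, each entry of the weight matrix would be the conductance of a single physical memristor, updated in place rather than fetched from a distant memory bank.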

There are still many challenges to overcome before memristive neuromorphic devices transform everyday life. For example, animal intelligence has evolved to cope with neural and synaptic loss. The loss of a transistor is catastrophic to a traditional processor, but biological brains suffer the constant loss of neurons and synapses as they age and show only a “graceful decline.” Memristive synapses are very small and vulnerable to manufacturing error; would a simulated neural system be as tolerant of failing memristive synapses as a biological brain is of equivalent losses?
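
One way to begin answering that question in software is a simple ablation experiment: randomly disable a fraction of the synapses in a toy network and watch how the output degrades. The sketch below is illustrative only and models no specific SyNAPSE hardware:

```python
# Ablation sketch: randomly disable a fraction of "synapses" in a
# toy network and measure how much the output changes. Illustrative
# only; no specific hardware is modeled.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(100, 100)) / 10.0  # toy synaptic weight matrix
x = rng.random(100)                     # a fixed input pattern
y_ref = np.tanh(W @ x)                  # the healthy network's output

for failure_rate in (0.01, 0.05, 0.20):
    survives = rng.random(W.shape) > failure_rate  # surviving synapses
    y = np.tanh((W * survives) @ x)
    err = np.linalg.norm(y - y_ref) / np.linalg.norm(y_ref)
    print(f"{failure_rate:5.0%} of synapses lost -> {err:.1%} output error")
```

In distributed representations like this one, the error grows smoothly with the failure rate rather than collapsing at the first fault, which is the behavior a brain-like chip would need.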

Nevertheless, such advances in neuromorphic technology are leading the way towards human-like intelligent behavior in machines. One can hope that the next time a chess grandmaster sits down to a game with an intelligent computer, the computer may win or it may lose, but its behavior, emotions, and decisions will be indistinguishable from those of its human counterpart. And that will be a true victory for AI.

Massimiliano Versace, PhD

Massimiliano Versace, PhD, is a Senior Research Scientist in the Department of Cognitive and Neural Systems at Boston University, Director of the Neuromorphics Lab, and co-Director of Technology Outreach at the NSF Science of Learning Center CELEST: Center of Excellence for Learning in Education, Science, and Technology. He is a co-PI of the Boston University subcontract with Hewlett-Packard in the DARPA Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project. He earned his PhD in Cognitive and Neural Systems from Boston University in 2007.