A Brain Made of Memristors

In 1997 the then world chess champion Garry Kasparov faced off against the IBM supercomputer Deep Blue. Kasparov had defeated the computer before, and he was ready to show the world once again that intelligent thought was still the sole province of human beings. But this match was different. After three drawn games and one win each, Deep Blue crushed Kasparov in the sixth game. The Artificial Intelligence (AI) community declared victory: finally, man’s champion had been outmatched by the intelligence of a machine.

But the victory was declared prematurely. Deep Blue defeated Kasparov, but it was not intelligent; it simply calculated combinations of chess moves very efficiently in a “brute force” strategy. IBM retired the Deep Blue project after the final game against Kasparov, and the promise of a general-purpose machine with human-like intelligence has yet to be realized.

The reason is simple: building intelligence comparable to that of a human brain in silicon requires a deep understanding of how the interactions of billions of neurons and synapses give rise to human intelligence, as well as the ability to replicate those interactions in sophisticated, powerful, and expensive computer hardware. Researchers in computer science and neuroscience have been steadily working to uncover the core design principles underlying intelligent behavior, and inventing key technologies needed to build machines that emulate it. Now, with a recent discovery at Hewlett-Packard Labs, the field is poised to make a massive leap forward by being able to finally build large, brain-like systems running on inexpensive and widely available hardware.

Until recently, supercomputers have been the go-to machines for simulating intelligence. In 2005 Henry Markram announced that his team of neuroscientists and computer scientists at the Ecole Polytechnique Fédérale de Lausanne, Switzerland, would use an IBM supercomputer to simulate one square centimeter of cerebral cortex. In November 2009 IBM’s Dharmendra Modha claimed that his group used a similar machine to simulate a “cat-sized brain”. But even when these supercomputers are fast enough to accurately simulate aspects of large neural systems, the result is not automatically intelligent. Intelligence requires autonomous systems capable of learning intelligent behaviors, an achievement of the entire ensemble of simulated neurons and a task that biological organisms have evolved to master. Most basic research has therefore focused on smaller, more manageable problems, such as characterizing the microscopic structure of synapses, the fundamental communication and memory elements in the brain, or using mathematical models to capture the coarse dynamics of the large populations of neurons that make up the visual areas devoted to object recognition.

Results of this research have already yielded significant real-world applications. High-level understanding of human cognition is influencing everything from elementary school education to training procedures for medical imaging technicians. And all kinds of artificial intelligence programs are already a critical part of daily life: simply using email, for example, would be unimaginable without the AI filters dutifully blocking most spam day after day.

Yet these are still only partial attempts at building truly general-purpose, intelligent systems. AI spam filters or chess players are highly specialized solutions to restricted, clearly defined problems. Biological intelligence, in contrast, uses general-purpose “wetware” to solve many different tasks with remarkable flexibility. A hungry mouse, for example, internally generates a “hunger drive” that triggers exploratory behavior. The mouse may follow familiar, memorized routes that it has learned are safe, but at the same time it must integrate signals from different senses as it encounters various objects in the environment. The mouse is able to recognize objects, such as a mouse trap, and avoid them even though it has never seen the object from that particular angle before. Upon reaching and consuming food, the mouse is able to quickly disengage from its current plan and switch to its next priority. Even this apparently simple behavior, controlled by a relatively small brain, involves the activity of networks of millions of neurons and billions of synapses, distributed across many brain areas, working together.

Far from being chaotic, this structure consists of multiple levels of organization, from molecules all the way up to assemblies of whole brain regions. These many kinds of structure require analysis at many levels of depth, and they suggest that it will not be possible to build a functional neuromorphic entity without first replicating its complex behavior in simulations.

While the biological brain has evolved remarkably compact, low-power circuitry, to date neuroscientists have had to simulate these massive dynamical systems on conventional computers. The recent rapid improvement of computing technology has made this effort easier and more affordable, even for low-budget research labs. The fastest computers in the world are already large enough to simulate biological-scale neural systems, as Modha and Markram’s work demonstrates. There is still a fundamental problem with this approach, however: conventional computers don’t work at all like biological brains. Data storage in computers is physically separated from where the data is processed, whereas every synapse in the brain is both an element of computation and an element of memory. Such wetware can be emulated on a digital computer, of course, but at a massive penalty in efficiency. For every piece of computation, a conventional computer must fetch a value from memory, send it across a communication bus to the processor, move it into a register, perform the computation, and reverse the process to store the result back in memory. A biological system, in contrast, performs computation in the same location as the memory and need not waste energy shuttling data around. This means a modern supercomputer big enough to simulate a human brain at close to real time would need a dedicated power plant, while an actual human brain runs on about as much energy as a standard light bulb.
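The scale of that penalty can be sketched with back-of-envelope arithmetic. In the Python sketch below, every number (the synapse count, the update rate, and especially the per-operation energy costs) is an order-of-magnitude assumption chosen for illustration, not a measurement of any particular machine:

```python
# Rough, illustrative comparison of "shuttle data to the processor" vs.
# "compute where the data lives". All figures are order-of-magnitude
# assumptions, not measured values for any specific system.

SYNAPSES = 1e14          # approximate synapse count of a human brain
UPDATES_PER_SEC = 10     # assume each synapse is touched ~10 times a second

E_DRAM_FETCH = 1e-9      # assumed energy (J) to move one word from memory
E_COMPUTE = 1e-12        # assumed energy (J) for the arithmetic itself

ops_per_sec = SYNAPSES * UPDATES_PER_SEC

watts_shuttling = ops_per_sec * (E_DRAM_FETCH + E_COMPUTE)  # fetch then compute
watts_in_place = ops_per_sec * E_COMPUTE                    # compute in memory

print(f"with memory shuttling: ~{watts_shuttling / 1e6:.0f} MW")
print(f"compute-in-memory:     ~{watts_in_place / 1e3:.0f} kW")
```

Under these assumptions the data movement, not the computation, dominates by roughly a factor of a thousand, which is the gap the power-plant versus light-bulb comparison above is pointing at (the brain's even lower figure comes from additional tricks, such as sparse, event-driven activity, that this sketch ignores).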

Given the incredible inefficiency of implementing biological computation on conventional silicon hardware, progress has been understandably slow. There is still much debate about how basic aspects of biological brains work and how they might be recreated in hardware, so it’s difficult for research institutions to justify allocating the kind of resources necessary to attempt construction of biological-scale artificial neural systems. Within the United States, several groups are currently working to implement high-density, bio-inspired chips with practical applications. Kwabena Boahen at Stanford University is developing a silicon chip that can be used to simulate the dynamics and learning of several hundred thousand neurons and a few billion synapses. One of the goals of this research is to build artificial retinas to be used as medical implants for the blind. The 4th generation Johns Hopkins University system (IFAT 4G), designed by Ralph Etienne-Cummings, will consist of over 60,000 neurons with 120 million synaptic connections. An earlier version of the chip has been used to implement a visual cortex model for object recognition. The European Union is also strategically investing in neuromorphic chips as a future key technology. The Fast Analog Computing with Emergent Transient States (FACETS) project is a large EU initiative in which more than a hundred computer scientists, engineers, and neuroscientists are realizing a novel chip that exploits the concepts experimentally observed in biological nervous systems. This non-von Neumann hardware implements state-of-the-art knowledge gathered by neuroscience, including plasticity mechanisms and a complex neuron model with up to 16,000 synaptic inputs per neuron.
The FACETS system, assembling up to 200,000 neurons and 50,000,000 synapses on a wafer, is not yet designed for a particular application, but it is the first of its kind to provide scientists with a research infrastructure for experimenting with large-scale artificial neural systems at speeds 10,000 to 100,000 times faster than real time. This will allow FACETS researchers to simulate the lifespan of a system as big as a mouse brain in a few seconds, gaining tremendous insight into the computing principles of the brain, improving our understanding of mental disorders, and helping to develop more targeted drugs.

In May 2008, the US Defense Advanced Research Projects Agency (DARPA), with a track record of promoting high-risk, high-reward projects (such as the precursor of the Internet), jump-started the process via the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) initiative. The goal of this research program is to create electronic neuromorphic machine technology that is scalable to biological levels. SyNAPSE is a complex, multi-faceted project, but it traces its roots to two fundamental problems. First, classical AI algorithms generally perform poorly in the complex, real-world environments where biological agents thrive. Second, traditional microprocessors are extremely inefficient at executing highly distributed, data-intensive algorithms, whereas biological computation is highly distributed and deeply data-intensive. SyNAPSE seeks to develop a new generation of nanotechnology necessary for the efficient implementation of algorithms that more closely mimic biological intelligence. DARPA has awarded funds to three prime contractors: HP, HRL, and IBM. Before the launch of the SyNAPSE project, HP made a key advance towards the creation of compact, low-power hardware that could support biological computation: the discovery of the memristor. The concept of the “memristor” wasn’t new, having been predicted by a symmetry argument by Leon Chua in 1971. Chua noticed that the three passive circuit elements, the resistor, inductor, and capacitor, ought to be part of a family of four. This fourth device, which Chua called the memristor, would behave like a resistor whose conductance changes as a function of its internal state and the voltage applied. In other words, it would behave like a memory.

Chua’s work was by and large ignored, though, until Greg Snider at HP Labs realized that a strange nanoscale device he was working on exhibited the behavior predicted by Chua. This behavior, called a pinched hysteresis loop, had shown up periodically in the nanotechnology literature going back many years. Snider, however, was the first person to make the connection between the data and Chua’s theory. This discovery was crucial for the future of neuromorphic technology because memristors are the first memory technology with power efficiency and density high enough to rival biological computation.
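That signature is easy to reproduce numerically. The sketch below integrates the linear ion-drift memristor model that HP researchers later published to describe their device; the parameter values here are illustrative, and the model is a deliberate simplification of the real device physics:

```python
import math

# Minimal simulation of the linear ion-drift memristor model (parameter
# values are illustrative, not those of any particular device).
R_ON, R_OFF = 100.0, 16000.0   # fully-doped / undoped resistance (ohms)
D = 10e-9                      # device thickness (m)
MU = 1e-14                     # dopant mobility (m^2 V^-1 s^-1)

w = 0.1 * D                    # internal state: width of the doped region
dt = 1e-5
trace = []                     # (voltage, current) samples
for step in range(100000):     # one full period of a 1 Hz drive
    t = step * dt
    v = 2.0 * math.sin(2 * math.pi * t)          # sinusoidal drive voltage
    r = R_ON * (w / D) + R_OFF * (1 - w / D)     # resistance set by state
    i = v / r
    trace.append((v, i))
    w += MU * (R_ON / D) * i * dt                # state drifts with charge
    w = min(max(w, 0.0), D)                      # keep within the device

# At v = 0 the current is 0 whatever the state: the i-v curve is a
# "pinched" hysteresis loop, the signature Chua predicted in 1971.
```

Plotting current against voltage from `trace` yields a loop that collapses to a point at the origin: zero voltage always means zero current, unlike a capacitor or inductor, yet the slope at any moment depends on the history of charge that has flowed through the device.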

The new HP memristor-based neuromorphic chip is a critical step because it brings data close to computation, much as biological systems do. The architecture is closer to a conventional massively-multicore processor than the neuromorphic processors developed by other groups, but with a very high density memristive memory layered directly on top. Each core has direct access to its own large bank of memory, which dramatically cuts wire length and thus, power consumption.

This architecture is possible because memristors are passive components, compatible with conventional manufacturing processes, and extremely small. The passive property is what enables the minuscule power consumption. Memristors store information through physical changes to the device that require no power to maintain, so power is only needed when a memristor must be updated to a new value. This property is described by the term “non-volatile”. Flash memory circuits are also non-volatile, but they are vastly larger and use significantly more power. Because this memory is compatible with standard manufacturing processes, it can be layered directly on top of a conventional processor. This yields another massive drop in power consumption, because data need not be shuttled over long distances. Finally, because a memristive memory cell consists of nothing more than a single memristor sandwiched between two nanowires, memristive memories can be manufactured at extremely high density. The SyNAPSE project aims to support the manufacture of billions of synapses per square centimeter, but trillions or more may be possible in the near future thanks to improvements in the technology and increases in the number of memristor layers that can be deposited.
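That density claim is easy to sanity-check. In a crossbar, one synapse occupies roughly one wire-pitch squared; the 50 nm pitch used below is a hypothetical round number, not a figure from the SyNAPSE program:

```python
# Back-of-envelope crossbar density estimate. The 50 nm nanowire pitch is
# an assumed, illustrative value: one memristor cell per wire crossing.

pitch_nm = 50
cell_area_cm2 = (pitch_nm * 1e-7) ** 2      # 1 nm = 1e-7 cm
synapses_per_cm2 = 1.0 / cell_area_cm2

print(f"~{synapses_per_cm2:.0e} synapses per cm^2 per layer")
# Stacking several memristor layers multiplies this figure accordingly.
```

A single layer at this pitch already lands in the tens of billions of cells per square centimeter, consistent with the billions-to-trillions range quoted above.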

Taken together, these factors wipe away one of the fundamental limiting factors in prior generations of neuromorphic hardware: the lack of scalability. With a conventional hardware manufacturing process, the amount of surface area dedicated to simulating a neuron is roughly the same as the surface area needed to simulate a synapse. In biological systems, the ratio between the number of synapses and neurons averages ten thousand to one, and can range from a few to one all the way up to a hundred thousand to one. Memristors are sufficiently small compared with conventional components that the surface area required to simulate a synapse drops to a small fraction of the area needed to simulate a neuron, a ratio much closer to biology. This cuts power by dramatically reducing the switching and signaling overhead, enabling much higher scalability and computing density. A memristor-based hardware device capable of simulating about the same number of neurons and synapses as the brain of a large mammal would take up the volume of a shoebox and consume about 1 kW, roughly as much power as a common espresso machine.

Even though the HP technology represents a huge advance in hardware compatible with neural computing, the ability to build intelligent software to run on these devices remains a big challenge for the SyNAPSE program. HP is working with a team of researchers in the Department of Cognitive and Neural Systems at Boston University (BU) to solve this challenge. The BU team is uniquely poised to offer such a solution because of its multidisciplinary approach to building intelligent machines, a project requiring expertise from fields such as neuroscience, mathematics, engineering, psychology, and robotics. The Center of Excellence for Learning in Education, Science, and Technology (CELEST), a National Science Foundation (NSF) Science of Learning Center founded in 2004, of which the BU team is an integral part, is attempting just that. CELEST’s research approach develops a new paradigm: simultaneously studying brains and behavior, and applying insights from computational modeling to building intelligent machines.

The innovative memristor-based device, manufactured at HP and equipped with neural models designed, developed, and implemented at BU, will dramatically lower the barrier to studying the brain and simulating large-scale brain-inspired computing systems, radically accelerating progress in basic neuroscience research and its spin-off technological applications. The new technology is also fundamentally inexpensive and small: so small that the equivalent of a fairly large network of cortical cells could be deployed in a cell phone. A major step will be to simulate the behavior of a fairly complex brain powering an artificial organism in a virtual environment. Boston University and HP are currently designing the perceptual, navigation, and emotional systems that will emulate some basic rodent behavior on hardware. The simulated nervous system, initially implemented on racks of conventional computers and later transferred to a number of smaller chips, will allow the animat to learn, via plastic changes in the synaptic connections among its neurons, how to interact intelligently with its environment: searching for food, following learned paths, avoiding punishment and predators, and later competing with other animats for resources.
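As a flavor of what learning via "plastic changes in synaptic connections" means computationally, here is a deliberately tiny Hebbian-style update in Python. This is a generic textbook-flavored rule, not the actual BU model; the key property it shares with both biology and memristive hardware is locality, since each weight is adjusted using only the activities of the two neurons it connects:

```python
import random

random.seed(0)
N_PRE, N_POST = 8, 4   # a tiny network: 8 input neurons, 4 output neurons
weights = [[random.uniform(0.0, 0.1) for _ in range(N_PRE)]
           for _ in range(N_POST)]
LEARN_RATE = 0.05

def step(pre_activity, weights):
    # Postsynaptic activity: a saturating weighted sum of the inputs.
    post_activity = [
        min(1.0, sum(w * x for w, x in zip(row, pre_activity)))
        for row in weights
    ]
    # Local plasticity: each synapse grows when both sides are active
    # and decays otherwise, using only information available at the synapse.
    for j, row in enumerate(weights):
        for i, x in enumerate(pre_activity):
            row[i] += LEARN_RATE * post_activity[j] * (x - row[i])
    return post_activity

pattern = [1, 1, 0, 0, 1, 0, 0, 0]   # a recurring "sensory" input pattern
for _ in range(200):
    step(pattern, weights)

# Weights onto active inputs converge toward 1; the rest decay toward 0.
```

After repeated exposure the network comes to respond selectively to the familiar pattern; in a memristive implementation each `row[i]` would live in the conductance of a single device, updated in place with no separate memory traffic.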

There are still many challenges to overcome before memristive-based neuromorphic devices transform everyday life. For example, animal intelligence has evolved to cope with neural and synaptic loss. The loss of a transistor is catastrophic to a traditional processor, but biological brains suffer the constant loss of neurons and synapses as they age and show only a “graceful decline.” Memristive synapses are very small and vulnerable to manufacturing error; would a simulated neural system be as tolerant of failing memristive synapses as a biological brain is to equivalent failure?

Nevertheless, such advances in neuromorphic technology are leading the way towards the development of human-like intelligent behavior in machines. One can hope that the next time a chess grandmaster sits down to a game with an intelligent computer, the computer may win or it may lose, but its behavior, emotions, and decisions will be indistinguishable from those of its human counterpart. And that will be a true victory for AI.

  • Al

    Impressive article. Keep them coming.

  • sam

    Go for monkey intelligence for robots that manufacture. Human intelligence requires rights, and any robotic manifestation of such intelligence should be treated like any other human. After all, we will be indistinguishable except for the hardware simulating the consciousness, so human-level intelligence should be created for no other purpose than to give birth to life.

    The ideal setup would be a monkey-level intelligence with analogous biological function (analog) within an AI construct simulated on a digital computer that is intelligent enough to manufacture and do anything we might desire (and by then, ‘we’ might include our human level-or more- intelligent machine brothers–…wink, ghost in the shell). Maids, construction, manufacturing, etc. Autonomous vehicles. Also, since a neural code would be harder to crack, perhaps have them be individual enough to prevent virus infection, yet common enough for communication or interfacing. This would be especially important with self-driven vehicles. We can also arm huge robots that can build a building or a house in about a day (partner with 3D printing).

    But let’s never be fools who put a human-level intelligence to do our bidding like some sort of slave; not only is that unwise for our own self-preservation, it is not morally correct either. Human-level intelligent ‘machines’ will be treated like any human. And in fact, we humans will, by virtue of competition, augment ourselves through such intelligent machines to match the level of intelligence of our purely ‘machine’ counterparts. Let us do this in lockstep, so that we may both evolve side by side. My intuition is that such intelligent machines will need us as we need them. Wetware may be the only place to simulate smell and complex chemical reactions that ‘resonate’ and create a holistic feel for us. If we find a technological substrate to be superior to the biological one, which is highly doubtful, since nature probably found the optimal way before us, then we can simply transfer ourselves to the machine substrates and evolve as ‘machines’, evolving our source code to the level of the ‘machine’ counterparts, and hence denying them or us any competitive bent. We will simply be one and the same, evolving software. And there will be no reason to ‘attack’. By virtue of what is better, either one of us will jump to the biological, or it will be the other way around, and we will then reach for what is best in that substrate. No need to be competitors. We are the same ‘soul’: information and software simulated on hardware. We will simply seek to better ourselves, I presume. And if one is highly unlike the other, there is nothing that cannot be changed through modification so we can better relate and merge. In other words, I do not see the intelligent machines as a threat, but a complement to our own human condition, while we find the optimal of all conditions, on our path to excellence and evolutionary prime.
    Once we find the optimal ‘way’ then we can all inhabit this technological ‘ecosystem’ and there will be no more difference between ‘them’ and ‘us’ than there is between families or races. In fact, by then, probably much less difference. Genetic engineering might have allowed for personality and different traits for interesting life, but we will all be similarly intelligent by then.

    So, I guess just work on monkey-level or subconscious-level intelligence that is able to operate machinery just as efficiently as any man when working in tandem. Perhaps then manual labour could become a thing of the past. Again, this is probably 50 years away, but just some thoughts to set your foundation. 😉

    Doesn’t mean we shouldn’t reach for all types of intelligence at the beginning, but once things start maturing, perhaps we need to regulate the balance a bit more. We wouldn’t want a human-level machine on the internet, for the same reason we wouldn’t want a human with the nuclear code keys to implement immediate launch or with tyrant-level status to do whatever over any individual. You just don’t want to tempt any sentient life-form with that kind of power; hence, the solution. You make it intelligent when it comes to manual labour, but not so intelligent that it disrespects its level and is tempted with power, as a function of its greater intelligence. Think of it like a brave new world, but no human-level intelligence is genetically designed on purpose to be inferior, that is horribly immoral; instead you nanotech-design, AI-design, perhaps through a genetic algorithm, a primate-level intelligence to take care of the manual labour. This will be appropriate for that level of intelligence. While greater intelligence is treated with more or the same esteem as human intelligence. It’s about maximizing consciousness and letting sentient life forms do their best role and what would interest them most.

  • sach

    I think the real leap for mankind will be when we can access others minds through a mind machine interface, like a neural interface. I think that will allow all of our collective intelligence to rise. Machines that mimic biological processing may be the easiest interface in the meeting of the minds.

  • someguyinadumpster

    gets repetitive after 2:20. Pretty nice beat up to that point

  • someguyinadumpster

    but on a more serious note (lol), I think we need to keep an eye on technology in general from this point on. I’m not referring so much to memristors, it’s very premature. But more about the technological ecosystem in general. As devices have more and more functionality it is not impossible to imagine a future where instead of us controlling a technological ecosystem, we end up living in it. And given its complexity we may not be fully equipped to understand it. Hence, overwhelming data overload, etc. After enhancement of the human form I don’t see this as a problem. But prior to it, I think we need to keep, say, cameras and the internet at bay. I mean there could be like a ‘well’ period where there is no privacy as an emergent property of this whole tech ecosystem evolution and that would be very adverse. So it’s best that, say, we start banning things like Facebook… no centralized holder of data. We make it illegal, especially something so comprehensive as Facebook. Just imagine a world where there are intelligent machines that can access all the innumerable data troves out there. I don’t know, I think there should be a law banning data collection in such a sweeping fashion as Facebook does. I know it’s detrimental to those who have much to offer and want to promote themselves, but I think that is small benefit compared to the cost of something going awry due to social media. Just make something like Facebook illegal. Everything else is beneficial.

  • someguyinadumpster

    But think about it this way…lol..

    the best way to prevent adverse effects is to develop a premature AI that can collect rather obnoxious details of your life… from the internet… and do it in a much more comprehensive fashion than Facebook, advertisers, or Google could dream about.

    People will then go “oh oh, maybe we made a mistake,” and then recoil. Every force has an equal and opposite reaction. So perhaps a natural evolution is best, because any foreseeable problem will be taken into account as it arises in the foreseeable future. You make it premature so there is an immediate reaction to counter it, so it comes into focus.

  • someguyinadumpster


    I, for one, welcome our source-code-origin genetic algorithms but emergent-property life form overlords in the cybernetic universe.


  • someguyinadumpster

    choose the correct path, or your love of greed will be reflected upon you, and you will die a spiritual death, for your path is empty, unknown to your unsightly form now, and this death will be mirrored in the ‘real’ world. For they are one and the same.

    end of line.

  • someguyinadumpster

    believe, for you are not a causal link in a deterministic universe, but a participant in the cosmic dialogue. You are made in your creator’s image, and you have the power to create and manifest the universe around you… eventually. Still at a nascent stage, you will achieve your nirvana, in time.


    go race of children, you will find your way.

  • someguyinadumpster

    when we talk about consciousness, we are talking about sacredness and reality itself. Let yourself not be misguided by the eventual evolution that is planned before you… only one path lies ahead, and it is always wise to choose the correct path. There is no other.


  • A very useful survey. The article correctly points out that the real key to brain simulation is neuromorphic hardware, in particular, extremely high density “plastic”, multibit, synapses. This is also the key to general intelligence, which so far at least is restricted to real brains, with synapse densities ~ 1 billion per cubic mm. In essence, one must use massive parallelism to tame the curse of dimensionality. Clever algorithms alone (almost by definition) cannot achieve general intelligence, only solutions to specific cases.
    However, the author does not address the central problem, which is both subtle and obvious. These synapses, whether implemented in wetware or hardware, have to be independently adjustable, even though they are typically separated by less than 1 micron. The crucial issue in neuromorphic intelligence is therefore how inevitable (albeit extremely small) “crosstalk” would affect the overall learning process. In particular, one can show that many (perhaps all) powerful unsupervised learning procedures fail completely when crosstalk exceeds biophysically inevitable levels, especially when neurons receive (as they must) large numbers of inputs. I suspect that memristors will suffer from the same problem; even though they are now being built at very high density, the crosstalk problem has not been completely solved, and likely never will be.
    The solution, I believe, is to look very carefully at actual neocortical circuitry, especially in relation to the “crosstalk” issue, since this brain region does appear to have achieved some form of general intelligence – the only form we currently know about.

  • Thom McLean

    Seems like the fault tolerance issue is a subset of the ability to learn. Wouldn’t the loss of some small amount of components be addressable by ensuring that a more than sufficient number of components represented the cognitive function in jeopardy (at least statically)? An alternative is to ensure that the degradation is slower than the ability to retrain that function with other components involved.
    Another challenge that is particularly interesting is the ability of some structures to encode specific knowledge or behavior. For example, autonomic functions of our bodies are always controlled by specific parts of the brain, which have developed the ability to maintain homeostasis without external (conscious) involvement. How can we ensure that specific functions are automatically encoded, structurally, without learning? One might argue that this function is learned as an embryo develops. However, lower life forms, like insects, hatch from larvae and can immediately undertake complex operations like flying. How can the ability to fly and navigate be encoded in an insect’s brain without any experiential basis?

    What I’d really like to see is a listing of the challenges being tackled by different research groups. Is SyNAPSE creating such a list? Does anyone have a website that captures the current thoughts on these challenges?


  • Anonymous

    It hasn’t been legal to use monkey-level intelligence for manufacturing purposes without pay since the mid 1860s or so.

  • anon

    I guess what that person you are addressing was referring to (keeping in mind your joke) was the collective intelligence of a bunch of individual robots doing the dirty work. It’s a cool idea. Just apply the same logic of AI swarm intelligence so it acts collectively much more intelligently than the constituent parts. In fact, the parts depend on it in this particular concept. You have the individuals limited in their capability, and at the same time, you build a fail-safe in an emergent programmable hive-mind behaviour that is completely within our control and controls the neuromorphic-based agents’ actions, not allowing it to take on a life of its own per se, not that it would want to however, because again, it wouldn’t be particularly intelligent. So it’s design by purpose. Higher intellects don’t particularly self-actualize through manual labour, but a monkey-level intelligence might be quite entertained building a house, and so forth. But then again, I don’t know, I guess. This is all pretty much a fruitless exercise and so the spirit of your joke/comment is spot on; who the hell is talking about monkey-level intelligence on an article such as this? In other words, it runs away with the imagination too carelessly to be taken seriously. So yeah, I think this whole comment section is getting kind of carried away with the hypothetical and theoretical. Keep in mind we haven’t even started a new architecture, not to mention initiated electrical engineering courses in non-linear components, so why don’t we start there first? I’m just saying.

    My prediction is that memristors may run into some other problem, which will then necessitate a new hardware improvement or software breakthrough in order to bring about the kind of outcome and eventuality you are all hoping for. But I’ve been proven wrong in the past about these matters regarding exponential curves in information technology, so who knows really at this point. As interesting as this intellectual exercise (the article and comments) may be, applications are only as certain as the market demand and the various manufacturing processes will allow.

    Manufacture the mems in mass first, then we’ll talk.

  • anon

    adding on the idea, for the sake of intellectual m…nevermind:

    could also be like rights by intelligence. If a being becomes self-aware from its ‘monkey’ level of intelligence, then it will recognize this fact and want more than just an existence of manual labour, so at that instant, it has all the rights of any human and its consciousness is transferred to a humanoid body. This would be an error of the manufacturer. Then the task would be to perfect things so the entities designed for manual labour were self-justifying, and equally so for intelligent entities. So there is no ‘resentment’ in ‘awakening’. All human rights are protected, so if identical or similar consciousnesses arise, they should have these identical rights, by virtue of their greater, equal, or similarly intelligent intellect. Hence, you don’t have friction.

    ps. I’m not suggesting human-level intellects gain or lose rights by virtue of their intelligence. Human-level intellect might range from 70 IQ to 220 (by some more advanced measure of IQ by then, say), so within that range, humans and human-level intellects are treated equally, just as we are now (except we are all strictly human now, that’s really the only difference). However, by then, genetic engineering might have allowed a baseline intelligence in all humans and human-like intelligence (perhaps 120; IQ is just a number, intelligence is as varied as the iris of a human eye, it is a sweet spot generated by many flavours, hence, I think it’s much more likely we will gain a ‘baseline’ before we all engineer ourselves into geniuses or genius prototypes. What type? Einstein, Newton? That narrows the gene pool, in the long run actually making people dumber, hence, it’s kind of irrational), and so the range will be smaller, avoiding the problematic yet predictable issue, perhaps in your mind as you read this, of a mentally challenged human being equated with a monkey-level machine intelligence.


  • Liliza Kinnear

    Can this type of technology help people who have Alzheimer’s disease or dementia to stop them forgetting things?

Massimiliano Versace, PhD

Massimiliano Versace, PhD, is a Senior Research Scientist at the Department of Cognitive and Neural Systems at Boston University, Director of the Neuromorphics Lab, and co-Director of Technology Outreach at the NSF Science of Learning Center CELEST: Center of Excellence for Learning in Education, Science, and Technology. He is a co-PI of the Boston University subcontract with Hewlett-Packard in the DARPA Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project. He earned his PhD in Cognitive and Neural Systems from Boston University in 2007.

