Supercomputer Simulates 1% of the Brain – What’s Next?


Neural networks are used in neuroscience to build models that may explain certain cognitive phenomena. For example, many researchers have built models that produce fairly accurate simulations of child language acquisition. These networks can, essentially, learn new words and meanings, and their learning trajectory follows that of a typical child.

Neural networks have also been used to study the hemispheric lateralization of letter recognition, the label-feedback hypothesis, and spreading-activation conceptual networks. (Neural nets are also used as machine-learning algorithms in other fields, but I will leave that discussion for another time.)

One point of contention about neural networks is that we never really know whether they accurately represent the brain. Their design is heavily influenced by neuroanatomy, with nodes representing neurons and weights representing neural connections, but it is impossible to accurately model all of the billions of neurons and trillions of connections in the brain.
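To make the node-and-weight idea concrete, here is a minimal sketch (my own illustration, not any specific model from the literature): a couple of input "neurons" feed an output "neuron" through weighted connections, with a sigmoid squashing the summed input.

```python
import math

def sigmoid(x):
    # Squashes any summed input into an activation between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def activate(inputs, weights, bias):
    # Each connection multiplies an input node's activity by its weight;
    # the output node sums these contributions (plus a bias) and squashes
    # the result. The weights are the "neural connections" of the model.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Weights chosen arbitrarily for illustration.
output = activate([1.0, 0.0], [0.5, -0.3], 0.1)
print(output)  # activity of the output node, somewhere between 0 and 1
```

A real model of, say, language acquisition stacks many layers of these units and learns the weights from data, but the basic neuron-and-synapse analogy is exactly this.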

So how do we know whether what we are modeling is a good representation of the brain? The answer is that we don’t. Through testing, though, we can compare the results of a neural network with the results of human learning, and if they match up, it is generally accepted that the network accurately represents the cognitive phenomenon it was built to study.

RIKEN, a Japanese research institute, is undertaking a project that uses the K computer (the fourth most powerful supercomputer in the world) to simulate neural activity on a scale that has never been attempted before. The team modeled 1.73 billion nerve cells and 10.4 trillion connections. That is a fantastically huge number, though it falls far short of the 86 billion neurons recently estimated for the human brain. One of the project’s collaborators reports that they modeled about 1% of the brain. Even so, that is a huge accomplishment.

So what did this simulated brain compute? As far as I can tell, pretty much nothing. After 40 minutes of using 82,944 processor cores and about a petabyte of memory, the K computer had simulated approximately one second of brain activity. That is 40 minutes of time on one of the world’s most powerful supercomputers for a single second of brain activity. Puts the complexity of the brain in perspective, does it not?
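For a back-of-the-envelope sense of that gap (my arithmetic, using only the figures above):

```python
# Rough arithmetic from the reported figures: 40 minutes of machine
# time for one second of simulated brain activity.
machine_seconds = 40 * 60      # 40 minutes of K computer time
simulated_seconds = 1          # one second of brain activity
slowdown = machine_seconds / simulated_seconds
print(slowdown)  # 2400.0 — the simulation runs 2,400x slower than real time
```

And that is for roughly 1% of the brain, on one of the fastest machines on Earth.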

Even though this run was designed as a test of the programmers and hardware at RIKEN, it brings up some really interesting questions about neural networks, what they can do, and how rapidly we are improving our ability to simulate the brain. According to some estimates, we will be able to simulate the entire brain — down to individual neurons and synapses — within the next decade or so. To do this, we will need an exascale computer (the scale of which is completely beyond my comprehension).
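Just to put exascale in context, here is a hypothetical scaling estimate. It assumes the simulation speeds up linearly with raw compute, which is almost certainly optimistic, and it takes the K computer’s benchmark performance to be roughly 10.5 petaflops:

```python
# Hypothetical estimate: how much would an exascale machine close the gap?
# Assumes perfect linear scaling with raw compute (a big assumption).
k_petaflops = 10.5          # K computer's approximate benchmark performance
exa_petaflops = 1000.0      # one exaflop = 1,000 petaflops
speedup = exa_petaflops / k_petaflops

slowdown_on_k = 2400        # from the 40-minutes-per-second figure above
slowdown_on_exa = slowdown_on_k / speedup
print(slowdown_on_exa)      # ~25: still about 25x slower than real time
```

In other words, even under generous assumptions, an exascale machine would still need around 25 seconds per simulated second for that same 1% of the brain — which is why the "whole brain within a decade" estimates deserve some skepticism.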

Personally, I am not hugely hopeful about simulating the entire brain anytime in the foreseeable future. Even if we have the hardware capability, we still have to have the neuroanatomical knowledge, software power, and programming ability to make it all work together. This is no small task, even with the impressive self-organizing powers of neural networks. But technology is advancing at an unbelievable rate, so who knows? Maybe we will see a computerized brain in the next 20 years. What do you think? What comes next for neural network computation? Is there any ceiling for what it can accomplish?

The world will be watching this technology closely in the coming years. You can look forward to some really exciting and interesting developments!


Sparkes, M. (2014, January 13). Supercomputer models one second of brain activity. The Telegraph.


  • Daniel & others,

    I am now participating as an unpaid collaborator in the Human Brain Map Project. I have access to some of their data for my analysis & input. This is a very ambitious investigation to map a prototypical “healthy, normal” brain, and the 1st step in creating a baseline-reference brain. Subjects participating in this project range in age from 18 to 35.

    Half of our human genome is devoted to the form & function of the brain. Our genome contains some 3 B bits of chemical info & 4 M non-DNA switches. Of all body systems, the brain utilizes the most energy.

    No doubt, computers process huge amounts of info with unprecedented speed & accuracy, way beyond the speed of the brain!

    Researchers indicate that our brains comprise 1,000 trillion connections among our neurons. Further, the brain processes more info than all the phone calls, e-mails, & texts made by us across the globe. The current estimate is that the above-mentioned modes amount to only 1/100 of 1% compared to our brain signals!

    And in FIFA 14 soccer, I win 96% of my more than 400 pro matches on my XBox 360!


    • prem

      For most people, the brain is the most under-nourished part of the body. Although it makes up just 2% of your body weight, it uses 20% of your energy! Blue-green algae metabolize molecular nitrogen directly from the air, allowing the biosynthesis of “low molecular weight peptides” that are precursors of the neurotransmitters used by the brain to influence many metabolic functions. Neurotransmitters are the chemical links that let neurons communicate. The brain’s ability to manufacture neurotransmitters is controlled by the amount of amino acids in the bloodstream. Some of the amino acids in blue-green algae have actually been found to cross the blood-brain barrier, where they are used to build neurotransmitters and influence other metabolic functions.

    • Thanks for commenting—that’s really cool that you’re contributing to the project! Please do keep us informed of any big breakthroughs or exciting developments. I’ll be watching the project with a lot of interest.

      I wasn’t aware that half of the genome is dedicated to the brain; that’s really amazing. It makes sense, though, if there are 1,000 trillion connections in the brain. That number totally blows my mind. I can’t even conceptualize it!

Daniel Albright, MA, PhD (c)

Daniel Albright, MA, is a PhD student at the University of Reading, studying the lateralization of linguistically mediated event perception. He received his masters in linguistics from the University of Colorado-Boulder. Get in touch with him at or on Twitter at @dann_albright.
