Supercomputer Simulates 1% of the Brain – What’s Next?
Neural networks are used in neuroscience to create models that could potentially explain some cognitive phenomena. For example, many researchers have built models of child language acquisition: networks that can, in effect, learn new words and their meanings, and whose learning trajectory follows that of a typical child.
Neural networks have also been used to study the hemispheric lateralisation of letter recognition, the label-feedback hypothesis, and spreading-activation conceptual networks. (Neural nets are also used as machine-learning algorithms in other fields, but I will leave that discussion for another time.)
One point of contention about neural networks is that we never really know whether they accurately represent the brain. Their design is heavily influenced by neuroanatomy: nodes represent neurons, and weights represent neural connections. But it is impossible to accurately model all of the billions of neurons and trillions of connections in the brain.
So how do we know whether what we are modeling is a good representation of the brain? The answer is that we don't. Through testing, though, we can compare the results of a neural network with the results of human learning, and if they match up, it is generally accepted that the network accurately represents the cognitive phenomenon it was built to study.
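To make "nodes and weights" concrete, here is a minimal, purely illustrative sketch: a single node whose connection weights are adjusted by a simple delta rule. This is my own toy example, not RIKEN's model or any published cognitive model, but it shows the core idea that weights standing in for neural connections are strengthened or weakened by experience.

```python
def train(examples, lr=0.1, epochs=100):
    """Train one 'node' on (input_vector, target) pairs with a delta rule."""
    n = len(examples[0][0])
    weights = [0.0] * n
    for _ in range(epochs):
        for x, target in examples:
            # The node's output is a weighted sum of its inputs.
            output = sum(w * xi for w, xi in zip(weights, x))
            error = target - output
            # Adjust each "connection" in proportion to its
            # contribution to the error.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    return weights

# Toy task: only the first input feature predicts the target.
data = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)]
w = train(data)
```

After training, the weight on the predictive input ends up near 1 and the other near 0: the network has "learned" the association from examples alone, with no rule programmed in. Cognitive models work on the same principle, just at vastly larger scale.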
A research group at RIKEN, the Japanese research institute, is undertaking a project using the K computer (the fourth most powerful supercomputer in the world) to simulate neural activity on a scale never attempted before. They modeled 1.73 billion nerve cells and 10.4 trillion connections. That is a fantastically huge number, though it falls far short of the 86 billion neurons recently posited for the human brain. One of the collaborators on the project reports that they modeled about 1% of the brain. Even so, that is a huge accomplishment.
So what did this simulated brain compute? As far as I can tell, pretty much nothing. After 40 minutes of using 82,944 processor cores and about a petabyte of memory, the K computer had simulated approximately one second of brain activity. That is 40 minutes of time on one of the world’s most powerful supercomputers for a single second of brain activity. Puts the complexity of the brain in perspective, does it not?
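The arithmetic behind that comparison is simple. A quick back-of-the-envelope sketch, using only the figures reported above:

```python
# Back-of-the-envelope figures from the K computer run described above.
simulated_seconds = 1
wall_clock_seconds = 40 * 60  # 40 minutes of supercomputer time

slowdown = wall_clock_seconds / simulated_seconds
print(slowdown)  # 2400.0: about 2,400x slower than real time

# The ratio is time-scale independent, so one simulated day of activity
# for this slice of the brain would itself take about 2,400 days:
compute_days_per_simulated_day = slowdown
print(compute_days_per_simulated_day / 365)  # roughly 6.6 years
```

And that is for roughly 1% of the brain's neurons, which is exactly why the full-brain estimates below call for exascale hardware.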
Though this run was designed as a test of RIKEN's programmers and hardware, it raises some really interesting questions about neural networks, what they can do, and how rapidly we are improving our ability to simulate the brain. According to some estimates, we will be able to simulate the entire brain — down to individual neurons and synapses — within the next decade or so. To do this, we will need an exascale computer (the scale of which is completely beyond my comprehension).
Personally, I am not hugely hopeful about simulating the entire brain anytime in the foreseeable future. Even if we have the hardware capability, we would still need the neuroanatomical knowledge, software, and programming ability to make it all work together. This is no small task, even with the impressive self-organizing powers of neural networks. But technology is advancing at an unbelievable rate, so who knows? Maybe we will see a computerized brain in the next 20 years. What do you think? What comes next for neural network computation? Is there any ceiling for what it can accomplish?
The world will be watching this technology closely in the coming years. You can look forward to some really exciting and interesting developments!
Sparkes, M. (2014, January 13). Supercomputer models one second of brain activity. The Telegraph.