Brain Blogger » Neuroscience & Neurology Health and Science Blog Covering Brain Topics

Are We Superhuman? Part 1 – Feeling the Future
Wed, 26 Nov 2014 12:00:57 +0000

A surprising number of researchers suspect that your brain and body may be giving you a potentially game-changing preview of your conscious awareness of future events. Your nervous system may in fact be capable of presentiment, producing unconscious changes in your physiology in anticipation of an impending event up to a whopping 10 seconds before it occurs.

Putting any instinctive cries of crackpottery aside, such an ability would be advantageous in evolutionary terms: anticipating unpredictable life-threatening events helps an organism survive and pass on its genes. Yet it isn’t much of a shocker that peer-reviewed evidence for our potential ability to sense the future, known colloquially as presentiment and scientifically as predictive anticipatory activity (PAA), is fueling an intense battle between skeptics and proponents of PAA theory.

The term PAA describes a phenomenon that is predictive of randomly selected future events, anticipates those events more often than chance would allow, and is grounded in physiological activity of the autonomic and central nervous systems. It can be thought of as an unconscious physiological preview of our conscious awareness of a future emotional or arousing event. It should not be confused with precognition, which refers to conscious premonitions of future events rather than unconscious reactions to them.

One metaphor that aptly describes PAA is that of watching a river flow past an object, such as a tree or rock. Imagine the river is your experience of the flow of time and the tree is an important event. The largest disturbance in the river is downstream of the tree, where the flow of water is perturbed for many meters. This represents our conscious reaction to an important event, which persists for some time after the event has occurred.

Now, if you look more closely at the flow of water near the tree, you will find that, due to back pressure, there is also a small disturbance upstream, quite close to the tree. This upstream disturbance represents PAA: we physiologically anticipate the event through an unconscious reaction to the disturbance caused by the downstream event.

The most common way to test for PAA thus far has been to show a series of randomized stimuli, some of which should provoke a strong physiological reaction (e.g. emotional vs. neutral images). While participants view the series of randomized stimuli, researchers continuously record physiological measurements, such as heart rate, skin conductance and fMRI BOLD or EEG signals. If PAA were indeed a real phenomenon, physiological responses to the emotional images (i.e., the trees in the river) should be detectable before participants have even viewed them on the screen.
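As a toy illustration of this logic (not any research group’s actual analysis pipeline; the skin-conductance values below are simulated, with no built-in effect), the pre-stimulus comparison can be sketched as a permutation test:

```python
import random
import statistics

def permutation_test(emotional, neutral, n_perm=5000, seed=0):
    """Two-sample permutation test on the difference of means.

    Returns (observed difference, two-sided p-value), where the
    p-value is the fraction of random label shufflings producing
    an absolute difference at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = statistics.mean(emotional) - statistics.mean(neutral)
    pooled = list(emotional) + list(neutral)
    n_emo = len(emotional)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_emo]) - statistics.mean(pooled[n_emo:])
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

# Simulated pre-stimulus skin-conductance baselines (arbitrary units),
# drawn here from the SAME distribution, i.e. no presentiment effect.
sim = random.Random(42)
emotional_trials = [sim.gauss(0.0, 1.0) for _ in range(40)]
neutral_trials = [sim.gauss(0.0, 1.0) for _ in range(40)]

diff, p = permutation_test(emotional_trials, neutral_trials)
print(f"pre-stimulus mean difference = {diff:.3f}, p = {p:.3f}")
```

With no real effect in the simulated data, the p-value will usually be unremarkable; a genuine PAA effect would have to show up as a consistent pre-stimulus difference surviving such tests across many participants and studies.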

More than 40 such experiments have been published over the past 36 years, prompting a meta-analysis that was published in the well-respected journal Frontiers in Psychology. Of 49 PAA studies, 26 studies from 7 different research groups met the criteria for inclusion in the analysis. Even when using statistically conservative methods, the analysis revealed a small but statistically reliable effect size in support of PAA.
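For readers curious how dozens of small studies can combine into one overall result, here is a minimal sketch of the simplest such combination, Stouffer’s method (the meta-analysis itself used more conservative techniques, and the per-study z-scores below are made up):

```python
import math
from statistics import NormalDist

def stouffer_z(z_scores, weights=None):
    """Combine per-study z-scores into a single overall z-score
    (Stouffer's method). If weights are given (e.g. square roots
    of sample sizes), the weighted variant is used.
    """
    if weights is None:
        weights = [1.0] * len(z_scores)
    num = sum(w * z for w, z in zip(weights, z_scores))
    den = math.sqrt(sum(w * w for w in weights))
    return num / den

# Hypothetical per-study z-scores: many small positive effects,
# none individually impressive.
study_z = [0.8, 1.1, 0.3, 1.5, 0.9, 0.6, 1.2]
combined = stouffer_z(study_z)
p_one_sided = 1.0 - NormalDist().cdf(combined)  # one-sided p-value
print(f"combined z = {combined:.2f}, one-sided p = {p_one_sided:.4f}")
```

This illustrates the general point that many small same-direction effects can aggregate into a statistically reliable one, which is precisely why critics focus on methodology and publication bias rather than on the arithmetic.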

Other anticipatory physiological phenomena are generally well accepted in neuroscience and psychophysiology. Take the anticipation of intentional motor activity, for example: the brain prepares a movement at least 500 milliseconds, and by some measures as much as 10 seconds, before our first conscious thought of moving. Arguably, this is what allows us to time our movements to real-time events, such as catching a ball, and to coordinate body movement.

The difference between intentional motor activity and PAA, and the source of all the fuss, is that PAA is not about unconscious processing reaching a decision earlier than conscious processing. It is about your body producing an unconscious micro-reaction to a real-world, consciously perceived event seemingly before the event has even occurred.

Even without intense objective reasoning, this brings attention to the elephant in the room: an effect that precedes its cause violates causality and the direction of time implied by the second law of thermodynamics, giving skeptics a defensible reason to dismiss the theory right off the bat. Couple this with the need, acknowledged by proponents and opponents of PAA theory alike, to improve and standardize the methodologies used, and you have a relatively strong case against PAA’s existence.

One speculative mechanism that could account for PAA lies in an epiphenomenon associated with quantum processing in biological systems, in which observations in the future influence observations in the past. The remarkable existence of quantum phenomena in warm, wet and noisy biological systems, together with increasing support for retrocausal phenomena in physics, leads one to consider that the human nervous system, like other biological systems, could indeed exploit such quantum effects.

Clearly, skepticism and close scrutiny are vital in reaching a scientific consensus. Thankfully, serious investigators of PAA, like Dr. Mossbridge and colleagues, the authors of the meta-analysis paper, acknowledge the methodological and theoretical challenges that must be overcome in determining the validity of PAA theories.

Similarly, in the critical analysis paper published this year in Frontiers in Human Neuroscience, as well as in the 2012 meta-analysis, Dr. Mossbridge et al. make no extraordinary claims regarding PAA’s mechanism of action, stating that “The cause of this anticipatory activity, which undoubtedly lies within the realm of natural physical processes (as opposed to supernatural or paranormal ones), remains to be determined.”

Astonishingly, some skeptics appear almost as extreme as the wackiest psi-supporters in publications contesting the meta-analysis results. One paper, also published in Frontiers in Human Neuroscience and entitled We Should Have Seen This Coming, made some very useful criticisms, many of which were shared by the meta-analysis authors and will undoubtedly inform future PAA research. While as objective observers we should welcome skepticism and close scrutiny, the opposition paper bordered on hostile, implying that merely entertaining such theories is non-skeptical, scientifically damning thinking, and that the researchers are not actively striving to prove their own hypotheses incorrect.

Thankfully, Mossbridge and others seem brave enough to entertain the hypotheses that the current evidence favors, and will likely conduct further, more rigorous investigations into PAA in the near future. Equally, we should be thankful for the zealous critical assessment of PAA research that will undoubtedly follow. Until research bridges the gap between skeptics and proponents, you be the judge.


Franklin MS, Baumgart SL, & Schooler JW (2014). Future directions in precognition research: more research can bridge the gap between skeptics and proponents. Frontiers in psychology, 5 PMID: 25202289

Mossbridge J, Tressoldi P, & Utts J (2012). Predictive physiological anticipation preceding seemingly unpredictable stimuli: a meta-analysis. Frontiers in psychology, 3 PMID: 23109927

Mossbridge JA, Tressoldi P, Utts J, Ives JA, Radin D, & Jonas WB (2014). Predicting the unpredictable: critical analysis and practical implications of predictive anticipatory activity. Frontiers in human neuroscience, 8 PMID: 24723870

Schwarzkopf DS (2014). We should have seen this coming. Frontiers in human neuroscience, 8 PMID: 24904372

Image via lassedesignen/Shutterstock.

The Mystery of Left-Handedness
Tue, 25 Nov 2014 12:00:36 +0000

Until recently, left-handedness was a matter of great prejudice, and in many cultures it was common to force left-handed children to write with their right hand. Throughout the world, the prevalence of left-handedness is highly variable, ranging from approximately 5% to 25%, and – for unknown reasons – it is more common in men than in women. So what defines our handedness? And why is being left-handed less common?

This over-representation of right-handers is regarded as an indication of a genetic component defining handedness. Twin studies have shown a higher handedness concordance rate in identical twins than in fraternal (non-identical) twins. Moreover, it has been estimated that the probability of a child being left-handed is 8% when both parents are right-handed, 22% when only one parent is left-handed, and 36% when both parents are left-handed. Interestingly, when only one parent is left-handed, there is a higher prevalence of left-handedness in children with a left-handed mother than in children with a left-handed father, indicating a possible maternal transmission.

These facts have stimulated further research on the genetics of left-handedness. Due to the complexity of this behavioral trait, it is most likely associated not with a single gene but with multiple genetic and environmental factors. In fact, genome-wide association studies have failed to find a single gene significantly linked to left-handedness. A great difficulty in such studies is that handedness, being a complex behavior, is not directly controlled by genes; genes may instead have an indirect effect, for example by influencing brain development or function.

The human brain is an asymmetrical organ with different functional specializations of the left and right hemispheres. The brain is organized such that the left hemisphere controls the motor functions of the right side of the body and the right hemisphere controls those of the left side. Many brain functions are lateralized, i.e. they are processed in one of the hemispheres, which allows for a “division of labor” between the two hemispheres. This is thought to be an important evolutionary adaptation that has boosted the brain’s efficiency. Handedness, like motor speech, is an example of a lateralized cerebral function, being dominantly processed in the left hemisphere. For most people, this results in right-handedness.

Interestingly, there are indications that left-handedness may be associated with a smaller lateralization of cognitive processing: 97% of right-handers have their motor speech area located exclusively in the left hemisphere. On the other hand (literally), motor speech processing is located exclusively in the left hemisphere only in 60% of left-handers, while 30% have bihemispheric processing and 10% have right hemisphere processing.

Studies have shown that the corpus callosum, the largest commissure connecting the left and right hemispheres, tends to be larger in left-handers. This may be a sign of greater connectivity between the hemispheres and may be associated with certain cognitive skills. In fact, greater interhemispheric connectivity could underlie the reported observations that left-handers can have better mathematical skills or higher creativity, for example.

Although handedness seems to be a matter of natural variation, it is intriguing why left-handedness is less common. Since neither left- nor right-handedness confers any obvious evolutionary advantage, a similar prevalence in the population would be expected. Also, if one were less advantageous, it would be expected to eventually disappear. But that is not the case.

Therefore, some researchers argue that there must be some disadvantage to being left-handed, which has led to a search for associations between this trait and other features or even diseases. Consequently, left-handedness has been linked to all sorts of disorders, although not always accurately.

But there is growing evidence from neuroimaging studies that the smaller cerebral lateralization associated with left-handedness can also be associated with neurodevelopmental disorders such as autism, specific language impairments, epilepsy, or schizophrenia, for example. However, it is hard to determine whether weak cerebral laterality causes the disorder or vice versa. Also, it is possible that genetic influences that determine weak laterality may also induce neurodevelopmental disorders without a cause-effect relation. The mystery persists.


Brandler WM, & Paracchini S (2014). The genetic relationship between handedness and neurodevelopmental disorders. Trends in molecular medicine, 20 (2), 83-90 PMID: 24275328

Forrester GS, Quaresmini C, Leavens DA, Mareschal D, & Thomas MS (2013). Human handedness: an inherited evolutionary trait. Behavioural brain research, 237, 200-6 PMID: 23022751

Gutwinski S, Löscher A, Mahler L, Kalbitzer J, Heinz A, & Bermpohl F (2011). Understanding left-handedness. Deutsches Arzteblatt international, 108 (50), 849-53 PMID: 22259638

Ocklenburg S, Beste C, & Güntürkün O (2013). Handedness: a neurogenetic shift of perspective. Neuroscience and biobehavioral reviews, 37 (10 Pt 2), 2788-93 PMID: 24091023

Uomini NT. (2009). The prehistory of handedness: archaeological data and comparative ethology. J Hum Evol. 57(4):411-9. PMID: 19758680

Image via Schankz / Shutterstock.

Improve Cognition With A Trip Down Memory Lane
Sat, 22 Nov 2014 12:00:34 +0000

The human brain can concentrate on externally focused tasks, such as answering a question or solving a puzzle, or internally focused tasks, such as daydreaming. Until recently, these activities were believed to be mutually exclusive. That is, activating one suppressed the other. But now, evidence suggests that engaging the internally focused brain network actually improves performance of the externally focused network.

A recent study, published in the Journal of Neuroscience, reports that the default network – the one responsible for internal focus such as mind-wandering and reminiscing – supports task performance of executive control regions of the brain.

The researchers conducted functional neuroimaging of 36 adults who were shown a series of pictures of famous and non-famous faces. The participants were asked to identify and match faces that appeared earlier in the sequence. This activity engaged the default network, since it involved reminiscing, but it also involved a goal-oriented task. The authors concluded that better performance on the cognitive task was associated with increased activation of the default network.

This conclusion applies only to tasks that support each other, such as recognizing faces about which a participant was reminiscing. It remains unclear whether internally directed thought that is contextually irrelevant to the task at hand would improve externally focused cognition.

Neuroscientists have long believed that the dorsal attention network (the network that directs externally focused cognition) and the default network are competitive. The dorsal attention network controls task-based cognitive performance, while the default network, which controls internally focused thought such as spontaneous thought, stimulus-independent thought, mind-wandering and autobiographical planning, is thought to be active when the brain is at rest. Scientists hypothesized that the default network needed to be suppressed while the attention network was active, in order to prevent interference.

Now, neuroscientists are realizing that the networks may not be entirely independent or competitive. They likely work together more than previously believed, and their respective performances are modulated by other brain networks.

The human mind likes to wander, and undirected thought forms a large part of our mental experience. Likely, our brain integrates external information and cognition with internal focus to determine personal meaning, such as knowledge about past experiences, motivations, future plans, and social context. According to the authors of the recent study, the default network and the attention network (and, probably, other networks) continuously interact to reconcile external goals with internal meaning.


Andrews-Hanna JR, Smallwood J, & Spreng RN (2014). The default network and self-generated thought: component processes, dynamic control, and clinical relevance. Annals of the New York Academy of Sciences, 1316, 29-52 PMID: 24502540

Christoff K (2012). Undirected thought: neural determinants and correlates. Brain research, 1428, 51-9 PMID: 22071565

Mason MF, Norton MI, Van Horn JD, Wegner DM, Grafton ST, & Macrae CN (2007). Wandering minds: the default network and stimulus-independent thought. Science (New York, N.Y.), 315 (5810), 393-5 PMID: 17234951

Spreng RN, DuPre E, Selarka D, Garcia J, Gojkovic S, Mildner J, Luh WM, & Turner GR (2014). Goal-congruent default network activity facilitates cognitive control. The Journal of neuroscience : the official journal of the Society for Neuroscience, 34 (42), 14108-14 PMID: 25319706

Spreng RN, Sepulcre J, Turner GR, Stevens WD, & Schacter DL (2013). Intrinsic architecture underlying the relations among the default, dorsal attention, and frontoparietal control networks of the human brain. Journal of cognitive neuroscience, 25 (1), 74-86 PMID: 22905821

Spreng RN, Stevens WD, Chamberlain JP, Gilmore AW, & Schacter DL (2010). Default network activity, coupled with the frontoparietal control network, supports goal-directed cognition. NeuroImage, 53 (1), 303-17 PMID: 20600998

Image via Andresr / Shutterstock.

An Equation for Happiness?
Fri, 21 Nov 2014 12:00:49 +0000

It’s no secret that the level of personal happiness isn’t directly linked to the material things in life. This is well illustrated by the fact that this year, the crown of the happiest nation in the world went to Panama, according to the Gallup and Healthways Global report. In comparison, the UK is ranked 76th on the list. So what really makes people happy?

It’s hard to describe happiness, let alone to measure it. We all see it differently. Psychologists have multiple theories in this regard. Neuroscientists point to multiple brain mechanisms and the levels of different neurotransmitters. Clinicians have studied multiple environmental and medical factors leading to various mood disorders.

The picture is complex, and putting various influencing parameters into one equation with predictive value would seem an impossible task. But this is exactly what researchers from University College London have attempted. In their article published in the prestigious Proceedings of the National Academy of Sciences, they suggest an equation that rather accurately calculates the level of moment-to-moment happiness.

The equation takes into account two major factors: expectation and reward. In their experiments, the researchers asked 26 volunteers to participate in decision-making tasks that could lead to real monetary gains or losses. Participants’ brain activity was monitored using functional MRI during the course of the experiment. At regular points during the tasks, the researchers asked participants to report how happy they currently felt with how things were going.

It turned out that happiness wasn’t linked to the amount of wealth accumulated. Rather, happiness was experienced when things were going better than expected. On top of this, recent rewards substantially influenced moment-to-moment happiness.

The data obtained in this experiment allowed the researchers to develop an equation for predicting the relative level of happiness. The idea was put to the test in a further experiment involving a much larger group of people (over 18,000), using a specially designed smartphone game called “What makes me happy?”.

The participants faced the same kind of decision-making tasks, but this time they were receiving only points, not real money. Nonetheless, the outcome was the same, and the previously developed equation accurately predicted the happiness level of participants.

The scientists consider the game a model of real-life situations in which we must make important decisions, for instance when we start a new job or project, get married, or make serious changes to our personal life. The outcome of such decisions is often far from clear, but we do expect the changes to be rewarding, at least to a certain extent. A perfect match between actual outcome and prior expectations leaves us satisfied. When the outcome is even greater, we feel happy.

The actual size of the reward is secondary. For instance, a person could feel quite happy after winning $100 in the lottery, since the chances of such a win are very slim and the objective expectation of winning anything at all is very low. On the other hand, an investor who expected to get $1 million from trading his shares, but received only half of this money is likely to be unhappy, even though the actual amount of money he received is very substantial.
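That lottery-versus-investor intuition can be captured in a minimal model inspired by the structure of the published equation, which weights expectations and reward prediction errors with an exponential decay over recent events. The weights and decay below are illustrative values, not the fitted parameters from the paper:

```python
def momentary_happiness(events, w0=0.0, w_ev=0.3, w_rpe=0.7, gamma=0.6):
    """Toy model of moment-to-moment happiness: a baseline plus
    exponentially discounted sums of expected values (EV) and reward
    prediction errors (RPE = outcome minus expectation), loosely
    following the structure of the Rutledge et al. (2014) equation.
    The weights and decay here are illustrative, not fitted values.

    events: list of (expected_value, actual_outcome) pairs, oldest first.
    """
    t = len(events)
    happiness = w0
    for j, (expected, outcome) in enumerate(events, start=1):
        decay = gamma ** (t - j)      # recent events count more
        rpe = outcome - expected      # how much better than expected
        happiness += decay * (w_ev * expected + w_rpe * rpe)
    return happiness

# A small lottery win that beat low expectations...
lottery = momentary_happiness([(5.0, 100.0)])
# ...versus a huge payout that fell far short of expectations.
investor = momentary_happiness([(1_000_000.0, 500_000.0)])

print(lottery > 0 > investor)  # True: the smaller reward feels better
```

The point of the structure is that the sign of the prediction error, not the absolute size of the reward, dominates the result, and the decay term makes recent surprises matter more than old ones.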

What determines happiness on the molecular level remains rather uncertain.

MRI studies show that the neural signals in the part of the brain called the striatum can help in predicting moment-to-moment happiness. The striatum coordinates motivation with body movement. It’s involved in complex social interactions and may inhibit certain types of behavior. It’s also involved in executive functions such as working memory.

The striatum has been shown to be activated by stimuli associated with rewards. In a classic experiment performed back in the 1950s, neuron-stimulating implants placed in reward-related regions of the rat brain, such as the septal area, made the animals press a bar delivering the stimulus for hours at a time.

Recent data suggest that the reward process is linked with dopamine neurotransmission. The striatum has many connections with dopaminergic neurons, and thus it appears that changes in the level of dopamine in certain regions of the brain may be involved in determining the degree of happiness. Future research into the function of the striatum may help reveal in more detail the molecular determinants of happy feelings.

In the meantime, the equation developed by British researchers might be of practical importance for clinical purposes, particularly for treating patients with mood disorders. It also points out yet again the importance of managing expectations, both on a personal level and in wider society.

Obviously, the equation doesn’t calculate the level of overall personal happiness with life, the Universe, and everything. But let’s keep in mind that overall happiness is eventually the sum of your everyday experiences.


Voytek, B., & Knight, R. (2010). Prefrontal cortex and basal ganglia contributions to visual working memory Proceedings of the National Academy of Sciences, 107 (42), 18167-18172 DOI: 10.1073/pnas.1007277107

Cachope, R., & Cheer, J. (2014). Local control of striatal dopamine release Frontiers in Behavioral Neuroscience, 8 DOI: 10.3389/fnbeh.2014.00188

Olds J, & Milner P (1954). Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain. Journal of comparative and physiological psychology, 47 (6), 419-27 PMID: 13233369

Volman, S., Lammel, S., Margolis, E., Kim, Y., Richard, J., Roitman, M., & Lobo, M. (2013). New Insights into the Specificity and Plasticity of Reward and Aversion Encoding in the Mesolimbic System Journal of Neuroscience, 33 (45), 17569-17576 DOI: 10.1523/JNEUROSCI.3250-13.2013

Rutledge, R., Skandali, N., Dayan, P., & Dolan, R. (2014). A computational and neural model of momentary subjective well-being Proceedings of the National Academy of Sciences, 111 (33), 12252-12257 DOI: 10.1073/pnas.1407535111

Image via FWStudio / Shutterstock.

The Coordinates Of A Nobel Prize
Wed, 19 Nov 2014 12:00:09 +0000

The neuroscientists John O’Keefe from University College London and May-Britt Moser and Edvard Moser from the Norwegian University of Science and Technology in Trondheim are this year’s Nobel laureates in Physiology or Medicine. Together (although more than 30 years apart) they helped us understand how we know where we are, how we know the way from one place to another, and how we store this information in order to immediately find the way when we repeat a path. They call it our “inner GPS” – the brain’s positioning system – and it is a groundbreaking discovery.

The first step towards the discovery of this system was taken in 1971, when John O’Keefe described its first component. He found that certain neurons in the hippocampus fired whenever a rat was in a certain place in the local environment, with neighboring neurons firing at different locations, such that the entire environment was represented by the activity of these cells throughout the hippocampus.

O’Keefe called these neurons “place cells” and showed that they were able to form an inner map of the environment, drawn by the pattern of place cell activation in the hippocampus. He later showed that the firing rate of these place cells could be affected not only by the rat’s position in the maze, but also by visual or olfactory input, for instance when the rat found something new or noticed something missing from that specific location.

O’Keefe also found that there were other cells that would signal changes in the rat’s position relative to the environment. His studies showed that the combined activity of specific place cells built the memory of that environment in the form of a cognitive map.

An upgrade to the discovery of place cells came in 2005, when May-Britt and Edvard Moser presented another key component of this neuronal positioning system, the “grid cells”.

They discovered a pattern of activity in an area of the brain close to the hippocampus called the entorhinal cortex. Here, certain cells were activated at specific locations, similarly to place cells, but each of these cells was activated by multiple locations. Those locations formed a grid covering the entire environment explored by the animal, indicating that these cells could generate a coordinate system allowing spatial navigation.

Together with other cells of the entorhinal cortex that recognize the direction of the head and the border of the room, they form circuits with the place cells in the hippocampus that give rise to the brain’s positioning system.

Although these studies were all conducted in rats, evidence from brain imaging techniques and from patients who have undergone neurosurgery indicates that place and grid cells also exist in humans. Also, in patients with Alzheimer’s disease, both the hippocampus and the entorhinal cortex are frequently affected at an early stage of the disease, when these patients start losing their way and their ability to recognize the environment.

So, besides the very important knowledge about the brain’s positioning system per se, these findings may also help us understand how patients with Alzheimer’s disease develop spatial memory loss and, eventually, how to overcome it.

As the Nobel Committee for Physiology or Medicine stated, this discovery “represents a paradigm shift in our understanding of how ensembles of specialized cells work together to execute higher cognitive functions. It has opened new avenues for understanding other cognitive processes, such as memory, thinking and planning.”


Colgin LL, Moser EI, & Moser MB (2008). Understanding memory through hippocampal remapping. Trends in neurosciences, 31 (9), 469-77 PMID: 18687478

Hafting T, Fyhn M, Molden S, Moser MB, & Moser EI (2005). Microstructure of a spatial map in the entorhinal cortex. Nature, 436 (7052), 801-6 PMID: 15965463

Moser EI, Kropff E, & Moser MB (2008). Place cells, grid cells, and the brain’s spatial representation system. Annual review of neuroscience, 31, 69-89 PMID: 18284371

Moser EI, & Moser MB (2008). A metric for space. Hippocampus, 18 (12), 1142-56 PMID: 19021254

Moser EI, Roudi Y, Witter MP, Kentros C, Bonhoeffer T, & Moser MB (2014). Grid cells and cortical representation. Nature reviews. Neuroscience, 15 (7), 466-81 PMID: 24917300

O’Keefe J, & Dostrovsky J (1971). The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain research, 34 (1), 171-5 PMID: 5124915

O’Keefe J (1976). Place units in the hippocampus of the freely moving rat. Experimental neurology, 51 (1), 78-109 PMID: 1261644

Sargolini F, Fyhn M, Hafting T, McNaughton BL, Witter MP, Moser MB, & Moser EI (2006). Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science (New York, N.Y.), 312 (5774), 758-62 PMID: 16675704

Image via IrinaK / Shutterstock.

Halloween Special – Why Does the Brain Love a Scary Holiday?
Fri, 31 Oct 2014 11:00:34 +0000

Why is a holiday filled with creepy ghosts, goblins, and haunted houses so much fun? Research in neuroscience may provide some answers.

The Department of Biological and Clinical Psychology recently teamed with the Institute of Diagnostic and Interventional Radiology at the Friedrich Schiller University of Jena, Germany, in an attempt to understand what happens in our brains when we view scary scenes.

While scanning subjects’ brains using functional magnetic resonance imaging (fMRI), the researchers showed people threatening clips from movies such as Aliens, Jaws, The Exorcist, The Shining, and Silence of the Lambs, as well as neutral scenes that do not normally elicit fear responses. The researchers also collected information on each subject’s tendency to seek out scary scenarios, as well as their reactions to the scenes they were shown.

Compared to those who do not like getting spooked, those who seek out eerie circumstances showed less activity in the thalamus when exposed to neutral scenes and more activity in the visual cortex when exposed to scary scenes. The pattern of activity in these brain regions, involved in sensory processing, suggests that the enjoyment of scary situations may be partially explained by the enriched sensory experience that accompanies them.

Interestingly, the amygdala, which contributes to emotional processing, was not differentially activated by scary versus neutral scenes in either group. In fact, the prefrontal cortex, important for executive functions, was the only brain area whose activity correlated with subjects’ reported anxiety during exposure to scary movie scenes.

Hyperactivity in the prefrontal cortex is associated with certain anxiety disorders that involve rumination, like obsessive-compulsive disorder. The worry that is experienced in these disorders may be physiologically and psychologically similar to the feelings invoked by scary movies, which would help explain the heightened prefrontal activity that is experienced during anxiety-provoking movie clips.

This type of neural activity is different from activity seen in disorders that involve intense fear, such as panic disorder or phobias, wherein prefrontal activity is actually diminished. Reduced prefrontal activity disinhibits the amygdala (which receives information from the prefrontal cortex) and leads to fear responses that are not as likely to be observed in a movie theater or a haunted house. Unlike what we typically observe in movie theaters or on Halloween, people exposed to phobia cues show extreme avoidant or escape behaviors, as if they were enduring a real biological threat.

Our brains are so sophisticated that they can usually distinguish situations that appear threatening from those that actually are. Though the scary scenes we see on Halloween may resemble scenes that would be threatening in other contexts, we are able to quiet our anxiety because other parts of our brain tell us that we are not in danger. But apparently the sensation elicited by stimulation that bears some likeness to threat can be lots of fun.

Happy Halloween!


Gilmartin, M., Balderston, N., & Helmstetter, F. (2014). Prefrontal cortical regulation of fear learning. Trends in Neurosciences, 37 (8), 455-464. DOI: 10.1016/j.tins.2014.05.004

Kaufmann, C., Beucke, J., Preuße, F., Endrass, T., Schlagenhauf, F., Heinz, A., Juckel, G., & Kathmann, N. (2013). Medial prefrontal brain activation to anticipated reward and loss in obsessive–compulsive disorder. NeuroImage: Clinical, 2, 212-220. DOI: 10.1016/j.nicl.2013.01.005

Roozendaal, B., McEwen, B., & Chattarji, S. (2009). Stress, memory and the amygdala. Nature Reviews Neuroscience, 10 (6), 423-433. DOI: 10.1038/nrn2651

Sladky, R., Höflich, A., Atanelov, J., Kraus, C., Baldinger, P., Moser, E., Lanzenberger, R., & Windischberger, C. (2012). Increased Neural Habituation in the Amygdala and Orbitofrontal Cortex in Social Anxiety Disorder Revealed by fMRI. PLoS ONE, 7 (11). DOI: 10.1371/journal.pone.0050050

Straube, T., Preissler, S., Lipka, J., Hewig, J., Mentzel, H., & Miltner, W. (2009). Neural representation of anxiety and personality during exposure to anxiety-provoking and neutral scenes from scary movies. Human Brain Mapping. DOI: 10.1002/hbm.20843

Image via g-stockstudio / Shutterstock.

Can Brain Imaging Detect Risk Takers?
Sat, 18 Oct 2014 11:00:54 +0000

Risk-taking seems to come naturally for some people – from those who don’t hesitate to ask for a promotion, to those who don’t flinch before artfully diving off a cliff into the ocean below. Others play it safer. While upbringing may play some role in our propensity for risk-taking, there are plenty of cases where siblings raised in the same environment have different tendencies to take risks.

Several studies have investigated the correlation between brain structure and risk-taking. In response to the statistic that unintentional injuries are the leading cause of death among adolescents, the Center for Brain Health at The University of Texas at Dallas conducted a study that found brain differences in risk-taking teens, who are currently in some of their most important years for brain development.

Individual brains and risk-taking

The leading causes of death for adults are cancer and heart disease; for teens, it’s unintentional injuries. Better overall health partly explains the gap, but the statistic also suggests that teens are likelier to take risks than their older peers. What the Center for Brain Health found was that some brain regions in risk-taking teens are more “amplified” compared to teens who “play it safe.”

“Our brains have an emotional-regulation network that exists to govern emotions and influence decision-making,” said Sam DeWitt, the study’s lead author. “Antisocial or risk-seeking behavior may be associated with an imbalance in this network.”

The study’s details

To pinpoint this regional amplification among risk-takers, the study looked at 36 adolescent participants aged 12-17. Half of them were risk-takers – fairly accustomed to sexual promiscuity, physical violence and/or drug and alcohol use – while the other half were not. All underwent MRI scans to examine their emotional-regulation network, which, as DeWitt says, is the primary controller governing emotions and influencing decision-making.

The MRIs were conducted while the adolescents’ minds were in a “wandering” state, essentially occupied with their own thoughts rather than with any task related to the study itself. As Dr. Sina Aslan explains: “Most fMRI scans used to be done in conjunction with a particular visual task. In the past several years, however, it has been shown that performing an fMRI scan of the brain during a ‘mind-wandering’ state is just as valuable.”

It’s considered practical to evaluate all participants while they’re in a ‘mind-wandering’ state, so as to prevent particular thoughts or triggers from producing outlier data.

The results

The study, which was conducted by Dr. Francesca Filbey, found that risk-taking teens have hyperconnectivity between the amygdala and areas of the prefrontal cortex associated with critical thinking skills and emotion regulation. The amygdala is responsible for emotional reactions. The nucleus accumbens and prefrontal cortex also showed increased activity, with the former often a point of study in addiction research.

Taken together, this suggests a correlation between risk-taking tendencies and the patterns of brain activity seen in people predisposed to addiction.

Self-control systems: a common culprit

Another study, from researchers at The University of Texas at Austin, UCLA, and elsewhere, found a link between risk-taking and the brain’s self-control systems. The researchers used specialized software to examine brain regions before and during both risky and safe choices in a video game called the Balloon Analogue Risk Task (BART), in which participants reveal their risk-taking tendencies by choosing either to take a risk (inflating a balloon further and earning more money) or to play it safe (stopping the inflation and cashing out).

The balloon game is essentially a microcosm of other riskier choices, such as choosing to drink too much before driving home. With each additional drink consumed or additional air put into the balloon, there’s an increased risk of something unpleasant and potentially harmful occurring. The researchers used this concept to help study areas of the brain that are impacted when participants are confronted with a decision that involves risk.
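The incentive structure of the balloon game is easy to make concrete in code. The sketch below is a toy simulation, not the actual BART software: the pop probability and the payout per pump are invented purely for illustration.

```python
import random

def bart_trial(pumps, pop_prob=0.05, cents_per_pump=5, rng=random):
    """Simulate one balloon: pump a fixed number of times, then cash out.

    Returns earnings in cents; a pop on any pump wipes out the trial.
    (pop_prob and cents_per_pump are made-up illustrative values.)
    """
    for _ in range(pumps):
        if rng.random() < pop_prob:
            return 0                      # balloon popped: nothing earned
    return pumps * cents_per_pump         # cashed out safely

def expected_earnings(pumps, pop_prob=0.05, cents_per_pump=5):
    """Analytic expected value: survive every pump, then collect."""
    return (1 - pop_prob) ** pumps * pumps * cents_per_pump

# Each extra pump raises the payoff but also the chance of losing it all,
# so expected earnings rise and then fall as pumping gets riskier:
for pumps in (5, 10, 20, 40):
    print(pumps, round(expected_earnings(pumps), 1))
```

For these made-up numbers the expected payoff peaks around 20 pumps; pumping beyond that mirrors the extra drink before driving home – a larger prize multiplied by a shrinking chance of keeping it.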

The importance of these studies

Identifying the link between risk-taking and one’s brain could open several possibilities that have the potential to improve our lives. “Our findings are crucial in that they help identify potential brain biomarkers that, when taken into context with behavioral differences, may help identify which adolescents are at risk for dangerous and pathological behaviors in the future,” says DeWitt.

Similarly, the Balloon Analogue Risk Task allows risk-taking to be analyzed in a controlled environment with little variability, which suggests that studying the correlation between the brain and risk-taking will become even more useful for scientists as additional research methods like BART emerge.


DeWitt SJ, Aslan S, & Filbey FM (2014). Adolescent risk-taking and resting state functional connectivity. Psychiatry research, 222 (3), 157-64 PMID: 24796655

Helfinstein SM, Schonberg T, Congdon E, Karlsgodt KH, Mumford JA, Sabb FW, Cannon TD, London ED, Bilder RM, & Poldrack RA (2014). Predicting risky choices from brain activity patterns. Proceedings of the National Academy of Sciences of the United States of America, 111 (7), 2470-5 PMID: 24550270

Image via Mandy Godbehear / Shutterstock.

Brain Trickery – Seeing in Slow Motion
Thu, 09 Oct 2014 11:00:22 +0000

The brain is capable of endless thoughts, visions, and perceptions. It’s a complex but perfectly tuned system that operates smoothly most of the time. But sometimes things can get strange. Not necessarily abnormal, but strange nonetheless. Seeing events in slow motion is a rare phenomenon that certainly belongs to this category of rather unusual things.

One patient who has experienced this effect described seeing the droplets of water from the shower head stop moving while he was washing. To him, they were almost frozen in the air, giving him enough time to observe their shape and size. It was like watching a slowed-down movie.

This phenomenon is known as akinetopsia, the loss of motion perception. Patients still see objects but, for some time, cannot perceive their movement. The so-called Zeitraffer phenomenon is similar to akinetopsia and manifests itself as an altered (usually slowed) perception of the velocity of moving objects.

On the surface, the ability to perceive an object – or time itself – moving slower than “normal” could be considered rather advantageous. For instance, in sporting events or in extreme situations when decisions need to be made in a split second, such an ability would most certainly come in handy. The problem is, the phenomenon usually points to an underlying disorder, such as stroke or epilepsy. A stroke may damage the brain’s visual cortex, and this kind of damage may lead to the inability to perceive objects moving at a particular speed. This is why symptoms often include freeze-frame vision or some variation of it.

Recent research suggests that our brain receives visual information as a sequence of discrete snapshots, similar to the frames of a film reel. The clues that the brain gathers visual information this way are all around us. We are usually unable to visually register very fast movements, and this lack of speed can lead to curious phenomena. For instance, the wheels of cars moving on the motorway often appear to be standing still. This happens because our brain fails to capture the wheel’s intermediate positions between successive snapshots. As a result, at certain speeds, the wheel appears to be static.
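The motorway-wheel illusion is ordinary sampling aliasing, and the arithmetic behind it can be sketched in a few lines. This is a toy illustration of discrete snapshots, not a model of the brain; the snapshot rate and spoke count are arbitrary.

```python
def apparent_step_deg(rotation_hz, snapshots_per_sec, spokes=1):
    """Visible per-snapshot change in wheel angle under discrete sampling.

    Between snapshots the wheel advances 360 * rotation_hz / snapshots_per_sec
    degrees; with `spokes` identical spokes the image repeats every
    360/spokes degrees, so only that advance folded into (-period/2, period/2]
    is visible. Near 0 the wheel looks static; negative values look like
    backward rotation.
    """
    period = 360.0 / spokes
    step = (360.0 * rotation_hz / snapshots_per_sec) % period
    if step > period / 2:
        step -= period                    # fold into the visible range
    return step

# Sampling at 60 snapshots per second: a wheel spinning at exactly 60 Hz
# appears frozen, and one at 59 Hz appears to creep slowly backward.
print(apparent_step_deg(60, 60))   # 0.0
print(apparent_step_deg(59, 60))   # -6.0
```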

Damage to the V5 region of the visual cortex can cause problems with the processing of these snapshots, or frames, resulting in the perceived slowing or complete stopping of movement. The frames can also “break down”, with some parts of the visual field showing movement and others freezing. Some patients also describe a loss of synchronization between visual perception and other sensory inputs. For instance, during conversations, they might hear the voices of other people normally but see them as out of sync with their facial movements.

Although akinetopsia and the Zeitraffer phenomenon are rare, it seems that in extreme circumstances many of us might experience something similar. People who have been through life-threatening events such as car accidents or aeroplane crashes often describe a sense of time “standing still”, and an awareness that they could clearly see details they would never usually be able to perceive. This state might be linked to the sudden release of stress hormones that, in turn, trigger a speeding up of the brain’s processing mechanisms.

It also appears that the ability to process visual information faster can be trained. Elite athletes in sports that require fast movements and snappy decision-making – boxers, surfers, and football players – are known for their exceptional reaction speeds. It is speculated that these athletes are able either to gather more visual snapshots in the same period of time, or to analyze them much faster than the average person.

At present, the exciting area of neurobiology dealing with our perception of time and reality is still in its infancy. We have barely scratched the surface of this enormous field of research, but we have already encountered quite a few surprises. One can only wonder what future studies will uncover.


Cooper SA, Joshi AC, Seenan PJ, Hadley DM, Muir KW, Leigh RJ, & Metcalfe RA (2012). Akinetopsia: acute presentation and evidence for persisting defects in motion vision. Journal of neurology, neurosurgery, and psychiatry, 83 (2), 229-30 PMID: 21217160

Ovsiew F (2014). The Zeitraffer phenomenon, akinetopsia, and the visual perception of speed of motion: a case report. Neurocase, 20 (3), 269-72 PMID: 23557277

Sakurai, K., Kurita, T., Takeda, Y., Shiraishi, H., & Kusumi, I. (2013). Akinetopsia as epileptic seizure. Epilepsy & Behavior Case Reports, 1, 74-76. DOI: 10.1016/j.ebcr.2013.04.002

Arstila V (2012). Time Slows Down during Accidents. Frontiers in psychology, 3 PMID: 22754544

Phillips, I. (2013). XII-Perceiving the Passing of Time. Proceedings of the Aristotelian Society (Hardback), 113 (3pt3), 225-252. DOI: 10.1111/j.1467-9264.2013.00353.x

Wittmann, M. (2013). The inner sense of time: how the brain creates a representation of duration. Nature Reviews Neuroscience, 14 (3), 217-223. DOI: 10.1038/nrn3452

Image via Andrey Klepikov / Shutterstock.

Swear Your Pain Away
Mon, 06 Oct 2014 11:00:01 +0000

Who wouldn’t swear after slamming a finger in a door (not to mention giving birth without anesthesia)? I certainly would. A lot. Swearing is a very common instinctive reaction to pain, but it’s not a very common research topic, with most studies focusing on the psychology of swearing.

Although there is very little research on the effect of swearing on pain, there are a couple of studies worth mentioning. There are two theories for why people swear in response to pain. The ‘disinhibition’ theory states that the stress of pain leads to social disinhibition and reduced self-control that allows swearing. The other theory affirms that swearing represents ‘pain catastrophizing’ behavior – the belief that something is worse than it really is – leading to an increased perception of pain.

In line with the second theory, a study was designed to test the hypothesis that swearing could be a maladaptive response to pain, i.e. a behavior that inhibits the ability to adjust to a certain situation, and that, therefore, it should decrease pain tolerance and increase pain perception.

To assess their pain responses, participants were asked to submerge one hand in ice-cold water until it became painful. The time they took to withdraw their hand (the latency) was measured and they were asked to rate their pain intensity. This was performed in two situations: while swearing and while saying neutral words.

The authors found that withdrawal latencies were longer in the swearing condition, and longer in males than in females. They also observed that pain ratings were reduced in the swearing condition, to a greater extent in females. Thus, the opposite of what the authors had hypothesized occurred: swearing led to increased pain tolerance and decreased pain perception. They concluded that instead of being a maladaptive response, swearing actually has a pain-reducing (hypoalgesic) effect, and that this effect is stronger in women.

Interestingly, it was also observed that the pain-reducing effect in females was independent of the tendency to catastrophize. Men, on the other hand, lost the effect of swearing as their tendency to catastrophize increased. The reason for such sex differences is unclear, but it may be that men swear more often than women (statistically), allowing the power of catastrophizing to more easily overcome the effect of swearing.

These findings led to another study evaluating whether the habit of swearing could diminish its hypoalgesic effect. The same procedures as in the previous study were used, but here the participants’ daily swearing frequency was taken into account. It was observed that the more often participants swore in their daily life, the smaller the pain-reducing effect obtained from swearing. This is basically attributable to habituation: people who swear more often have a smaller emotional response to swearing, which weakens its effect on pain.

The reason swearing has such an effect on pain remains to be elucidated. The authors hypothesize that swearing may induce a kind of alarm reaction to threat: people feel more aggressive which sets off the ‘fight or flight’ response that, in turn, has a pain reducing effect.

Remember to choose your words wisely.


Bowers JS, & Pleydell-Pearce CW (2011). Swearing, euphemisms, and linguistic relativity. PloS one, 6 (7) PMID: 21799832

Stephens R, Atkins J, & Kingston A (2009). Swearing as a response to pain. Neuroreport, 20 (12), 1056-60 PMID: 19590391

Stephens R, & Umland C (2011). Swearing as a response to pain-effect of daily swearing frequency. The journal of pain : official journal of the American Pain Society, 12 (12), 1274-81 PMID: 22078790

Stephens R (2013). Swearing – the language of life and death. The Psychologist, 26(9):650-653

Image via Robert Kneschke / Shutterstock.

The Phantom Sound Of Tinnitus
Tue, 30 Sep 2014 11:00:40 +0000

Anyone who has ever been to a loud nightclub has probably experienced that ringing in the ears that comes home with you. The feeling is uncomfortable, but it eventually disappears. Less fortunate are the 10-15% of people in the world who experience this continuously and, most likely, will experience it forever (and I am one of them).

Tinnitus (a Latin word meaning “ringing”) is a hearing disorder characterized by the conscious perception of a sound in the absence of a corresponding external acoustic stimulus. Tinnitus can be transient, but once it has lasted longer than 12 months it is considered chronic. Although in rare cases a treatable source can be identified, in the majority of situations tinnitus is subjective and occurs as an idiopathic condition with unknown mechanisms.

Tinnitus usually manifests as a ringing, hissing or sizzling, although it is possible for more complex sounds to arise, such as voices or music. Unlike auditory hallucinations associated with mental illnesses, voices or music heard as a form of tinnitus are indistinct and suggest no meaning. Tinnitus can be unilateral, bilateral, or be localized centrally within the head, although some patients describe an external point of origin. The sound can be continuous or intermittent, sometimes being a rhythmical or pulsatile sound. Pulsatile tinnitus can be synchronous with the heartbeat, in which case a vascular origin is likely.

The onset of tinnitus can be abrupt, but it is insidious in most cases. The perceived volume can range from just above hearing threshold to high-intensity sounds. Tinnitus becomes highly distressing in the latter case, when it is too loud to ignore. High-intensity tinnitus is commonly accompanied by other symptoms such as frustration, irritability, anxiety, insomnia, inability to concentrate, and decreased sound tolerance (hyperacusis).

The causes for tinnitus are mostly unclear, but there are several risk factors known to be associated with its development. The main risk factor is hearing loss, despite many people with tinnitus having otherwise normal hearing, and many people with hearing loss having no tinnitus. People commonly exposed to loud noises are naturally more likely to develop tinnitus. Other risk factors include smoking, alcohol consumption, history of head injuries, and hypertension. Various common drugs can also trigger tinnitus, as well as emotional factors and stress.

Little is known about the pathological mechanisms. Tinnitus is most likely a disorder involving both peripheral and central pathways in the nervous system. Although it can be triggered by peripheral mechanisms such as cochlear alterations, it usually persists after auditory nerve section, indicating that the brain plays a crucial role in its pathophysiology. Evidence exists for a tinnitus-associated brain network that includes sensory auditory areas and cortical regions involved in such functions as emotions, memory or attention.

Treatments are mostly ineffective. No pharmacological treatment has shown efficacy in long term reduction of tinnitus. Cochlear implants and hearing aids are frequently used for tinnitus associated with hearing loss.

Most commonly, therapeutic approaches are directed towards achieving habituation to the phantom sound. These approaches include counseling, cognitive behavioral therapy, sound therapy, and brain stimulation. Tinnitus retraining therapy, for example, is a protocol that combines counseling and sound therapy, aimed at achieving habituation by psycho-education, allowing the neutralization and strength reduction of the tinnitus signal.

Another approach is cognitive-behavioral therapy, which uses psycho-education, relaxation training, and attention-control techniques, among others, to reduce the tinnitus-related distress by altering maladaptive cognitive, emotional, and behavioral responses. Sound generators can also be used; these devices produce a sound less disturbing than the tinnitus sound, thereby reducing the perception of tinnitus. Environmental sound generators play relaxing sounds such as sea waves, waterfalls, rain, or white noise. Custom sound generators produce a sound with an adjusted frequency and volume that allow the masking of the tinnitus sound. Alternatively, many individuals also resort to complementary and alternative medicine aimed at inducing relaxation and allowing them to cope with their distress.


Adjamian P, Hall DA, Palmer AR, Allan TW, & Langers DR (2014). Neuroanatomical abnormalities in chronic tinnitus in the human brain. Neuroscience and biobehavioral reviews, 45C, 119-133 PMID: 24892904

Baguley D, McFerran D, & Hall D (2013). Tinnitus. Lancet, 382 (9904), 1600-7 PMID: 23827090

Knipper M, Van Dijk P, Nunes I, Rüttiger L, & Zimmermann U (2013). Advances in the neurobiology of hearing disorders: recent developments regarding the basis of tinnitus and hyperacusis. Progress in neurobiology, 111, 17-33 PMID: 24012803

Langguth B, & Elgoyhen AB (2012). Current pharmacological treatments for tinnitus. Expert opinion on pharmacotherapy, 13 (17), 2495-509 PMID: 23121658

Langguth B, Kreuzer PM, Kleinjung T, & De Ridder D (2013). Tinnitus: causes and clinical management. The Lancet. Neurology, 12 (9), 920-30 PMID: 23948178

Image via Dora Zett / Shutterstock.

Thinking Slow About Thinking Fast – Part IV – A New Perspective on the Framing Effect
Sat, 27 Sep 2014 11:00:16 +0000

Our tendency to choose options that appear less valuable than the alternatives (such as sticking with our original choice in the Monty Hall Problem) is often cited as evidence for our irrationality. However, the view that we are irrational also derives from inconsistency in our preferences.

Nobel Prize winner, Daniel Kahneman, and his colleague, Amos Tversky, described such inconsistency with the following example of the Framing Effect:

In response to the news that a disease outbreak is expected to kill 600 people, 2 programs are proposed:

  • Program 1 allows 200 people to be saved
  • Program 2 provides a 33.3% chance that 600 people will be saved and a 66.7% chance that 0 people will be saved

If asked to choose the superior program, which would you choose? Most people say Program 1, which ensures the safety of 1/3 of those in danger.

Imagine that instead of Programs 1 and 2, these programs are offered:

  • Program A results in the death of 400 people
  • Program B provides a 33.3% chance that no one will die and a 66.7% chance that 600 people will die

Which option do you prefer? A or B? People tend to choose Program B.

If we examine the programs, it is clear that Programs 1 and A are equivalent, and Programs 2 and B are equivalent, but people reliably choose different options depending on how the options are framed. That our preferences in these scenarios are inconsistent makes us, by several definitions, irrational.

But which program is actually superior? 1/A or 2/B? The question is perhaps philosophical, as there is no way to determine which choice will save the most lives (or result in the fewest deaths). As economists point out, the options are equivalent from a utility perspective. Just as in Program 1, the expected average outcome of Program 2 is 200 lives saved (33.3% × 600 people). Similarly, 400 deaths is the expected average outcome of both Program A and Program B (66.7% × 600 people). This analysis leads to 2 essential questions:
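The utility argument can be checked in a couple of lines: each program’s expected outcome is just the probability-weighted sum of its possible outcomes.

```python
def expected_outcome(outcomes):
    """Expected number of people affected, given (probability, count) pairs."""
    return sum(p * n for p, n in outcomes)

# The four programs from the disease-outbreak example (600 people at risk):
saved_1 = expected_outcome([(1.0, 200)])              # 200 saved for certain
saved_2 = expected_outcome([(1/3, 600), (2/3, 0)])    # gamble on saving all
dead_a  = expected_outcome([(1.0, 400)])              # 400 die for certain
dead_b  = expected_outcome([(1/3, 0), (2/3, 600)])    # gamble on no deaths

print(saved_1, round(saved_2))   # both 200 lives saved on average
print(dead_a, round(dead_b))     # both 400 deaths on average
```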

(1) If all options are equivalent, why don’t we choose Programs 1/A  as often as we choose Programs 2/B?

(2) If all options are equivalent, does our vulnerability to frames carry any significant implications?

Let’s start with Question 1.

When there is no difference in the value of options, we expect choice to occur randomly. Accordingly, we should expect people to choose Program 1/A half the time and Program 2/B half the time. Many scholars point to psychological factors to explain why our behavior deviates from this prediction. Specifically, they describe us as “risk averse” with respect to gains and “risk seeking” with respect to losses. In this context, “risk” means having an uncertain outcome. In the positive frame above (“lives saved”), psychologists say we are risk averse because we choose the option where the outcome is certain (save 200 people). On the other hand, when the frame is negative (“deaths”), we appear risk seeking because we choose the uncertain outcome.

I think that the pattern of our choices illustrated through the Framing Effect is actually quite practical, considering the computational complexity of risk and our use of rules, or heuristics, to facilitate decision making. Heuristics such as “guarantee gains” and “avoid guaranteed losses” would reliably result in asymmetric preferences that have led researchers to conclude that we feel losses more deeply than we feel gains (more on this when we discuss the Endowment Effect).

In the context of the Framing Effect, when considering whether to save some lives or take a gamble that involves a chance that no one is saved, a “guarantee gains” approach makes Program 1 the obvious choice. When the frame is switched so that the focus is on the loss of lives rather than the saving of lives, an “avoid guaranteed losses” heuristic makes Program B (the risky option) appear superior. Conscious application of such heuristics is even plausible in the Situation Room (“You have to save people”; “You can’t just let people die.”).

Question 2.

Does our vulnerability to frames even matter if outcomes are roughly equivalent? Rather than exposing a weakness in our decision making abilities, I think that the Framing Effect demonstrates our adaptive tendency to save time and energy when expending those resources would be fruitless. If outcomes are roughly equivalent, then allowing frames to focus us on an aspect of choice for the sake of efficient decision making should not be detrimental. Indeed, when the value of outcomes differs significantly, frames tend to have less of an impact on our choices.

Image via Sandro Donda / Shutterstock.

The Phantom Menace
Tue, 16 Sep 2014 11:00:15 +0000

After the amputation of a body part, patients frequently feel that the amputated area is still present. Sensations of the position and movement of the limb, as well as of heat, cold, itching, and even pain, are often described for a limb that is no longer present.

Pain in a limb that has been amputated is known as “phantom limb pain”. According to statistical data, it can occur in up to 80% of all amputees, although to varying extents. The incidence of phantom limb pain seems to increase with age. In congenital amputees, there are occasional reports of phantom limb pain arising later in life, but these are extremely rare. Phantom limb pain is also very rare in children who undergo amputation at a very young age. Older children, on the other hand, show an incidence of phantom limb pain much closer to that of adults.

Whether phantom limb pain diminishes over time after amputation is unclear. Some reports indicate that long-term amputees maintain a similar degree of pain, while others indicate a slight decline over the years. Even so, it is but a slight decrease; this type of pain is most likely to persist for life.

Given its incidence and the degree of disability it causes, the neurobiological mechanisms of phantom limb pain are naturally a matter of great interest. Changes in both the amputation site and in the central nervous system are thought to contribute to the generation of pain.

Locally, one of the paramount mechanisms is the generation of spontaneous neuronal discharges at the site of amputation. When nerves are cut or injured, swelling, growth and sprouting of axons occurs as an attempt to reconnect the cut nerve. In the case of an amputation, nerves have nothing to reconnect to, which leads to the formation of a twisted mass of axons known as a neuroma. These disorganized nerve fibers in neuromas have an increased excitability that leads to exacerbated pain due to stimulation of those areas; these nerve fibers can actually even fire without any stimulation, leading to spontaneous pain. This abnormal discharge of neurons, originating from areas other than nerve endings, is known as ectopic activity.

Due to the massive neurochemical changes that an amputation induces, this ectopic activity can even arise in neuronal structures other than the cut terminal, such as the cell body of the neuron. This makes it harder to eliminate pain that could otherwise be treated with a local anesthetic, for example.

These changes in the peripheral nervous system lead to changes in the central nervous system. Neurons in the spinal cord also become more responsive and the mechanisms of pain modulation and inhibition become less effective, contributing to the increased sensation of pain.

One important question concerning phantom limb pain is why the pain is perceived as arising from a body part that is no longer there. It is believed that this is a consequence of changes in the somatosensory cortex, the brain area where touch and pain are perceived. The primary sensory cortex is organized as a “map” representing the body, which allows identification of the body part from which pain arises. After amputation, there may be a shift in this representational scheme. This means that, for example, what used to represent the elbow may now erroneously represent the hand; consequently, pain originating from the stump located next to the elbow will be perceived as originating from the hand – the phantom limb.

Due to its characteristics, the pharmacological treatment of phantom limb pain widely overlaps with that used for neuropathic pain conditions. However, these are only moderately effective.

An alternative therapy that has shown promising results is mirror therapy. In this procedure, a visual reconstruction of the lost limb is produced using a mirror box, in which the reflection of the intact limb visually replaces the missing limb. With continued use of this approach, significant decreases in pain have been reported. The therapy is thought to act by restoring the correct representation of the missing limb in the brain, although this is still unclear.


Flor H (2002). Phantom-limb pain: characteristics, causes, and treatment. Lancet Neurol, 1(3):182-9. PMID: 12849487

Flor H, Nikolajsen L, & Staehelin Jensen T (2006). Phantom limb pain: a case of maladaptive CNS plasticity? Nature reviews. Neuroscience, 7 (11), 873-81 PMID: 17053811

Foell J, Bekrater-Bodmann R, Diers M, & Flor H (2014). Mirror therapy for phantom limb pain: brain changes and the role of body representation. European journal of pain (London, England), 18 (5), 729-39 PMID: 24327313

Giummarra MJ, & Moseley GL (2011). Phantom limb pain and bodily awareness: current concepts and future directions. Current opinion in anaesthesiology, 24 (5), 524-31 PMID: 21772144

Moura VL, Faurot KR, Gaylord SA, Mann JD, Sill M, Lynch C, & Lee MY (2012). Mind-body interventions for treatment of phantom limb pain in persons with amputation. American journal of physical medicine & rehabilitation / Association of Academic Physiatrists, 91 (8), 701-14 PMID: 22286895

Image via Oleg Mikhaylov / Shutterstock.

How Temperature Affects People With Multiple Sclerosis
Sat, 13 Sep 2014 11:00:30 +0000

Multiple sclerosis (MS) is a neurological disorder that presents with a myriad of symptoms. The disease causes physical as well as emotional changes in patients. One peculiar symptom seen in people with MS is sensitivity to heat. While heat sensitivity is a symptom of many other conditions as well, in MS a rise in core body temperature exacerbates the other symptoms, a disturbing and unfortunate feature of the disease.

What causes the symptoms in MS?

Multiple sclerosis is an autoimmune disorder. This autoimmunity – the tendency of the body’s immune system to attack its own tissues or organs – results in nerve damage in MS patients. The nerve fibers of the central nervous system have a protective coating called the myelin sheath, which also covers many nerves of the peripheral nervous system. Immune reactions are initiated against myelin, the main component of this protective sheath, which eventually leads to nerve damage.

Most of the symptoms seen in multiple sclerosis are due to this nerve damage. The signs and symptoms depend on which part of the nervous system is affected and on the extent of the damage to the nerves.

Heat sensitivity in people with MS

Studies report that about 60 to 80% of people diagnosed with MS show excessive sensitivity to heat. When exposed to elevated temperatures, they experience a temporary exacerbation of their existing symptoms, and sometimes disturbing new ones. They are sensitive to even a slight increase in core body temperature (0.25°C to 0.5°C), whether from physical exercise or a warmer environment.

In many people with MS, the disease presents with intermittent periods of relapse and remission. But the heat-related worsening of symptoms is different from a relapse: it is actually a ‘pseudo-exacerbation’. The increase in temperature as such does not cause any nerve damage, and most people with MS recognize their intolerance to heat and are aware that their symptoms get worse as temperature rises.

But what causes this heat sensitivity? Scientists initially attributed it to vascular and hormonal causes, but currently propose that the reason is a disturbance or block of normal physiological nerve conduction.

Recent research suggests that demyelination not only slows nerve impulse conduction along the affected fibers but is also linked to a phenomenon called frequency-dependent conduction block. Demyelinated nerves can conduct only single or low-frequency impulses; they cannot effectively conduct high-frequency trains of impulses. On top of this, even a slight increase in temperature can slow or completely block action potentials in demyelinated nerve fibers.

Increase in temperature worsens the symptoms of MS

Most of the symptoms of MS are due to impaired nerve impulse conduction, and with an increase in core body temperature these symptoms worsen in the affected people. Among the various symptoms of MS, fatigue, limb weakness, visual problems, pain, numbness and cognitive dysfunction are the ones most commonly aggravated.

  • In 1890, Wilhelm Uhthoff described a peculiar ‘temporary worsening of symptoms with exercise’ in patients with optic neuritis, an eye condition common in people with MS. Uhthoff noticed that visual symptoms were aggravated when these patients exercised. While he attributed the problem to exercise itself, it was later realized that any action or condition that raises the core body temperature can worsen symptoms in MS patients. This is now called Uhthoff’s phenomenon or Uhthoff’s sign.
  • Fatigue is a common symptom, seen in nearly 70% of people with MS, and premature fatigue occurs when they are exposed to even a slight increase in temperature. Weakness, especially of the limbs, is also perceived during, or aggravated by, a rise in temperature.
  • Central pain is another common symptom that worsens with an increase in temperature. Studies propose that the reason may be damage to the thalamus and the spinothalamic-cortical pathways, leading to thermoregulatory dysfunction. Numbness also worsens with rising core body temperature.
  • Cognitive functions in MS patients are also sensitive to heat. Problems with memory, judgment, concentration and other cognitive skills such as language comprehension become more pronounced as body temperature increases. A recent study found that people with MS demonstrated worse cognitive performance on warmer days.

The influence of temperature on MS patients was established as early as the late 19th century, and heat sensitivity is recognized as an important contributor to the disabling symptoms of the disease. Though a pseudo-exacerbation is a temporary phenomenon, it can be disturbing and severe enough to restrict the activities of MS patients, especially during warmer seasons.


Davis SL, Wilson TE, White AT, & Frohman EM (2010). Thermoregulation in multiple sclerosis. Journal of applied physiology, 109 (5), 1531-1537. doi: 10.1152/japplphysiol.00460.2010

Flensner G, Ek AC, Söderhamn O, & Landtblom AM (2011). Sensitivity to heat in MS patients: a factor strongly influencing symptomology–an explorative survey. BMC neurology, 11 PMID: 21352533

Giuliodori MJ, & DiCarlo SE (2004). Myelinated vs. unmyelinated nerve conduction: a novel way of understanding the mechanisms. Advances in physiology education, 28 (1-4), 80-1 PMID: 15149966

Leavitt VM, Wylie G, Chiaravalloti N, DeLuca J, & Sumowski JF (2014). Warmer outdoor temperature is associated with task-related increased BOLD activation in patients with multiple sclerosis. Brain imaging and behavior, 8 (1), 128-32 PMID: 24146082

Marino FE (2009). Heat reactions in multiple sclerosis: an overlooked paradigm in the study of comparative fatigue. International journal of hyperthermia : the official journal of European Society for Hyperthermic Oncology, North American Hyperthermia Group, 25 (1), 34-40 PMID: 19219698

Rasminsky M, & Sears TA (1972). Internodal conduction in undissected demyelinated nerve fibres. The Journal of physiology, 227 (2), 323-50 PMID: 4647244

Rasminsky M (1973). The effects of temperature on conduction in demyelinated single nerve fibers. Archives of neurology, 28 (5), 287-92 PMID: 4696011

Image via Sergey Nivens / Shutterstock.

Thinking Slow About Thinking Fast – Part III – The Monty Hall Problem Wed, 20 Aug 2014 11:00:42 +0000 To wrap our minds around human behavior it’s helpful to consider why certain behaviors may have evolved. Natural selection tells us that behaviors that increase our chances of passing along our genes will continue to show up in future generations. It therefore follows that aspects of our behavioral tendencies at some point likely conferred an advantage over alternative behaviors. Efficiency may be the specific advantage afforded to us by our so-called irrational behaviors.

Before delving into classic examples of “irrational” behavior, I’d like to share my favorite example of how our brains are not built to choose optimally in certain scenarios wherein choosing optimally requires a large amount of energy.

In 1990, a magazine columnist named Marilyn vos Savant was approached with a question about the best strategy to use on the game show Let’s Make A Deal. On the show, a player is told that there is a large prize behind one of three doors, and if he chooses the correct door, he wins the prize. Once the player has selected a door, the host opens one of the two remaining doors, revealing no prize behind that door. The host then gives the player the option to change his guess as to which door leads to the prize.

If you were a contestant, would you stick with your original choice or switch your choice at the last minute, given the opportunity? According to Marilyn, switching to the other door would make you more likely to win the prize. Don’t believe it? Neither did dozens of mathematicians who read Marilyn’s response. But it turns out that Marilyn was right, and here’s why:

When the player originally chooses a door, there is a 1/3 probability that he has chosen the door that will win him the prize and a 2/3 probability that he has not. The key to the problem is that the host will always open a door that does not lead to the prize. Thus, regardless of where the prize actually is, the host, by eliminating one of the two doors that do not lead to the prize, gives the player the opportunity to switch his bet from a 1/3 probability of winning the prize to a 2/3 probability of winning.
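If the arithmetic feels slippery, it is easy to verify empirically. Below is a minimal Monte Carlo sketch of the game in Python (the function and variable names are my own, for illustration): it plays the game many times with each strategy and tallies the win rates.

```python
import random

def play(switch: bool, doors: int = 3) -> bool:
    """Play one round of the Monty Hall game; return True if the player wins."""
    prize = random.randrange(doors)
    choice = random.randrange(doors)
    # The host always opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in range(doors) if d != choice and d != prize])
    if switch:
        # Switch to the single remaining unopened door.
        choice = next(d for d in range(doors) if d != choice and d != opened)
    return choice == prize

trials = 100_000
stay_wins = sum(play(switch=False) for _ in range(trials)) / trials
switch_wins = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay_wins:.3f}, switch: {switch_wins:.3f}")  # ~0.333 vs ~0.667
```

Running this, the staying strategy wins roughly a third of the time and the switching strategy roughly two thirds, matching the analysis above.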

So – the “rational” choice in Let’s Make A Deal is to switch doors before the door with the prize is revealed. But our brains do not readily identify the advantage associated with the switch. We have to almost turn off our fast brain to grasp the solution to this problem, which has come to be known as The Monty Hall Problem. Before Marilyn’s publication, Let’s Make A Deal players reliably defaulted to their original choice – an example of what is known as the “status quo bias”.

Why are there scenarios in which we tend to choose the seemingly less optimal option? My suspicion is that the relative advantage afforded by the “optimal” option does not outweigh the energy cost associated with fully assessing the options, particularly in light of the fact that the option we choose against in these scenarios does not guarantee a gain (remember, switching doors increases one’s chances of winning the prize but does not guarantee a win).

If we employ a rule of thumb, or heuristic, such as “go with your gut”, that can be applied habitually to all choice situations where outcomes are probabilistic, we can save an amount of energy that may be more valuable than forgone gains. And indeed, choosing the status quo is associated with increased activity in the parts of the brain that are active while performing habits, while choosing against the status quo activates the parts of the brain important for slow, laborious thinking. It therefore usually takes more effort and energy to NOT choose the status quo.

One last consideration – before moving on to the Framing Effect in the next post – is why our brains did not evolve to more efficiently identify advantageous choices like switching doors in Let’s Make A Deal. Throughout human history, we have dealt with an inundation of stimuli whose meaning we have learned through experience. However, gambling-type choices with known risk probabilities are modern choices without obvious prehistoric analogues, so our brains likely did not evolve to assess them with high precision. Perhaps they would have, had our ability to acquire food and other resources depended on identifying subtle statistical advantages.

Image via Designsstock / Shutterstock.

When To Think Less About Your Choices Mon, 18 Aug 2014 13:26:41 +0000 Smart people have a tendency to think hard about the choices they make. Who are you going to marry? What house are you going to buy? What flavor of gelato should you get? Some make lists of pros and cons, some try to think about the most important features of the choices, and some make up new strategies on the fly. The more important the decision, the more we feel it’s warranted to think hard about it. It seems self-evident that thinking more would produce better choices. But in science, even self-evident things have to be tested.

Psychologists Ap Dijksterhuis and Zeger van Olden ran an experiment in which people were asked to choose which art poster they liked best. The first group of people, the conscious thought condition, were shown the posters on a screen, one by one. They were encouraged to look at the posters carefully, and to even list what they liked and disliked about the posters on a piece of paper. Finally, all posters appeared on the screen and the people clicked which one they liked best.

In the unconscious thought condition, people were shown the posters, and then did anagrams for seven minutes and thirty seconds – the same amount of time that the conscious thought condition people were thinking about the posters – and then they were shown the posters again and asked to make a choice. People in both groups got to take home the poster they chose.

Three to five weeks later the people were phoned and asked how satisfied they were with their poster, on a scale of one to ten. Contrary to what we might expect, people in the unconscious thought condition, the ones who did anagrams instead of thinking about the posters, were happier with what they went home with!

What’s going on here? When people think hard about something, they tend to focus on only a few variables. Conscious thought is like that. Unconscious thought, on the other hand, tends to be a bit more holistic, and relies more on emotion. While the unconscious thought group was solving anagrams, their unconscious minds were working on the posters. Interestingly, there was another group, the immediate decision group, which chose a poster immediately, with no time to think about it, and no anagram delay. This group had the same poster satisfaction scores as the conscious thought condition. This suggests that unconscious processing is useful – your immediate reaction might not be the best, and your conscious thought might not be the best either.

Overthinking can mean that too much attention is being paid to unimportant attributes. When it comes to liking a poster, it is your feelings about the poster that matter the most. By limiting the input of the rational system, intuition gets a greater weight in the generation of predicted and actual emotions, increasing one’s ability to correctly know what one feels, or will feel, about a choice.

In short, it’s easy to overthink things.


Dijksterhuis, A., & van Olden, Z. (2006). On the benefits of thinking unconsciously: Unconscious thought can increase post-choice satisfaction. Journal of Experimental Social Psychology, 44, 692-698.

Image via Sergey Nivens / Shutterstock.
