Opinion – Brain Blogger: Health and Science Blog Covering Brain Topics

In a World of Never Ending Tension, Seek Compassionate Neutrality (30 Dec 2017)

Amidst the rising tensions in the world around us, people find themselves in the unique position of having to choose between passive observation and active participation, leading some to toss their opinions into the fray of voices speaking out within society today.

In contrast to how we have been accustomed to living and being for several decades, the pendulum of Change has begun its swing in the opposite direction. The social tension created by this shift has many of us unsettled—enough so that many of us have turned inward in order to make sense of our external reality.

The need to seek balance and stability is inherent within us as human beings. We are meant to understand the other person’s point of view and seek the middle ground between our two views in order to create perspective. Out of that informed perspective, we make decisions on how to act, our behavior evolves from those decisions, and consequently, our lives come into being from those personal thoughts and actions. This is how human beings are naturally designed to grow, evolve, and become within an ever-changing world. If we are in the right place (along the spectrum of choices), at the right time (to make a decision), then right action (behavior and action) is effortless as we pivot our way into a greater life. This is one of the key tenets of my book, How Me Found I: Mastering the Art of Pivoting Gracefully Through Life.

Compassionate Neutrality vs. “My Way or the Highway”

The word “Dual” means two or something composed of two parts – harmony and balance are achieved in the coming together. The word “Duality” describes two parts in opposition to each other – competitive adversaries moving away from each other towards extreme polarization.

In that context, we are naturally hardwired to seek balance, “true inner balance”. Yet our external world is filled with ego-driven polarization – “duality” – the corrupted version of what “dual” really means. Because we are naturally designed to always seek balance (whether we are aware of it or not), and because the only constant we can expect in life is “Change”, balance in an environment of dynamic change is achieved through a “two-party system”, a dual-system structure of compensating complementary counterparts. As humans, our psychological dual-system structure is the Ego-Heart handshake. This uniquely human complementary relationship is inherent within what I call a “Natural Person”. To strive for continual balance is our natural state of mind in dealing with changing realities.

Unfortunately, our society, in its current presence of mind, does not recognize that the ego and the heart have a dual-system relationship, meant to counterbalance each other so that we humans can continue to evolve within an ever-changing, dynamic environment. For many of us, as we age and grow, the individual personalities that we exhibit outwardly are reflections of our egos maturing as we learn to adhere to the conditional social norms set forth for us to survive and operate within society. In contrast to the “natural person”, this ego-developed persona is what I call our “Conditioned Personality”.

Today, the word “dual” has become synonymous with the word “duality”, “It’s my way or the highway,” or “I’m right, you’re wrong”. Absent the true meaning of a two-party system of balance, we have disintegrated into a mindset where everything is now seen through the corrupted filters of polarized duality. Collaborative and communal dialogue has given way to personalized monologues based on absolute judgment and opinion.

This need to convince people that there is one way, and only one way – “my way” – is the cause of the rising tensions in the world today, as evidenced in the political, socio-economic, and ethno-diverse arenas of discussion. Unaware that we are inherently hardwired to desire balance, and ignorant that it is a partner to the heart, the ego interprets that innate desire as a need to convince others that its viewpoint is the right way to go; it disregards, seeks to dominate, and even tries to eliminate the natural role the heart is meant to play. For the ego, all roads must lead to Rome, and it is all about me, only me. There can be no other.

We all have individual egos, each convinced that its way is the correct way to go. Our own heartfelt knowing and the innate need for natural equilibrium have been mutated into a drive for a dominant view, with no room left for another view to exist. We have abandoned the middle of the field to take up positions at either end. As it meets resistance from other egos, our ego’s need for superiority can only lead to an outcome of aggressive force, more domination, and ultimately violence. We will kill to be right. We must be right at all costs. The end justifies the means.

And yet, the heart does exist and is very much a part of our physical and psychological makeup. It cannot be ignored, subjugated, or disregarded. Without the heart, we physically cease to exist. Without the heart, we have no conscience. It is through the heart that we connect to the greater wholeness of life in accordance with God, Nature, or the Universe, whichever name you choose for the larger part of who we are. The heart inherently knows that balance is necessary for our very existence as a species. It defines our humanness, guides our humanitarian endeavors, and nourishes our humanity. It knows that it is the counterpart to the ego in a dual-system structure designed to move towards compassionate neutrality, thus bringing well-being into our lives. It is what allows us to respond to the environment in a nonjudgmental and loving way. It doesn’t need to be justified; it just needs us to be aware so that it can guide the ego towards creating a better life.

Together, this heart-ego handshake is what allows us to make sense of what is happening in our extrinsic environment and make the right intrinsic decisions that can mutually benefit ourselves and others, not only ourselves. It is this communal awareness that we are each individually a person, and yet connected to each other as part of an overall human community, that gives us comfort that we are never truly alone, unprotected, isolated, or abandoned. This awareness is the umbilical existence of our personality within the nesting doll collective of the human species.

If we take care and look out for the welfare of others, then in turn we also receive benefit ourselves. Conversely, if we look for balance within, then our external society also receives the benefit of that internal balance, because we will emanate that behavior out into our external world.

In order for us to de-escalate the rising tension and violence in the external world of our society today, we, as members of humanity, need to look within and seek balance the way a natural person would. We need to reestablish the dual-system structure of the ego-heart handshake through intention, voice, and action. Because we are human and inherently designed for balance, we will naturally always seek to return to a state of compassionate neutrality, regardless of how long it takes to do so and despite what our ego thinks. The pendulum will eventually swing from any extreme edge to the center fulcrum.

So when we are at the extreme edge of a pendulum swing (the ego’s viewpoint), the pull from the other extreme edge (the heart’s viewpoint) will become intense enough to cause the pendulum to begin swinging back to the center, where the heart’s communal compassion and a neutral ego’s informed judgment jointly reside. It is Mother Nature’s way of ensuring survival in a dynamically changing way.

When our egos fail to understand this fundamental principle of life, we resist the innate impulse to let the other viewpoint change our mind. Instead, we will literally fight to enforce our viewpoint and maintain our position. We don’t want the pendulum to swing at all. We want to preserve the status quo, defiant to the extreme, and will react violently against all natural forces of movement, determined to hold our current position. This is the cause of the rising tensions in our world today.

We All Have a Choice

As an alternative, may I suggest that it is compassionate neutrality that we all seek: an aligned response to the natural forces of constant change occurring within our environment. It is what Buddhism calls “the middle way”. It is operating from the center fulcrum of the pendulum swing, the vesica piscis of creation, the ability to see both sides and optimally benefit from the combined viewpoints. It is living life from both the macro and micro views, from both the mountain peak and the valley below. This is how we survive, evolve, and grow as humans being humans, citizens of humankind, and members of the human species.

Cracking the Code–Revealing the Secret Behind Our Perception (3 Nov 2017)

When you’re an eye doctor, and I’ve spent my entire career as one, you learn a lot about how people use, and misuse, the sense of sight to perceive the world around them. As humans, we’re constantly interpreting and occasionally manipulating our experiences to distinguish fantasy from reality. Some people are better at this than others. Some, for example, are consistently taken in by conspiracy theories or fake news stories, whereas others can quickly sniff them out as bogus.

A few years ago, I asked myself: what’s the difference between people with keen powers of perception and those with weaker powers? Is it education? Experience? Genetics? I began researching the topic and discovered there isn’t even a term to classify our power of perception, so I coined one. I call it perceptual intelligence, and it’s the title of my new book (in bookstores this month).

Perceptual intelligence (PI) is our ability to interpret sensory data and arrive at a decision. Just as with other forms of intelligence, some people have higher PI than others. Good decision-makers exhibit a high level of perceptual intelligence, whereas bad decision-makers demonstrate weaker PI.

PI, I learned, is an acquired skill. In fact, we can improve our PI through awareness and practice. You may, for instance, find yourself overreacting to certain situations or circumstances. But with proper knowledge and a different perspective, you can train yourself to arrive at a more appropriate reaction.

In this fast-paced digital age, where we’re forced to make decisions on the fly, we often “leap before we look.” That might mean handing over your credit card number without verifying a website’s security, or trusting a news story without considering the integrity of the source. People with high PI, however, consistently “look before they leap.” Before making a decision, they instinctively ask themselves: Am I interpreting this sensory data correctly and making the best choice?

Every millisecond, our senses take in a massive amount of information, which then travels to the brain.  The brain, in turn, is where our perceptions originate. Those perceptions may accurately reflect reality but may also derail us toward fantasy. The driving question behind my book is: Why do our perceptions sometimes clash with reality? There are many reasons, I discovered.

One is medical. For example, a condition known as synesthesia can cause a person to literally see music or taste sounds. (A second form of synesthesia connects objects such as letters and numbers with a sensory perception such as color or taste.) Even the common cold, which affects the eyes, ears, nose, and throat—not to mention the brain, when our heads fill with congestion—has been known to distort our power of perception. When we are under the weather from the flu, our perception can become so foggy that we develop a pessimistic view of situations we might otherwise view with optimism. Another medical factor influencing perception is sleep deprivation. As any insomniac or parent of a newborn will tell you, a lack of sleep can distort our perception of the world, sometimes even fogging our memory of what happened during our sleepless state.

An obvious (and sometimes deadly) influence on our power of perception is drugs and alcohol. We don’t need to review criminal cases and “beer goggle” studies to see how drugs and alcohol impair our senses and affect our judgment.

There’s also our psychology, biology, genetics, habits, cultural upbringing, and memories, all of which combine to create our unique perceptual filter, influencing our decisions, thoughts, and beliefs. The pope’s belief in life after death, for example, is diametrically opposed to that of theoretical physicist Lawrence Krauss. Yet each is convinced that his view is the correct one. Is the pope blinded by faith? Is Dr. Krauss closed to any idea that isn’t evidence-based? We each create a version of the world unlike anyone else’s. And how could it not be? It is shaped by our perceptions.

Often, we mold our perceptions like Play-Doh to suit the story we create of our lives. But sometimes our perceptions work behind the scenes, shaping our thoughts and behaviors without our realizing it. When we have a vague memory of a painful incident, what purpose does it serve? Why do we hold onto an incorrect and hurtful perception when instead we could make something good of it? People with finely tuned PI can identify and topple faulty ideas that try to sabotage them.

Part of strong perceptual intelligence is recognizing that your mind is more plastic than you think and can be molded. PI can be improved like any other skill, such as driving a car, playing a sport, or learning an instrument. Improving PI can have a profound effect on your life. Better decisions can reduce the risk of financial, health, and family problems, along with other issues that can arise from low perceptual intelligence. You could say, therefore, that high PI even improves happiness.

Brian Boxer Wachler, M.D., an expert in human perception, is America’s TV Eye Doctor and internationally renowned for his expertise in keratoconus treatments, LASIK, and other vision correction procedures. His book, Perceptual Intelligence (published by New World Library), was released October 17, 2017, and is available in bookstores and through Amazon, Barnes & Noble, and IndieBound.

Image via PublicDomainPictures/Pixabay.

David And Goliath: The Art of Turning All Weaknesses Into Strengths (27 Oct 2017)

Hello, everyone! I’m Arda Cigin, founder of Stoic-Leaders.com, and in this article, I’m about to change your whole mindset towards all “disadvantages” and “less than stellar situations”.

How?

I’ll be telling you about the battle between David and Goliath as an instructive case study to understand how advantages can actually be the source of our greatest weakness, and vice versa.

And then, I’ll give you many practical solutions and mindset shifts that you can apply to your life today to turn disadvantageous circumstances into your greatest strengths.

But before we get into the insights, for those who may not know, let’s analyze David and Goliath’s timeless story.

– Why even tell a story?

I want to tell a story because the human brain relates through stories, not facts and theories. If you truly want to take away an action plan at the end of this article, pay attention to this timeless story.

 

The Instructive Story of David and Goliath

Goliath is this giant who is six foot nine, wearing a bronze helmet and full body armor. He is carrying a javelin, a spear, and a sword.

Why? Because he is about to go into a fierce battle.

The giant Goliath was about to fight a fledgling shepherd boy named David.

David, a fragile young man, inherently knew he was vastly weaker than his opponent, yet he wanted to take a stand and confront the wrath of Goliath nonetheless.

Is David’s confidence misplaced? Maybe there is more to David than meets the eye…

Only time will tell.

Naturally, everyone judged David to have no chance against Goliath. Most people who looked at this combat duo would bet their money on Goliath.

And trust me. You would too.

If we were to observe Goliath, he was prepared for close combat since he was wearing heavy armor and was armed with various spears and swords.

What many didn’t know was that the great and almighty Goliath, who was regarded as the sure winner of this fight, had one fatal flaw:

He had awful eyesight.

And so the fight started.

At the beginning of the battle, Goliath shouted, “Come to me!”

Yet do not mistake this for an arrogant battle cry. Goliath needed David to be within arm’s length so that he could see David and defeat him. It was more a desperate cry than anything else, a definite side effect of his weak eyesight.

If you think about it, Goliath didn’t excel in close combat just because he chose to do so. He had no other option but to excel in close combat.

Remember his awful eyesight? If he was to be a strong warrior, he could not be a long-range one like an archer. Combined with his bulky physical nature, a simple fault in eyesight turned Goliath into a wrathful close-range warrior with almost-blind eyes.

Goliath was on what Robert Greene calls the “death ground”.

He was trapped and had no other option. Either he was going to master close combat or he’d lead his life as a blind giant. With the help of outside pressures and internal obstacles, he became the best in his niche—ruthless close combat.

David, the fragile young man, may have been a shepherd, but he was a smart boy. He was not going to fight Goliath in close combat. That would be foolish.

Therefore, he’d prepared himself for long-range combat—a kind of fight Goliath was not prepared for.

As Goliath started to get agitated, David took out his trusty slingshot, swiftly positioned a rock, pulled back the end of the sling, and shot right at the forehead of the giant.

Goliath couldn’t even see the rock because of his faulty eyesight.

The speed at which the rock traveled was more than enough to put Goliath into a deep slumber he’d never wake up from.

And so the shepherd boy won the fight he was predestined to lose. All the cards were stacked against him, or so it appeared.

A supremely disadvantaged, fragile man came to be victorious against a supremely advantaged killing machine.

Naturally, everyone was shocked. They told themselves how lucky David was.

But this has nothing to do with luck.

All the spectators were wrong. There was one thing in which David was far superior to Goliath.

It was neither his size nor his strength, but his ability to think strategically.

And this exact strategy that David had used to kill Goliath will be the topic of our discussion today.

______

Most often in life, strategic thinking is the secret ingredient to turning unfavorable situations into favorable ones.

Understand: Strategic minds will always rise victorious—whatever the circumstance, whoever the enemy.

 

What Can We Learn From The Grand Strategist, David?

1) Adaptation: make it your greatest asset

When we make decisions, we tend to repeat the same tactics and maneuvers we’ve familiarized ourselves with, provided they proved successful before.

Humans are innately lazy creatures and naturally, we cling to what succeeded before and expect it to continue to do so in the future.

This move will prove ineffective in the long term.

Realize: by doing so, you only create rigid pathways, neural connections, and habits you are better off not adopting.

I want you to see life as a chess game. As long as you repeat the same moves, you are bound to lose.

Always have the flexibility to adapt to your ever-changing circumstances. If something doesn’t work (e.g., self-actualization efforts or business and career success), then change your actions and thoughts. Start thinking strategically to find options that you haven’t thought of before.

Make adaptability your greatest asset.

As Darwin pointed out,

It is neither the survival of the strongest nor smartest, but the most adaptable.

 

2) Shift the Battlefield 

Close combat? That’s what Goliath wants.

Use your wits: In this circumstance, always use the slingshot, never the sword.

Understand: Never play in a field you are oblivious to. The knowledge of the terrain will give you unimaginable and untold power.

Realize: no one can force you to play a game you suck at. If they attempt such a thing, just politely decline and, as David did, lead them into playing in your arena—a field where no one but you holds the cards. A field where you become the god and they become the puppet.

The lure of such power is undeniable, don’t you think?

 

3) The Phenomenon of the Masked Opposite

Most often in life, people tend to mistake appearance for reality. In your interactions with people, always remember the facade of appearances. No one is as they appear to be.

Everyone sees what you appear to be, few experience what you really are. – Niccolò Machiavelli

When you confront your enemies, never be intimidated by their appearances. Instead, look at the parts that make up the whole. Once you’ve identified the weakness, attack with all your might. They’ll fall just as swiftly as Goliath did.

Remember: The hypocritical nature of appearances always deceives the naive.

If you see an extreme behavior pattern in someone (e.g., superiority, arrogance, extreme shyness, avoidance), you are most often confronted with the phenomenon of the masked opposite—what you see is actually the exact opposite.

For example:

If someone acts particularly arrogant, realize that they are actually trying to mask their insecurity and lack of confidence. Someone who is already confident wouldn’t need to act superior in the first place.

If someone smiles incessantly and laughs at every little thing you say, would you assume they are being natural?

No, of course not. They are only using what I’d like to call “the supreme joker mask”. No one can be extremely happy and euphoric all the time. Therefore, they are only acting happy when you are around.

Maybe they like you and want you to like them back; maybe they want to get close to you and hurt you; or maybe they just want to break the ice. Whatever the reason might be, they are wearing a mask that most definitely does not express their actual feelings.

You need to train yourself to see what is underneath the mask. Everyone you meet will wear some sort of mask. And there is a reason for that.

If we openly judged people around us, naturally we’d generate unnecessary offense and malice. Therefore, from an early age, most of us have learned to hide our real thoughts and emotions.

Otherwise, we’d be vulnerable and open to attacks. We’d be left alone and isolated. To prevent such unfavorable situations, we choose to cooperate and hide our less than favorable qualities.

This is really nothing more than our ancestral survival instincts.

Therefore, if someone is actually insecure but masking it with arrogance, you need to get them to drop their guard.

They need to lose control. Do something that will make them panic. Anger them on purpose if necessary.

 

Final Words

As final words, I want you to remember David and Goliath’s story every time you find yourself in a less than ideal situation.

– You struggle financially but want to start your own coaching business?

Well, that’s good, because it is possible to bootstrap an online business by being creative and resourceful.

While wealthy business owners spend billions of dollars on advertisements—mistaking ad-generated customers for long-term customers—you’ll find your unique selling proposition and create loyal customers much faster thanks to your creative product, resourceful marketing, and sheer hustle.

Starting a business without capital, especially nowadays, is actually a blessing in disguise.

– You want to write a book but you are not a native speaker?

Well, that’s good. As a language learner, your humble determination in studying grammar, vocabulary, and phrases will give you a better grasp of the nuances, whereas most native writers get overconfident and skip many important stages of becoming a writer—understanding how narratives work, how readers are captivated, and how great writers structure stories.

Your humble and hardworking attitude towards writing will enable you to progress at a faster rate than most native writers.

Can you see the power of this strategy?

Nothing can be a disadvantage for you if you are equipped with the right mindset.

Before we wrap this up, don’t forget to share this article and comment below if you’ve experienced a similar “David and Goliath” situation.

Were you in David’s or Goliath’s position? Do you have any specific stories you would like to ask me about?

I’d love to hear your story. (I reply to almost all comments)

Follow Me: Astrocytes in Spinal Cord Repair (9 Sep 2017)

You are standing in the middle of King’s Cross with a postcode in your hands, your feet trodden on by the busy crowd. So much has changed since you were last here 20 years ago, and every way you go seems to meet frowning faces set in their own path. By a lucky coincidence, somebody recognizes you and shakes your hand—they are going the same way. What are the odds! “Thank you”, you mutter, “you are a star”.

Through thistles and thorns

Spinal cord injury (SCI) leaves patients isolated from their own bodies, with devastating life-long consequences. The limited nature of adult human spinal cord repair is frustrating but understandable from a developmental point of view. If the fasciculus gracilis, a tract of the spinal cord responsible for lower limb sensation that relays touch from the foot, gets traumatically disrupted, the dorsal root ganglion (DRG) cell axon would not only have to cross the site of injury, but also find the correct path all the way up—up to 50 cm in a tall individual. Even if that tremendous task is accomplished, there still remains the challenge of finding the correct second-order neuron of the nucleus gracilis that, in turn, connects to the third-order thalamic neuron transmitting signals to the part of the somatosensory cortex representing that foot. Not something that can be easily done without help.

On the way to connectivity restoration

It has been noted that:

The three main aims of [axon regeneration] research are: to initiate and maintain axonal growth and elongation; to direct regenerating axons to reconnect with their target neurons; and to reconstitute original circuitry (or the equivalent), which will ultimately lead to functional restoration. Although many interventions have now been reported to promote functional recovery after experimental SCI, none has fulfilled all three of these aims. In fact, only the first aim has been extensively studied and convincingly demonstrated.

Indeed, even though the possibility of axon regrowth in the adult mammalian nervous system has been shown, the evidence supporting neuronal connectivity restoration is rarely convincing. Without careful guidance, aberrant axonal regrowth is a serious obstacle to functional regeneration.

Interestingly, long-distance regeneration of axons is not the only mechanism through which normal spinal cord function can be restored. Injury is known to induce plasticity by several mechanisms, including unmasking of inactive spared pathways, compensatory collateral sprouting from intact pathways, or an increase in receptor number due to denervation hypersensitivity. For example, Hofstetter et al. commented that:

Excessive local sprouting in our present study might have facilitated the formation of [novel corticospinal tract] pathways, although we could not detect a correlation between the amount of local sprouting and motor recovery.

Whilst researchers are trying to find a way to artificially guide axons to their correct targets and induce plasticity, there is a cell type that routinely orchestrates these processes. Astrocytes express a range of axon-attractive and repulsive molecules that are crucial for proper development and plastic reorganisation of the adult nervous system. They provide a physical adherent substrate for growing neurons and secrete extracellular neuro-attractants like vimentin, as well as repellents like chondroitin sulphate proteoglycans (CSPGs) and semaphorins.

A scar or a star?

The neurocentric paradigm considered astrocytes a barrier to healing after SCI for almost a century, despite the lack of evidence that purely neuron-based therapies are sufficient for full regeneration. However, if the derogatorily dubbed astrocytic scar is removed or prevented from forming after SCI, not only do axons fail to spontaneously regrow through the lesion, they also become unable to regrow upon delivery of stimulating growth factors that promote growth in the presence of astrocytes.

Anderson et al. show that astrocytes are not responsible for the bulk of the inhibitory CSPG production after the SCI lesion, as hypothesized previously, but instead provide crucial axon-supporting molecules. The injury environment primes reactive astrocytes to re-express developmental synaptogenic molecules such as thrombospondin-1 that result in protective neuroplasticity.

Not all are created equal

Despite these recent discoveries, skepticism towards astrocyte-based therapies prevails. Exacerbating this skepticism, several studies that used neural stem cell (NSC) transplantation in an attempt to repair spinal cord damage observed neuropathic pain development that correlated with the astrocytic differentiation of NSCs.

Nevertheless, it is becoming clear that the astrocytic population is far from homogeneous. Subsets of astrocytes with different permissive and restrictive qualities towards the growth of specific types of axons are found within different regions of the spinal cord, where they guide region-specific development of sensory and motor axonal tracts. Not surprisingly, certain astrocytic types are more selective towards regeneration and plasticity of specific types of neurons.

Davies et al. discovered that astrocytes pre-differentiated from embryonic glial-restricted precursors (GRPs), called GRP-derived astrocytes (GDAs), are capable of promoting axonal regrowth alongside functional recovery and preventing atrophy of axotomized neurons upon transplantation into rodents with SCI, a method that outperforms transplantation of undifferentiated neural precursors.

Importantly, the method of astrocytic differentiation of precursor cells plays a crucial role in determining their regenerative capacity. If bone morphogenetic protein-4 (BMP4) is used in GDA astrogenesis, the resulting population creates a strongly supportive environment upon transplantation. In contrast, the same GRPs treated with ciliary neurotrophic factor (CNTF) have poor locomotor regenerative properties, but induce active calcitonin gene-related peptide (CGRP)-positive nociceptive C-fiber sprouting that is associated with allodynia.

This raises the possibility that the inflammatory environment of the injured spinal cord promotes differentiation of endogenous and transplanted astrocytes into a subtype that is not optimal for restoration of rubrospinal or dorsal column tract axons, but may be selectively supportive of pain-conducting C-fibers.

In addition to transplantation strategies, modification of endogenous astrocyte function can be employed. For example, oral administration of denosomin results in functional recovery in mice after SCI through increases in astrocyte proliferation, survival, and vimentin secretion that promotes locomotor raphespinal axon outgrowth.

Learning from the experts

Finally, it is noteworthy that the astrocytic subtypes that promote recovery appear to physically realign local astrocytic processes in a linear fashion. The authors speculate that this linear organization provides more straightforward routes for regenerating axons to follow. Another exciting and unexplored possibility is that these astrocytes help to restore the endogenous astrocytic network function that gets disrupted by the injury, with beneficial neuroplasticity as its natural corollary.

Stepping back from the exclusively neurocentric view of SCI may allow for unexpected advances in functional restoration. Ultimately, the bulk of research on the intricacies of axonal guidance and plastic synapse rearrangement is an attempt at recapitulation of normal astrocytic functions. When offered a helping hand, take it, and see where it leads you.

References

Bradbury EJ, McMahon SB. Spinal cord repair strategies: why do they work? Nat Rev Neurosci. 2006;7(8):644–53. doi: 10.1038/nrn1964.

Pernet V, Schwab ME. Lost in the jungle: New hurdles for optic nerve axon regeneration. Vol. 37, Trends in Neurosciences. 2014. p. 381–7. doi: 10.1016/j.tins.2014.05.002.

Smith GM, Falone AE, Frank E. Sensory axon regeneration: Rebuilding functional connections in the spinal cord. Vol. 35, Trends in Neurosciences. 2012. p. 156–63. doi: 10.1016/j.tins.2011.10.006.

Weidner N, Tuszynski MH. Spontaneous plasticity in the injured spinal cord — Implications for repair strategies. Mol Psychiatry. 2002;(7):9–11. doi: 10.1038/sj.mp.4001983.

Hofstetter CP, Holmström NAV, Lilja JA, Schweinhardt P, Hao J, Spenger C, et al. Allodynia limits the usefulness of intraspinal neural stem cell grafts; directed differentiation improves outcome. Nat Neurosci. 2005;8(3):346–53. doi: 10.1038/nn1405.

Allen NJ, Barres BA. Signaling between glia and neurons: Focus on synaptic plasticity. Vol. 15, Current Opinion in Neurobiology. 2005. p. 542–8. doi: 10.1016/j.conb.2005.08.006.

Freeman MR. Sculpting the nervous system: Glial control of neuronal development. Curr Opin Neurobiol. 2006;16(1):119–25. doi: 10.1016/j.conb.2005.12.004.

Fallon JR. Preferential outgrowth of central nervous system neurites on astrocytes and Schwann cells as compared with nonglial cells in vitro. J Cell Biol. 1985;100(1):198–207. PMID: 3880751.

Shigyo M, Tohda C. Extracellular vimentin is a novel axonal growth facilitator for functional recovery in spinal cord-injured mice. Sci Rep. 2016;6(February):28293. doi: 10.1038/srep28293.

Wang H, Katagiri Y, McCann TE, Unsworth E, Goldsmith P, Yu Z-X, et al. Chondroitin-4-sulfation negatively regulates axonal guidance and growth. J Cell Sci. 2008;121(18):3083–91. doi: 10.1242/jcs.032649.

Molofsky AV, Kelley KW, Tsai H-H, Redmond SA, Chang SM, Madireddy L, et al. Astrocyte-encoded positional cues maintain sensorimotor circuit integrity. Nature. 2014;509(7499):189–94. doi: 10.1038/nature13161.

Chu T, Zhou H, Li F, Wang T, Lu L, Feng S. Astrocyte transplantation for spinal cord injury: Current status and perspective. Vol. 107, Brain Research Bulletin. 2014. p. 18–30. doi: 10.1016/j.brainresbull.2014.05.003.

Anderson MA, Burda JE, Ren Y, Ao Y, O’Shea TM, Kawaguchi R, et al. Astrocyte scar formation aids central nervous system axon regeneration. Nature. 2016;532(7598):195–200. doi: 10.1038/nature17623.

Tyzack GE, Sitnikov S, Barson D, Adams-Carr KL, Lau NK, Kwok JC, et al. Astrocyte response to motor neuron injury promotes structural synaptic plasticity via STAT3-regulated TSP-1 expression. Nat Commun. 2014;5:4294. doi: 10.1038/ncomms5294.

Macias MY, Syring MB, Pizzi MA, Crowe MJ, Alexanian AR, Kurpad SN. Pain with no gain: Allodynia following neural stem cell transplantation in spinal cord injury. Exp Neurol. 2006;201(2):335–48. doi: 10.1016/j.expneurol.2006.04.035.

Davies JE, Huang C, Proschel C, Noble M, Mayer-Proschel M, Davies SJA. Astrocytes derived from glial-restricted precursors promote spinal cord repair. J Biol. 2006;5(3):7. doi: 10.1186/jbiol35.

Davies JE, Pröschel C, Zhang N, Noble M, Mayer-Pröschel M, Davies SJA. Transplanted astrocytes derived from BMP- or CNTF-treated glial-restricted precursors have opposite effects on recovery and allodynia after spinal cord injury. J Biol. 2008;7(7):24. doi: 10.1186/jbiol85.

Teshigawara K, Kuboyama T, Shigyo M, Nagata A, Sugimoto K, Matsuya Y, et al. A novel compound, denosomin, ameliorates spinal cord injury via axonal growth associated with astrocyte-secreted vimentin. Br J Pharmacol. 2013;168(4):903–19. doi: 10.1111/j.1476-5381.2012.02211.x.

Image via StockSnap/Pixabay.

Projection – When Narcissists Turn the Blame on You (16 Oct 2016)

Ah, projection. The fine art of making me guilty of your vices.

Projection

No one projects better or more frequently than a narcissist. They’ve practiced, honed and refined projection to a fine art.

Whatever they’re up to, by some mental “abracadabra,” suddenly they’re innocent and you’re actually the one up to no good.

Deep In The Race

Since Eve ate the “apple” and blamed it on the serpent, projection has been a quintessential part of the human race. Since Adam ate the “apple” and blamed it on Eve, men have been projecting onto their wives. Wives have been projecting onto their husbands. Parents have been projecting onto their children. And siblings have been projecting onto each other.

C’mon, you know you’ve done it. I certainly have. And while projection may be elevated to a high art-form by narcissists, we’ve all done it or been tempted to do it. We’ve all got a little corner of narcissism in our souls. That’s why we understand them so well.

As I see it, projection proves how deeply and profoundly all Homo sapiens, narcissists or otherwise, not only inherently know the moral code…but believe in it.

Why Project?

Guilt? Maybe.

Envy of others’ innocence? Perhaps.

Avoidance of the consequences of wrongdoing? Now we’re getting somewhere.

The need to be perfect! Bingo + all of the above.

Gotta protect that fragile little ol’ ego, y’know.

Equality

Projection is the great equalizer. Everyone the narcissist knows is equally guilty.

Their children, especially the one assigned the role of “scapegoat,” suffer the most from being projected upon. Trained to be humble and submissive, brainwashed to feel false guilt, they take on their elders’ vices without critical thinking, bearing the “sins of the fathers” as a burden like Christian in Pilgrim’s Progress.

Spouses of narcissists suffer from a shit-load of projection too. They are in the unenviable position of being accused o’er and o’er of infidelity. Y’know, the infidelity the narcissist is actually engaging in.

Co-workers are also a dandy target for career projection. After all, fill-in-the-blank is never the narcissist’s fault, yet blame must be assigned.

Projection In Literature

In his charming books on veterinary practice in Yorkshire, James Herriot wrote a fascinating line about his highly, shall we say, eccentric partner, Siegfried Farnon. James is speaking to Tristan, the long-suffering younger Farnon brother.

You know the one thing I can’t stand about your brother, Tris? It’s when he gets patient with you. He gets this saintly look on his face and you know that any moment now he’s going to forgive you. For something he’s just done.

Oh, how I remember that vile, nasty saintly expression of condescending forgiveness for a wrong I didn’t commit. I also remember the hand-written notes left for me by my parent, forgiving me for the abuse I forced them to commit against me. Abuses like throwing a sheaf of my schoolwork across the room and yelling at me for half an hour. My crime: Ending a paragraph with a question, instead of a statement. Isn’t that awful of me!? (sarcasm)

Ah, but I was forgiven by the next morning.

Scapegoat

As it turns out, the concept of a “scapegoat” is thousands of years old, with a rich history.

In Leviticus 16:21 it says, “And Aaron shall lay both his hands upon the head of the live goat, and confess over him all the iniquities of the children of Israel, and all their transgressions in all their sins, putting them upon the head of the goat, and shall send him away…into the wilderness…”

Most ancient religions have some version of this. Projection of sin onto a sacrifice.

Projection makes us the scapegoat, wearing on our heads the iniquities of the narcissist. Our self-esteem and innocence are sacrificed on the altar of their ego, so they can go on their merry way.

Right Runs Deep

I would argue this shows how deeply and profoundly narcissists believe in right vs. wrong.

If they don’t know right from wrong, why project?

If they don’t care about right vs wrong, why project?

If they don’t have a functioning conscience, why project?

It could be for expediency. To avoid the ramifications of their actions. To keep the smooth sailing going.

But, how would they know what needed to be projected, if they don’t know right from wrong?

They know, oh, how they know!

And therein is their undoing.

Atonement

I dunno about you, but most of my best ideas happen in the shower. (Or bed.) I’ll never forget the “discovering fire” moment under a hot shower when something went “click.” I finally got it. Let’s hope I can articulate it to you.

The Old Testament concept of the scapegoat comes full circle in the New Testament concept of atonement.

There is one and only one setting where projection actually works! We get to project our sins onto Christ. It’s okay. Go ahead and project. And in exchange, through the shedding of His innocent blood on the Cross, His perfection becomes ours.

“For he hath made him to be sin for us, who knew no sin; that we might be made the righteousness of God in him” (II Cor. 5:21).

That’s why He came. He didn’t just come to be a great moral teacher. As C.S. Lewis wrote in the famous “trilemma” passage of Mere Christianity:

I am trying here to prevent anyone saying the really foolish thing that people often say about Him: ‘I’m ready to accept Jesus as a great moral teacher, but I don’t accept his claim to be God.’ That is the one thing we must not say.

A man who was merely a man and said the sort of things Jesus said would not be a great moral teacher. He would either be a lunatic — on the level with the man who says he is a poached egg — or else he would be the Devil of Hell. You must make your choice. Either this man was, and is, the Son of God, or else a madman or something worse. You can shut him up for a fool, you can spit at him and kill him as a demon or you can fall at his feet and call him Lord and God, but let us not come with any patronising nonsense about his being a great human teacher. He has not left that open to us. He did not intend to.

Wow!

Profundity

Pretty deep stuff, huh? The age-old battle between Right and Wrong. Mankind’s deep-seated need to be in the right, even if it means doing wrong (i.e., projecting) to maintain the appearance of being in the right. And the profound paradigm of scapegoating, sacrifice, and atonement.

Like the pieces of a puzzle, it all holds together. It makes sense. As C.S. Lewis wrote in The Lion, the Witch and the Wardrobe:

“Logic!” said the Professor half to himself. “Why don’t they teach logic at these schools?”

Above all, narcissists are logical. And, in a twisted way, projection is also logical. Twisted. Sad. Wrong…but still logical.

This guest article originally appeared on PsychCentral.com: Projection: Narcissists’ Favorite Trick

Image via PublicDomainPictures / Pixabay.

A Resident’s Reflections from within the American Board of Psychiatry & Neurology (ABPN) (29 May 2016)

Most residents have a very limited understanding of the American Board of Psychiatry and Neurology (ABPN), and understandably so.

I myself thought of the ABPN as a large, bureaucratic, governmental organization that spent most of its time siphoning money from hapless residents in order to administer board certification examinations. I was therefore surprised, and a bit skeptical, when my chairman asked me if I was interested in a three-month administrative fellowship at the ABPN during my final year of neurology residency. Although I had an interest in administration, I was hesitant because I was unfamiliar with the fellowship’s objectives; it would be the first year it was offered.

Three weeks before my administrative fellowship was to begin, a thick binder arrived containing a detailed, day-by-day schedule and multiple articles, including 10 Most Common Mistakes Made by Administrators and Understanding Financial Statements of Not-for-Profits. I also received three books on self-improvement: Drive, Talent is Overrated, and, ironically, Being Wrong. Reviewing the schedule, I was surprised to see that I would be spending, collectively, almost four weeks traveling (including internationally).

The binder clearly spelled out the objectives of the fellowship. I was expected to learn about the mission and structure of the ABPN as a whole, and in particular the fiduciary responsibilities of the board of directors. I was to have scheduled meetings with the senior staff to appreciate their role in the day-to-day workings of the ABPN. In addition, I was expected to complete a research project, suitable for submission for presentation and publication. Finally, I was to have weekly meetings with Dr. Larry Faulkner, the President and CEO of the ABPN. It would be these weekly meetings that I would find most useful, as they provided perhaps the greatest educational value of the entire fellowship.

About the ABPN

Prior to my arrival at the ABPN, I learned that it had been formed by psychiatrists and neurologists in 1934 in order to distinguish qualified specialists from those offering neurological or psychiatric care without adequate experience or training.

Rather than a large, bureaucratic organization, the ABPN is relatively small. It consists of fewer than 40 staff, of whom only one is a salaried physician (Dr. Faulkner). The ABPN sitting directors essentially volunteer their time. I quickly learned that the ABPN does not have members (unlike the American Academy of Neurology (AAN) or the American Psychiatric Association (APA)) and is an organization that is primarily responsible to the American public. Its main mission is to assure the public that ABPN diplomates are competent to practice neurology and psychiatry. It does this by first certifying candidates who have graduated from accredited residency programs and by developing methods to assess whether practicing physicians continuously keep up with the rapid pace of medical advancement. Initial certification for neurologists and psychiatrists now consists of a computer-based examination.

Interestingly, the ABPN is also a driving force behind residency education. Recently, the Accreditation Council for Graduate Medical Education (ACGME) decided that it would not accredit additional combined training programs. Instead of dissolving these programs (in which almost 200 residents are currently enrolled), the ABPN decided to review and approve these combined training programs, which include neurology-internal medicine and neurology-psychiatry. While the ACGME establishes minimal requirements for neurology and psychiatry residency programs, the ABPN establishes the prerequisites a resident must meet in order to be eligible to become board certified. Often the ACGME follows suit. For example, initially there was no ACGME requirement that a graduating neurology resident see a single critical care patient. The ABPN determined that an intensive care unit (ICU) clinical skills examination (CSE) would be required in order to apply for an initial board certification exam. Shortly thereafter, the ACGME adopted the ICU CSE as a requirement for residency accreditation.

A recent focus of the ABPN is supporting the education and research activities of academic faculty. Given the increasing clinical demands on faculty, I noted that the ABPN grants for innovative education projects placed particular emphasis on ensuring that faculty had protected time to complete those activities. The ABPN will shortly begin another grant program to support research on issues relevant to its mission. In both of the ABPN grant programs, awardees are selected by panels of neurologists and psychiatrists that include members of the academic community with established expertise in education or research.

Crucial Issue Forums

The ABPN has also begun to host a yearly “Crucial Issue Forum”. These Forums focus on pressing issues central to the fields of neurology and psychiatry and are used to obtain feedback from professional organizations and others on those issues. Experts in the field, including program directors, department chairs, representatives of national professional organizations, residents, and fellows are invited.

The most recent Forum focused on residency education and included a discussion about whether the process of the CSEs should be modified to produce a more meaningful educational experience. A growing body of literature has suggested that the CSEs are not as effective as they might be. These sentiments were echoed by several residents, including myself. After attending, it became clear to me how seriously the ABPN took this feedback. Had the attendees of the Forum voted for the ABPN to conduct site visits to monitor the CSEs at every institution, it is likely that we would have site visits. Conversely, if a clear consensus had been to abolish the CSEs, it is likely that they would no longer exist.

My fellowship

A requirement of the fellowship is a research project with the expectation of publication. Several opportunities exist towards this end, including use of the ABPN’s wealth of data on its initial certification examinations, maintenance of certification examinations, and CSEs. Given my preexisting interest in both headache and education, I surveyed adult neurology residency program directors and chief residents to determine their views on the appropriate amount of headache education in neurology residency. The goal of this project was to determine whether headache education had significantly increased from a decade ago, when a similar survey had been done. I had the opportunity to present the results to the senior staff of the ABPN as well as at the American Headache Society Annual Scientific Meeting in June 2015. The manuscript was accepted for publication in Headache: The Journal of Head and Face Pain.

The most memorable moments of my fellowship were spent in Dr. Faulkner’s office for our weekly 10 o’clock meetings. These one-on-one meetings typically lasted one to two hours. Rarely was there a set agenda. We discussed everything from Dr. Faulkner’s top ten rules for financial investment to the inexact science of hiring employees. We talked about the slim evidence base behind maintenance of certification (MOC) and the impetus to have an MOC program despite the lack of strong evidence. We explored why continuing medical education (CME) has not met the same opposition as MOC Part IV, despite the fact that CME is the most time-intensive component of MOC.

Beyond the formal curriculum, readings, and scheduled meetings, a large part of the fellowship consisted of informal education. Every moment of downtime with Dr. Faulkner was an opportunity for me to learn about the process of becoming a successful administrator. While we waited for our flights, we would often talk about everything from family to how important it is to take care of oneself physically and mentally. As Dr. Faulkner put it, “If you fall apart, everything falls apart. If you’re not healthy, you won’t be able to fulfill your family, social, or work responsibilities.” He impressed upon me the importance of being on the same page as one’s spouse and family. We discussed the value of doing a few tasks, but doing them well. I understand now that the real value of this unique experience truly lay in the in-depth immersion I had into all things administrative, from the ABPN, to academic departments, to professional organizations, and even to my family.

Finally, the fellowship gave me the opportunity to meet with some of the most influential leaders in neurology and psychiatry. It was eye-opening to see the work that goes on behind the scenes at organizations like the AAN, APA, and ACGME. Despite their different responsibilities, each of these national organizations and their respective leaders had the singular goal of furthering the fields of neurology and psychiatry through focused initiatives. I began to appreciate the extraordinary effort that went into the large annual professional meetings. I spent a day at the AAN in Minneapolis learning about their different sections and the spectrum of resources they provide for their members. It was humbling to realize that I could probably spend my whole life on the AAN website and still not be able to take advantage of all the resources they have to offer.

In the ABPN I found an organization that not only tried to uphold the standards that make our profession credible, but was also dedicated to the advancement of neurology and psychiatry education. In Dr. Faulkner I found a leader who tried to be fair. He cultivated the potential of those around him into a kinetic energy that translated into a collective success. Much of his time was spent advocating for the best interests of neurologists and psychiatrists against those who would like to propose greater physician scrutiny and regulation.

The mounting pressures of lower reimbursement in the setting of higher patient volumes, the oft-repeated mantra of ‘Do more with less’, and the overwhelming paperwork often overshadow our initial motivation to become physicians. More than anything else, my time at the ABPN and my interaction with the leaders in neurology and psychiatry have given me hope and optimism that we can find our way through the pressured maze of bureaucracy and increasing scrutiny to an era where we will be able to provide the best care for our patients while seamlessly documenting the quality of our work. There are multiple initiatives towards this end, not the least of which is the commitment and support of leaders in neurology to the AAN Axon Registry. In summary, my experience at the ABPN taught me that our future is in our hands and that our collective involvement and effort will be crucial to effectuate the outcomes we desire.

References

ABPN Awards Program. Faculty innovation in education award. American Board of Psychiatry and Neurology website. Accessed December 21, 2015.

Aminoff MJ, Faulkner LR. (2012). The American Board of Psychiatry and Neurology: Looking Back and Moving Ahead. Arlington, VA: American Psychiatric Publishing.

Kay, Jerald. (1999). Administrative mistakes. In Handbook of Psychiatric Education and Faculty Development. Washington, D.C.: American Psychiatric Press.

Schuh, L., London, Z., Neel, R., Brock, C., Kissela, B., Schultz, L., & Gelb, D. (2009). Education research: Bias and poor interrater reliability in evaluating the neurology clinical skills examination. Neurology, 73(11), 904-908. DOI: 10.1212/WNL.0b013e3181b35212

Image via StartupStockPhotos / Pixabay.

On Mass Murderers http://brainblogger.com/2016/05/11/on-mass-murderers/ http://brainblogger.com/2016/05/11/on-mass-murderers/#respond Wed, 11 May 2016 15:00:40 +0000 http://brainblogger.com/?p=21709 We observe the modern epidemic of mass murder in this country and are shocked. We can’t understand who these (mostly young) men are who take the lives of innocents for no apparent reason. What could possibly drive them to do it?

Seeking reassurance, we search for what makes these murderers different from us. In the wake of yet another horrific mass shooting, we must reassess our understanding of the underlying cause.

We conclude that these killers are mentally ill. Legislators devise laws to prevent people who have been committed to psychiatric hospitals or otherwise judged mentally ill from owning guns. Mental health experts demand more psychiatric services to identify and treat them. Even Dear Abby writes, “The triggers that have led to the plague of mass shootings in this country are the result of individuals with severe psychosis (Bangor Daily News, 11/23/2015).” It is satisfying to believe that we can identify the mentally deranged people who commit these crimes, and that they are not like us.

In Europe and much of the rest of the world, there is another group of slaughterers called Islamic Jihadists. When the recent events in Paris unfolded, the world watched horrified as a small cell of ISIS terrorists indiscriminately gunned down scores of random people. We see this as a political-religious act by radicalized Islamists, not a product of mental illness. But how much difference is there, really, between American mass murderers and foreign jihadist ones?

A recent article in The New Yorker by Malcolm Gladwell (“Thresholds of Violence,” 2015) analyzed the genesis of school shooters in the US. Over the past 20 years, there has been a long series of cases following a similar pattern: one or two young men go into unprotected schools and randomly start shooting unarmed students and teachers. Gladwell points out that since 2012, there have been 140 school shootings in America. Some of these young men, such as Kip Kinkel, had bizarre paranoid fantasies and can be identified as psychotic, but others, such as Eric Harris of Columbine fame, are more appropriately described as psychopaths. Some came from chaotic and violent homes, but others were loved by their families and untraumatized. Then there was Adam Lanza. What are we to make of him?

In December 2012, 20-year-old Adam Lanza shot his mother, then went to Sandy Hook Elementary School, where he murdered 20 children and six adults.

Much of what is known about his early life was reported by Adam’s father Peter to Andrew Solomon of The New Yorker. Peter described Adam as exhibiting odd behaviors such as sensory hypersensitivity and social dysfunction from an early age. At age 13, a psychiatrist diagnosed Adam with Asperger’s syndrome and recommended he be home-schooled. In his high school years, he became increasingly isolated and distant from his parents. The only emotion he displayed to them was distress in connection with having to socially engage. Perhaps distracted by the Asperger’s diagnosis and unable to penetrate Adam’s secrecy, neither his parents nor mental health professionals were alert for signs of impending violence.

From the clues he left behind, Adam’s emotions alternated between anger and despair. Anger may have been the only social emotion he was capable of comprehending. His anger was reflected in his increasing fascination with mass murder, which he expressed only online. In his late teens, he spent much of his time editing Wikipedia entries on mass murderers. He was aware that he was failing in life and had no future. As Solomon put it, “The more Adam hated himself, the more he hated everyone else.”

It seems reasonable to speculate that his final act was to take the life of his mother, whom he blamed for his problems, and then the lives of children who had the promise he could never realize. If we are to look for causes of Adam’s murderous behavior, they do not lie in Asperger’s or mental illness. It seems clear enough that the key to Adam and the common element behind mass murders is rage.

For the Jihadist, the rage is religious and political. The non-believer is evil and an enemy. He must be destroyed or enslaved. The reward for killing the other is a place in heaven. For a mass murderer like Adam, the rage is interpersonal. It is against an enemy who is, in some way, oppressing or preventing the killer from getting what he deserves. The reward is achievement and fame. In either case, compassion has no place.  

Gladwell’s formulation emphasizes the under-appreciated power of situational or social factors in determining our behavior. He invokes a theory of social thresholds: each of us has a certain threshold for engaging in a given action, be it violent or benevolent. Take, for instance, a riot. One person in the mob with a very low threshold (perhaps due to a high level of anger) throws the first rock, followed by someone with a slightly higher threshold. A social contagion may then set in, where individuals who would never have considered rioting get caught up and become participants. If there is sufficient social reinforcement, some people become mass murderers.
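
This threshold idea traces back to Mark Granovetter’s classic model of collective behavior, which Gladwell’s article draws on. A minimal sketch in Python (the thresholds here are invented, purely for illustration) shows the knife-edge quality of such cascades:

def cascade(thresholds):
    # Each person's threshold is the number of others who must already
    # be acting before that person will join in.
    joined = 0
    while True:
        # Everyone whose threshold is met by the current crowd joins.
        now_acting = sum(1 for t in thresholds if t <= joined)
        if now_acting == joined:
            return joined
        joined = now_acting

# A crowd of 100 where person k waits for k others: one instigator
# with threshold 0 tips the entire crowd into rioting...
print(cascade(list(range(100))))     # -> 100
# ...while the identical crowd minus that single instigator stays calm.
print(cascade(list(range(1, 101))))  # -> 0

The toy model makes the theory’s point vivid: the same crowd can produce total violence or none at all depending on one or two low-threshold individuals, which is why each new school shooting can lower the threshold for the next.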

One of the most famous social psychology experiments, Zimbardo’s Stanford Prison Experiment in 1971, showed the power of social influence and unchecked authority to turn ordinary people into malevolent prison guards or victimized prisoners. Zimbardo assembled a random group of seemingly normal young men and arbitrarily assigned them to the roles of guards or prisoners. Then, in an elaborate piece of theater, he created an isolated prison environment in which the men were told to follow the rules Zimbardo created. The astounding result was that both groups did not just play-act, but actually became the roles they were simulating. As Zimbardo described it, “the power that the guards assumed each time they donned their military-style uniforms was matched by the powerlessness the prisoners felt when wearing their wrinkled smocks with ID numbers sewn on their fronts.”

Although they stopped short of actual physical abuse, the guards behaved cruelly and with little regard for their prisoner-peers’ humanity. Even the kindly doctor Zimbardo assumed the role of prison supervisor. He was blind to the abusive behavior his social experiment had created until his future wife confronted him from an outsider’s perspective.

What Zimbardo showed was that under the right social circumstances, individuals with generally high thresholds for violent action can become Nazi Gestapo or Abu Ghraib prison guards. Unfortunately, this is the dark side of human social evolution that we have seen played out throughout history. There is nothing unusual in the phenomenon of one group of humans defining outsiders as others who do not warrant compassion or even respect.

The commonality between mass murderers and Islamic Jihadists is that both groups have low thresholds for joining in on unspeakable violence. They then join or are influenced by a social group that glorifies violence. Jihadists operate in response to the social contagion of religious extremism which grows with each atrocity. School shooters and similar murderers are increasingly influenced by a virtual social group and a script laid out by their predecessors such as the Columbine killers. Adam’s social isolation and rage lowered his threshold for joining a virtual group for whom murdering innocents becomes a heroic act.

We do not need to invoke mental illness. A personal sense of rage and social contagion is explanation enough.

References

Gladwell, Malcolm. (Oct. 19, 2015). “Thresholds of violence: How school shootings catch on.” The New Yorker.

Solomon, Andrew. (Mar. 17, 2014). “The reckoning: The father of the Sandy Hook killer searches for answers.” The New Yorker.

Zimbardo, Philip. (2008). The Lucifer Effect: Understanding How Good People Turn Evil. Random House.

Image via geralt / Pixabay.

Never Say Die – SELF/LESS from Science-Fiction to -Fact http://brainblogger.com/2015/07/10/never-say-die-selfless-from-science-fiction-to-fact/ http://brainblogger.com/2015/07/10/never-say-die-selfless-from-science-fiction-to-fact/#respond Fri, 10 Jul 2015 10:50:01 +0000 http://brainblogger.com/?p=20170 In SELF/LESS, a dying old man (Academy Award winner Ben Kingsley) transfers his consciousness to the body of a healthy young man (Ryan Reynolds). If you’re into immortality, that’s pretty good product packaging, no?

But this thought-provoking psychological thriller also raises fundamental and felicitous ethical questions about extending life beyond its natural boundaries. Pondering the moral and ethical issues that surround mortality has long been a defining characteristic of many notable stories within the sci-fi genre. In fact, Mary Shelley’s age-old novel Frankenstein, while having little to no direct plot overlap with SELF/LESS, is considered by many to be among the first examples of the science fiction genre.

Screenwriters and brothers David and Alex Pastor show the timelessness of society’s fascination with immortality. However, their exploration also reflects how rapidly the genre is drifting from its traditional roots. This shift can be defined, on the most basic level, as the genre losing its implied fictitious base. Sure, we have yet to clone dinosaurs, but many core elements of beloved sci-fi films are growing well within our reach, if not already part of our everyday lives: from Luke Skywalker’s prosthetic hand in Star Wars Episode V: The Empire Strikes Back (1980), to the Sentinels of The Matrix (1999), to Will Smith’s bionic arm in I, Robot, the visions of past science fiction films help define our current reality.

The resulting script, the Pastor brothers’ own creative take on this timeless theme, is what grabbed the industry’s attention: after first being ignored, it eventually made the 2011 Black List of best unproduced screenplays.

Director Tarsem Singh had been looking tirelessly for the right thriller and with SELF/LESS he found his match. The result of this collective vision is a great example of a genre’s journey from science-fiction to -fact.

Damian Hale (Kingsley) is a billionaire industrialist dying of cancer. Lucky for him, he hears of a procedure called “shedding,” a newfangled process by which one transfers one’s consciousness into a different body. Albright (Matthew Goode of THE GOOD WIFE) is the dangerously charismatic and brilliant mind behind the secret organization that, for a dozen million or so, can grant this gift of life to folks like Damian. Albright’s never-say-die motto is an offer hard to refuse, and Damian certainly does not refuse it. While touring the mysterious medical facility, Albright tells Damian he will be receiving “the very best of the human experience.” Albright describes the new body (Reynolds) as an “empty vessel” whose sole purpose is to provide new life to those who can afford it. Damian is sold.


Damian goes through his “shedding” procedure, which has a shockingly chilling realism, resembling a super fancy MRI machine. Upon awakening he finds himself in his new body to which he slowly adjusts, after getting over “that new body smell.”

Ryan Reynolds (in foreground) stars as Young Damian and Ben Kingsley (in background) stars as Damian Hale in Gramercy Pictures’ provocative psychological science fiction thriller Self/less, directed by Tarsem Singh and written by Alex Pastor & David Pastor. Credit: Alan Markfield / Gramercy Pictures.

After a bit of time enjoying his healthy, attractive new body, Damian begins to experience what he is told is a harmless side effect: hallucinations. What he sees in these episodes – a woman, a young girl, a home, a family – all begins to feel too real. Soon, Damian’s suspicions grow into certainty: these are not random hallucinations; they are images of a past that really happened. In other words, they are memories. But if the new body was supposed to be an “empty vessel,” whose past is Damian remembering?

Without providing too much of a spoiler (but just in case… SPOILER ALERT!), Damian discovers that his new body was never an “empty vessel” created in a lab. In actuality, his new body had a whole life previous to the procedure. Soon the question arises, both for Damian and for the viewer: does the man who once owned all these memories, who once had a wife, a daughter, and an entire life, have the chance to regain them?

This discovery leads SELF/LESS into action-film territory, which it handles quite effectively, complete with shoot-outs, hand-to-hand combat, car chases and yes, even explosions. Like any really good work of science fiction, its issues are packaged in an exciting plot buoyed by plausible — albeit futuristic — science.

This is among the reasons SELF/LESS works. It brings up many meaningful issues regarding science and immortality. If people can be saved from disease, age, and death, will this only be available to those wealthy enough to afford such a procedure? Would there be a selection process where only those deemed “superior” would be given eternal life?

And what would that mean to us all as a society? If Einstein were still alive today, would he have unified gravity with the other forces, allowing us all to be traveling around in time machines by now? Would we be receiving the iPhone 44 by now if the consciousness of Steve Jobs could have been preserved in a healthy body?

How society might have been affected if anything had gone differently is an impossible question to answer. However, I believe the deeper question does hold an answer:

Is there an alternative to recycling the genius of the past and of those we are currently familiar with? Can we allow for the possibility of a new genius, perhaps a mind that will impact society beyond any realm we can currently fathom? Essentially, can we allow new, fresh perspectives in new, never-before-worn “vessels” to make their impact, even without the assurance of progress?

For the record, I vote for the latter. While I admit that is my opinion (even as I admit to believing it the only correct opinion), we must all encourage ourselves to keep asking these questions, and never be so presumptuous as to think one can ever be fully satisfied with the belief that one has discovered every answer or postulated every notion.

Immortality has been the stuff of dreams (and movies and books and plays) going back too far to define an exact origin. In addition to the aforementioned FRANKENSTEIN, there are DRACULA (1931), SLEEPER (1973), and even STAR WARS EPISODE V: THE EMPIRE STRIKES BACK (1980), in which Han Solo is frozen in carbonite; all have their unique takes on this topic.

In real life, Dr. James Bedford, a psychology professor at the University of California, became the first person to ever be cryogenically preserved on January 12, 1967. He even left money for a steel capsule and liquid nitrogen in his will.

Many of us have heard stories, such as the urban legend that Walt Disney had himself frozen. (By the way, this happens to be only a legend, so if you were wondering, know that it is false.)

While there are the famous instances of the known few, as well as the even more infamous myths, there have been far more real-life attempts at immortality than many may know of. Perhaps they have gone unnoticed because these attempts did not involve world-renowned “geniuses” or people of great wealth and fame. I cannot say anything for certain, but I can share a personal note here.

My father’s cousin Steven, back in 1968, well before I was born, was cryogenically frozen. Steven had been ill all his 25 years, had found an ad for the Cryogenics Society in a science fiction magazine, and had arranged to be preserved when he died on an operating table. I won’t go into the details here, but let’s just say it did not work out (it turns out it’s pretty expensive to keep a human being frozen, and being frozen after you’ve already died kind of defeats the purpose). The way the practice freezes a family’s hope that something may happen to bring their loved one back can be more than just a bit cruel. Yet the fact that the attempt proved exceedingly wrong in this case does not make the science behind it inherently wrong. In fact, it is because of this story that I believe, more than most, that we must engage and ask questions now, and discuss the ethical, moral, and pragmatic ramifications now.

I had the opportunity to sit down with some of the cast and crew to discuss the film and some of the issues it raises.

David and Alex Pastor opened up about how their creative process can often be motivated by their own fears, wishes and predictions. David pointed out that the desires present in Damian are feelings that can resonate with everyone.

I feel that everybody can relate to ‘I wish I had more time’. We wanted to write about a powerful character who has everything but whose body is failing him and who then finds that his money might be able to buy him a new life.

Natalie Martinez, who plays Madeline, a crucial character to the story (sorry, can’t tell you why – you’ll have to see for yourself), told me how she enjoyed doing nearly all of her own stunts. I believed her too: she showed me her arm, pointing to her newest bruises accrued in her latest project, WARRIOR, where she plays a female mixed martial arts fighter.

The film made a wise decision in how it represented the technology at the core of the story. This was not a film intended to provide a lesson on the technologies of the future. The filmmakers chose NOT to pack the film with elaborate, made-up scientific explication. Other films, such as 2014’s LUCY, try using data from the real world to explain the premise of their stories, but this typically only shows a complete lack of faith in the film’s storytelling abilities. Thankfully, SELF/LESS doesn’t fall into that trap.

SELF/LESS makes no unnecessary attempt to have its lead character, Damian, serve as an example of our collective scientific and technological potential. To do so would have been impractical, distracting, and ethically irresponsible filmmaking. When a film pretends its science is all actual right now — rather than a “science fiction” that takes off from a base of facts — it seeks to have its audience believe in its story for reasons other than filmmaking craftsmanship. That leads to serious misconceptions about science. SELF/LESS, while grounded in scientific fact, doesn’t need to pretend immortality is a current reality: you believe its story anyway.

Dr. Charles Higgins, an associate professor of neuroscience at the University of Arizona and head of the renowned Higgins Lab, when asked whether the concept of transferring one’s consciousness from one body to another is possible, replied:

It is sure to be science future [not fiction]. It’s just a question of whether it’s 30 years or 300 years.

When I asked David and Alex Pastor how they chose to balance the technological realities with their creative vision, they responded that although the “key” to the plot and story is:

A revolutionary new technology, we decided we would not get bogged down in technicalities and would keep our story as more of a fable than anything else. It was the moral consequences that interested us. The science fiction that [we] like to write explores moral and ethical issues… ideas tied in to universal themes.

I agreed and appreciated hearing that. Whenever a film can instigate thought and raise questions, the result is typically an effective film; even more so when a film can do that within the confines of an action-packed thriller, demanding your visceral attention as well as your active, intellectual engagement. In the end, what makes SELF/LESS a self-aware, unselfish, ethical piece of effective entertainment is that it uses action as a device to propel its moral and ethical questions.

Disruptive Intelligent Machines http://brainblogger.com/2015/06/27/disruptive-intelligent-machines/ http://brainblogger.com/2015/06/27/disruptive-intelligent-machines/#respond Sat, 27 Jun 2015 14:00:48 +0000 http://brainblogger.com/?p=19880 Intelligent machines are very likely to become as popular as smartphones. Still, the intelligence of a machine remains a debatable term, and the first examples are as yet incipient prototypes. In truth, it is difficult to conceive of a technology more disruptive than truly intelligent machines.

In a few years, “intelligent personal assistants” such as Siri and the new Hound will likely be seen as archaic, pre-intelligent forms of technology.

According to Harvard professor Clayton Christensen, some innovations labelled as disruptive can significantly shape networks and markets through displacing older technologies. Christensen makes the industry his focus, speaking of disruption to markets, although of course we can see such technological paradigm shifts as socially and personally disruptive as well.

For example, personal computers replaced typewriters when they became widely available. They not only replaced the existing function of the previous technology but provided a range of entirely new functions such as fast digital text processing, integrated applications and computer programming.

We can see a similar upheaval, both in the market and in how the technology is used, in the replacement of traditional phones by mobiles.

Some authors challenge the notion that one wave of technology disrupts the last, arguing that it is a far more patchy process in which old technologies are not completely displaced by new ones but enhanced, shaped or transformed, continuing to exist in new forms of their own.

A fairly kitsch example of this can be seen in the expensive but stylish USB Typewriter, attached to a monitor or tablet, for those who still want to feel the click of those old-fashioned keys as they write. In fact, old-style handsets also exist for use with mobile phones.

Disruptive innovations can be seen across all fields where tech plays a role, such as education, medicine, and law, frequently raising ethical issues and open questions.

In the field of robotics and medicine, new discoveries in tele-surgery, virtual reality and surgical simulators might revolutionise the ways surgery is done and enable remote surgery to become a reality.

Truly intelligent machines might even revolutionise the justice system in time, when artificial intelligence (AI) is truly able to construct arguments based on efficient access to (and organisation of) information and is able to embark on a process of learning – one of the greatest challenges in the field of artificial intelligence research.

It is widely expected that in the future intelligent machines will replace human workers across an increasing number of fields as their production becomes less costly and the machines themselves more efficient and more reliable, posing major challenges both for companies and for society at large.

In 1997 a computer beat a chess champion for the first time in the famous Kasparov vs. Deep Blue match, and computers have reigned supreme in chess ever since. In recent years scientists have taken some major strides toward achieving artificial intelligence. The triumph of machine intelligence over human intelligence may not be as far in the future as we imagine.

References

Christensen, C. (1997). The Innovator’s Dilemma. Boston, MA: Harvard Business School Press.

Satava, R.M. (2002). Disruptive visions. Surgical Endoscopy, 16(10), 1403-1408. PMID: 12170350

Image via Luisma Tapia / Shutterstock.

Top Five – 3D Printed Products http://brainblogger.com/2015/06/13/top-five-3d-printed-products/ http://brainblogger.com/2015/06/13/top-five-3d-printed-products/#respond Sat, 13 Jun 2015 14:00:13 +0000 http://brainblogger.com/?p=19806 3D printing is currently used across a variety of sectors to produce items with greater precision, automation, and speed. Here I have compiled a list of some of the most interesting items that are already possible to 3D print and that could have important implications for our future.

1. Architecture

One of the areas in which 3D printing has become most relevant is architecture. For many, it seems a dream come true that architects can digitally design something that becomes a physical reality just by pressing a button. So far, 3D printing in this industry is mostly limited to architectural models. However, the Chinese have already gone further and printed houses at a cost of US$5,000 each.

2. Food

This is, for many sci-fi lovers, one of the ultimate 3D printouts, one we have looked forward to for decades after seeing similar concepts in astronaut and futuristic stories, films, TV series, and cartoons. Nowadays, it is possible to have fun printing chocolates and sweets in shapes such as stars, snow cones, little houses, or faces. 3D printing of food is still far from a reality as a solution for hunger or as food for astronauts: the cost is very high and the technology is still in development.

However, this could have strong implications for our everyday lives in the future since it implies that any source of nutrients that can be liquidised or powdered could be used for mimicking food as we know it. Fancy a steak with a jelly texture made of insects? It is already possible to print mashed food visually resembling the real thing in its original shape, designed for people who have trouble swallowing solids.

3. Weapons

One of the most controversial areas of 3D printing is the printing of guns. Although the cost is very high in comparison to purchasing the real thing, and the first prototypes are plastic gun parts, this could create readier, unregulated access to guns once the technology becomes mainstream and costs come down.

4. Medicine

3D printing is a very relevant technology in the field of medicine. It is already being used to print tissues from biodegradable gel and human cells, creating skin, ears, noses, and fingers for experimental transplants designed to avoid rejection by the body. It is also possible to print metal body parts to create prosthetics.

Although this implies a huge step for science it also brings with it philosophical implications related to the nature of man and machine. With technologically improved body parts we may become cyborgs before we know it.

5. 3D Printers

And to answer a classic question regarding 3D printing: Yes, it is possible for a 3D printer to print an iteration of itself. Who knows what the future may hold if we finally reach true AI and sentient machines can reproduce themselves at will.

References

Haraway, Donna J. (1991). A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century. In Simians, Cyborgs and Women: The Reinvention of Nature (pp. 149-181). New York: Routledge.

Mironov, Vladimir, et al. (2003). Organ printing: computer-aided jet-based 3D tissue engineering. Trends in Biotechnology, 21(4), 157-161.

Image via belekekin / Shutterstock.

Death of the Screen? http://brainblogger.com/2015/06/08/death-of-the-screen/ http://brainblogger.com/2015/06/08/death-of-the-screen/#respond Mon, 08 Jun 2015 14:00:56 +0000 http://brainblogger.com/?p=19746 In 1995, Sherry Turkle published Life on the Screen, addressing some of the aspects of human interaction through computers. It has been two decades and screens now run the world. The BBC reported recently that the average Briton spends more time looking at a screen than they do sleeping. With all that said, it just might be that the era of the screen is nearing an end.

Miniaturization has been the trend for many years, as digital technology shrinks down and down towards the nanoscale. In the last few years we’ve seen this in the high street with smaller mobile phones and hand-held video consoles appearing on the market.

I can remember how uncomfortable it seemed as a child to watch a TV programme on one of those little old televisions that people used to have in their kitchens or even in their cars. Yet now so many of us are glued to our small phone screens, not only looking up our contacts’ phone numbers but visiting websites, taking, viewing, and sharing photos, watching videos, and reading the news, ad nauseam.

In recent years, some phone makers like Samsung and Apple have bucked this trend, offering ever bigger screens in order to enhance the visibility of their applications.

Some of the latest models, such as the Samsung Galaxy S6 and Apple iPhone 6, are rather slim but have screens up to three times the size of those on their earlier models.

The big screen itself – the cinema – has also erupted in size, with huge screens now designed to enhance the viewer’s experience. Size is not the only thing that has changed: sound is louder and more enveloping, the 3D experience is getting more and more popular, and some cinemas now even offer a 4D experience, with jumping and vibrating seats that imitate the motion of the film.

Perhaps in part, this is all an effort to combat audiences lost to easy streaming of movies online and to dedicated film channels over digital television. Perhaps it is rather an unease over the medium itself. Cinema will be subject to changes in the nature of technology, arguably faster than the industry itself can innovate. The screen itself may be on the way out.

Here is where augmented reality and virtual reality step in. Augmented reality, with its potential to combine virtual and physical objects – the real and the virtual juxtaposed in one environment – stands to significantly transform our relationship both to cinema and to screens as such.

When information can be projected or otherwise overlaid on physical reality, why would we need screens? An interesting example is the Cicret project which aims to create a bracelet projecting a touchscreen onto a person’s arm with all the facilities offered by a phone screen available at the touch of a finger.

Look out for my next article, Stepping Through the Screen, for a look at how virtual reality is set to transform cinema and the film industry, given time. When you can step inside the action and view it from any angle, you can truly be said to have transcended the screen.

References

Turkle, S. (1995). Life on the Screen: Identity in the Age of the Internet. New York: Touchstone.

Ohta, Y., & Tamura, H. (2014). Mixed Reality: Merging Real and Virtual Worlds. Springer. ISBN: 9783642875144

Image via dezignor / Shutterstock.

Generation Z – Replacing the Millennials http://brainblogger.com/2015/05/23/generation-z-replacing-the-millennials/ http://brainblogger.com/2015/05/23/generation-z-replacing-the-millennials/#respond Sat, 23 May 2015 12:00:40 +0000 http://brainblogger.com/?p=19679 The Millennials, also known as Generation Y, are often described as a generation of sociable, multitasking, and confident people. They are team-oriented, they make advanced use of technology, and they are one of four generational workforces now colliding in the workplace, each with its own ideas, values, and behaviors.

There have been a number of labels used to describe different generations based on collections of loose generalizations. According to some authors, the Traditionalists were born in the 20s up to the early 40s, and some of their characteristics include a tendency towards conformity, self-sacrifice, patience and loyalty.

The Baby Boomers, born in the mid-40s up to the early 60s are described as idealists, eager to learn, keen on personal growth, and they tend to be over-achievers.

Generation X, born between the mid-60s and mid-80s, are thought to be more informal, independent and very interested in technology. This generation has been criticised for their apathy and lack of sense of responsibility.

The Millennials were born between 1985 and 1995/2000. Some authors are now referring to the rise of Generation Z, a rather unimaginative term for those born after 1995. Considering the different depictions of the Millennials and Generation Z, the line which divides them is blurry at best.

The Millennials have been in the spotlight of a large number of studies focused on management, marketing and work relationships. They are the first generation born in a world in which access to digital technologies and the internet plays a significant role in providing resources and opportunities.

Some authors consider Gen Z as raised in an environment of uncertainty driven by recession and new security measures brought in after 9/11. They are situated in a global world where differences of race, class and gender are shaped and challenged by strong accounts of inclusion and rapid flows of people, information, technologies, and financial resources.

Naturally, these children are the most technologically advanced generation. They are often intuitive users of digital technologies for communication, multimedia and design and their world is intrinsically connected to the use of the Internet through computers and mobile phones.

Some authors refer to Generation Z as “digital natives”, characterised by their heavy use of social media and their entrepreneurial, multitasking, community-oriented spirit. They are used to instantaneous communication, to messaging more than audio conversations, and to depending on technology for their social interactions and contact with the outside world.

The challenges they will face as the latest generation of young people are yet to be mapped out. Some enthusiasts consider this generation to have many positive traits while sceptics think that they are negatively influenced by an overuse of technology.

References

Clare, C. (2009). Generational Differences: Turning Challenges into Opportunities. Journal of Property Management, 74(5), September/October.

Payment, M. (2008). Millenials: The Emerging Work Force. Career Planning and Adult Development Journal, 24(3), Fall.

Image via Aleutie / Shutterstock.

The Unveiling of Sexism In Academia http://brainblogger.com/2015/05/15/the-unveiling-of-sexism-in-academia/ http://brainblogger.com/2015/05/15/the-unveiling-of-sexism-in-academia/#respond Fri, 15 May 2015 12:00:16 +0000 http://brainblogger.com/?p=19612 I like to think of science as a torchbearer for human evolution. Oh, well… Call me romantic. I am a woman after all, and I can’t help but have oh so many feelings.

I have previously shared my opinion on how there’s something really wrong with science these days, but recent events blew my mind in that regard, showing me how some “scientific minds” are still stuck in the Middle Ages.

Some of you might have come across this story: in a nutshell, an anonymous peer reviewer suggested that an article written by two female researchers should include at least one male author to validate the data. In the reviewer’s own words:

“It would probably be beneficial to find one or two male biologists to work with (or at least obtain internal peer review from, but better yet as active co-authors), in order to serve as a possible check against interpretations that may sometimes be drifting too far away from empirical evidence into ideologically biased assumptions.”

Yeah, this happened.

Fiona Ingleby is a research fellow in evolution, behavior and environment at the University of Sussex in the United Kingdom, and Megan Head is a postdoctoral researcher at the Australian National University; these authors submitted a paper to PLOS ONE that addressed gender differences in the transition from PhD to postdoc, showing through a survey of 244 people with a PhD in biology that there is gender bias in academia.

As Fiona Ingleby told Retraction Watch:

“We found that men finished their PhDs with more other-author papers than women, but no difference in number of first-author publications. Then we found that the number of publications affected how long it took PhD grads to successfully find a postdoc job – but this effect differed between men and women.”

As can be read in Fiona Ingleby’s Twitter feed and on Times Higher Education’s website, the review they received was absolutely blockheaded, unprofessional, and incredibly sexist.

The anonymous reviewer argued that “it could perhaps be the case that 99% of female scientists make a decision in mid-life that spending more time with their children is more important to them than doing everything imaginable to try to get one of the rare positions at the utter pinnacle of their field.” Condescending much?

“Or perhaps it is the case that only some small portion of men (and only men) have the kind of egomaniac personality disorder that drives them on to try to become the chief of the world at the expense of all else in life.” Sarcastic much?

Another incredibly insightful comment: “So perhaps it is not so surprising that on average male doctoral students co-author one more paper than female doctoral students, just as, on average, male doctoral students can probably run a mile race a bit faster than female doctoral students.” Dumb much?

“As unappealing as this may be to consider, another possible explanation would be that on average the first-authored papers of men are published in better journals than those of women, either because of bias at the journal or because the papers are indeed of a better quality, on average… And it might well be that on average men publish in better journals perhaps simply because men, perhaps, on average work more hours per week than women, due to marginally better health and stamina.” Idiotic much?

Maybe it’s just my angry womb speaking, but I’d say that perhaps making remarks that on average are so obtuse may, perhaps, on average be due to the (marginally) worse IQ of this reviewer.

Meanwhile, PLOS ONE has issued an announcement saying: “We have formally removed the review from the record, and have sent the manuscript out to a new editor for re-review. We have also asked the Academic Editor who handled the manuscript to step down from the Editorial Board and we have removed the referee from our reviewer database.”

Most journals use a single-blind review system, with reviewers remaining anonymous while knowing the authors’ identities, but many have argued for either double-blind review or full identity disclosure on both sides for the sake of transparency.

Another appalling detail in this story highlights this need; as Fiona Ingleby told Retraction Watch: “the reviewer acknowledged that they had looked up our websites prior to reading the manuscript (they said so in their review). They used the personal assessment they made from this throughout their review – not just gender, but also patronising comments throughout that suggested the reviewer considered us rather junior. Megan and I are both postdocs, but have about 20 years research experience and 40 published papers between us, so not exactly junior. Besides, it irks me that the review is so clearly influenced by this personal assessment rather than being based on the quality of the manuscript.”

One good thing may come from this mess: PLOS ONE stated that they are reviewing their processes to guarantee that future authors receive fair and unprejudiced reviews. They announced that they are working on new features to make the review process more open and transparent, since evidence suggests that review is more constructive and civil when the reviewers’ identities are known to the authors. Maybe other journals will follow the lead.

The most incredibly ironic outcome of this review is how, amidst all its dumbness, it manages to illustrate the main point of the article it refutes: there is indeed blatant gender bias in academia, and academia needs to catch up to the 21st century as soon as possible.

References

Else, Holly (30 April 2015). ‘Sexist’ peer review causes storm online. Times Higher Education. Accessed online 7 May, 2015.

amarcus41 (29 April, 2015). It’s a man’s world – for one peer reviewer at least. Retraction Watch. Accessed online 7 May, 2015.

Pattinson, Damien (1 May, 2015). PLOS ONE Update on Peer Review Process. Accessed online 7 May, 2015.

Image via iofoto / Shutterstock.

Ultrasound to Slow Alzheimer’s Disease? http://brainblogger.com/2015/03/23/ultrasound-to-slow-alzheimers-disease/ http://brainblogger.com/2015/03/23/ultrasound-to-slow-alzheimers-disease/#respond Mon, 23 Mar 2015 15:00:50 +0000 http://brainblogger.com/?p=19098 The media is buzzing after the recent publication of an article in the journal Science Translational Medicine, which suggests that we can treat Alzheimer’s disease non-invasively with ultrasound. As usual, though, the coverage may be more hype than is warranted by a thorough look at the data, or by the study’s last paragraph, where the authors discuss (some of) its weaknesses.

The study authors, from Queensland Brain Institute in Australia, used repeated scanning ultrasound in conjunction with intravenous injection of microbubbles (a gas core wrapped in a lipid shell) to safely induce blood brain barrier opening in a mouse model of Alzheimer’s disease (AD). The current understanding of how this method works is to vibrate the bubbles using ultrasound, causing them to expand and contract. This action displaces the vessel wall and transiently releases the tight junctions between blood vessel cells (endothelial cells) that keep a tight lock on the barrier.

Interestingly, they found that in doing so, they could reduce the number of the amyloid-beta plaques that are a characteristic (and defining) pathology of the disease. In the Amyloid Cascade Hypothesis, a dogmatic theory that has dominated the field for several decades, the aggregation of the amyloid-beta protein (a metabolite of the Amyloid Precursor Protein) into plaques results in neuronal death. This hypothesis has been continually revised and now blames the pre-aggregated, lower molecular weight amyloid oligomers for the ensuing destruction.

We will first begin by looking at the animal model in use. The APP23 mouse model of AD is widely used, but like any model it has its weaknesses. The problem with many of the AD mouse models is that they do not necessarily recapitulate the most common human form of AD: sporadic. These models rely on genetic manipulation of the amyloid precursor protein to generate amyloid-beta well above the physiologic levels seen in humans with the disease — basically, way too much. Right off the bat, they are biasing their results.

But let’s say, hypothetically, that the APP23 model is sufficient to answer questions about the causes of AD. The experimental groups consist of a wild type (normal) mouse, APP23 mouse without ultrasound (also called the “sham” group) and APP23 mouse with ultrasound (the treated group). Both of the APP23 groups received the microbubble injection. The authors show visually and biochemically that they can reduce levels of many forms of amyloid-beta with microbubble-coupled ultrasound compared to the sham group at no ancillary risk to the animal.

When they look at the behavioral manifestations of this, however, the effects are not as robust. In fact, it looks as if the sham group may get worse, which is difficult to explain. This is evident when the APP23 mouse (without ultrasound or microbubbles) is compared to the sham group. The treated group does show a slight restoration in the behavioral outcome, but considering the variable overlap with the sham group, it is difficult to interpret this as clinically significant.

Now, how is this working? Enter, microglia — maybe. As the innate immune cells of the brain, they have previously been implicated in the removal of amyloid-beta from the brain (including in my own work). The authors do show, rather convincingly, that microglia are eating up a lot of amyloid-beta in the treated group; almost twice as much as microglia in the sham group. That is a lot of amyloid for microglia, as they have occasionally been accused of inefficiency in this context.

The authors try to tie this in with the opening of the blood brain barrier by suggesting that albumin (the most common fluid-phase blood protein) leaks into the brain and somehow helps microglia gobble up higher levels of amyloid in the treated group. Not only is this ambiguously demonstrated in a cell culture model (in vitro) without any mechanistic evidence, but it sounds more like magic. Thankfully the authors concede that additional work should be completed on this front.

In looking for other mechanisms by which this reduction in amyloid could be occurring, the authors investigated microglial activation state, in which the only altered characteristics were those suggestive of phagocytosis (i.e., microglial cell morphology was more rounded and the cells expressed higher levels of the phagocytic marker CD68). This does lend credence to their amyloid phagocytosis hypothesis, and they suggest it is somehow a direct result of the ultrasound stimulation. This seems a bit specious, so let us look at this scenario a different way.

The results of this study are predicated on opening of the blood brain barrier and the influx of albumin, which magically helps microglia consume more amyloid. Why the authors went with this explanation of amyloid-beta reduction and not the low hanging fruit of infiltrating peripheral monocytes, is beyond me.

It has been shown in many studies that peripheral monocytes (which become macrophages when they enter into a tissue) are much more efficient at engulfing amyloid-beta than endogenous brain microglia. Unfortunately, unless molecularly scrutinized, the differences between microglia and monocytes are nuanced, and the methods used in this study fail to differentiate them. The lack of distinction between the two cell types is a hot topic in neurodegenerative diseases at the moment and it surprises me that the reviewers and editors of the journal overlooked this obvious weakness in the study.

To be fair, the ability to open up the blood brain barrier non-invasively is a huge strength of this study and despite the lack of evidence for how it works to reduce levels of amyloid-beta, there are other important potential uses for this technique. One such example would be to enhance delivery of drugs to the central nervous system, a feat by which the neuroscience, neurology and neurosurgery communities are currently baffled. Once an effective drug is developed, ultrasound could be the means for targeted delivery to the brains of Alzheimer’s patients.

References

Hardy, J. (2006). Alzheimer’s disease: the amyloid cascade hypothesis: an update and reappraisal. Journal of Alzheimer’s Disease: JAD, 9(3 Suppl), 151-153. PMID: 16914853

Leinenga, G., & Götz, J. (2015). Scanning ultrasound removes amyloid-β and restores memory in an Alzheimer’s disease mouse model. Science Translational Medicine, 7(278). PMID: 25761889

Town, T., Laouar, Y., Pittenger, C., Mori, T., Szekely, C.A., Tan, J., Duman, R.S., & Flavell, R.A. (2008). Blocking TGF-beta-Smad2/3 innate immune signaling mitigates Alzheimer-like pathology. Nature Medicine, 14(6), 681-687. PMID: 18516051

Zabel, M., Schrag, M., Crofton, A., Tung, S., Beaufond, P., Van Ornam, J., Dininni, A., Vinters, H.V., Coppola, G., & Kirsch, W.M. (2013). A shift in microglial β-amyloid binding in Alzheimer’s disease is associated with cerebral amyloid angiopathy. Brain Pathology (Zurich, Switzerland), 23(4), 390-401. PMID: 23134465

Image via Akbudak Rimma / Shutterstock.

Chappie – Just How Artificial is the Intelligence Behind This Robot? http://brainblogger.com/2015/03/06/chappie-just-how-artificial-is-the-intelligence-behind-this-robot/ http://brainblogger.com/2015/03/06/chappie-just-how-artificial-is-the-intelligence-behind-this-robot/#respond Fri, 06 Mar 2015 17:46:22 +0000 http://brainblogger.com/?p=18942 Columbia Pictures’ Chappie, directed by Neill Blomkamp (District 9, Elysium), takes place in Johannesburg, South Africa, only a few years in the future. In response to massive crime, the government has begun using robotic police droids called “scouts.” As they are relied upon more and more, the world falls under the thumb of the “protection” provided by this autonomous police force. One police droid, Chappie, is stolen and given new, experimental programming, allowing it to become the first robot with the ability to think, feel and evolve for itself. Some begin to see Chappie as a threat to their interests and a danger to mankind; they vow not to allow anything to get in their way of eliminating Chappie and any like him.

As he did with District 9 and Elysium, Blomkamp creates an intriguing story that forces its audience to question what they believe is possible. This is an action film – a very good one at that – but it has an underlying intelligence and social consciousness that provokes thought. One issue it explores is the idea of a robot with not-so-artificial intelligence: Chappie is sentient, self-aware, and displays a range of human-like characteristics, such as morality.

Is this a good thing?

The character Vincent Moore (Hugh Jackman) doesn’t think so. He believes that a thinking robot carries too many unknowns and could even represent the end of mankind. Vincent is a proponent of “Moose,” a much larger, more expensive, and far more destructive robot. Moose is the ultimate droid video game: it is controlled by a remote human operator wearing a helmet fitted with neural transmitters that allow the droid to be directed by the human’s thoughts. Moose puts those thoughts into action on the field, leaving the human safely removed from the dangers of battle. This is quite different from Chappie, who can reason independently of any controlling human.

Blomkamp’s film not only provides solid sci-fi/action thrills, it offers a serious contemplation on the implications of the things we’re capable of. That it accomplishes both levels – the sci-fi/action level and the analytical level – unusually well makes this a particularly strong film.

But Chappie’s world is not some far off dream. Chappie is a film where today’s science fiction provides a glimpse into the very near future’s possibilities. In fact, Chappie shows what we are already capable of – or at the very least, what we are on the verge of being capable of.

I spoke with Wolfgang Fink, PhD, a world-renowned expert in robotics, about the technology represented in Chappie. Fink founded the Visual and Autonomous Exploration Systems Research Laboratory at the California Institute of Technology, as well as the Visual and Autonomous Exploration Systems Research Laboratory at the University of Arizona.

Fink explained that the robotics in Chappie are not science fiction. “As far as the robotics (in Chappie) are concerned, that definitely is something that is already here, more or less,” he told me. “For example, we have robots that are bipedal, with the capability to climb, fly, dive, walk, run, etc. As such, the essence of the hardware depicted in the film, that is already a reality.” That said, Fink pointed out that we do not yet have the ability to create fully sentient, self-aware robots.

The robot police scouts in Chappie are an example of something we do have the technology to create. Such technology falls directly into the real-life work of Fink and others. For example, Dennis Hong, PhD, and his team at the robotics department at UCLA have developed a life-sized, humanoid robot named THOR-OP (Tactical Hazardous Operations Robot), designed for disaster response applications. THOR actually looks fairly similar to Chappie: both robots use electric power, both are modular (meaning parts can easily be replaced), and both can get into and drive a car, climb ladders, and execute a number of useful tasks that would otherwise have to be performed by humans.

The scouts reflect where we currently are in artificial intelligence, which can essentially be classified as rule-based systems. In other words, a scout robot can encounter a certain situation and will act in the specific way it is programmed to act in that circumstance. A scout robot can see that a car is parked incorrectly, and it knows to write a ticket. A scout can see a person pull a gun, and it knows to then pull its own gun and try to contain the person. These rules can be quite complex, but such systems have only a limited capacity to learn new rules.
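
To make “rule-based” concrete, here is a minimal sketch in Python of the condition-action pattern such a system follows; the situations and actions are invented for illustration and are not taken from the film’s fictional software:

# Each rule pairs a condition on the perceived situation with an action.
RULES = [
    (lambda s: s.get("car_parked_illegally"), "write a ticket"),
    (lambda s: s.get("weapon_drawn"), "draw weapon and contain suspect"),
]

def decide(situation):
    # Fire the first rule whose condition matches the perceived situation.
    for condition, action in RULES:
        if condition(situation):
            return action
    # No rule matches: fall through to a default behavior.
    return "continue patrol"

print(decide({"car_parked_illegally": True}))  # -> write a ticket
print(decide({"crowd_rioting": True}))         # -> continue patrol

The second call is the telling one: faced with a situation its programmers never anticipated, a rule-based system can only fall through to its default, which is exactly the limitation that separates a scout from a sentient, self-teaching Chappie.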

While the current technology is not being used for the purpose outlined in the film (i.e., a robot police force), we do have systems that follow programmed scripts with limited intelligence (e.g., the ability to learn rules).

The filmmakers intentionally set out to have the scouts (including Chappie) reflect what we are currently capable of. Blomkamp’s goal was reality, not fantasy. Joe Dunckley, Specialty Props Effects Supervisor of Chappie, said that Blomkamp:

wanted it to be real – he didn’t want it to be over-the-top in its functionality. He couldn’t have laser beams pop out of nowhere. It had to be tough, but it also had to look like something a government could afford in a few years’ time.

Because of its grounding in the real state of current technology, Chappie has a stronger impact than typical sci-fi/action films. Blomkamp says:

The idea was to take something as unhuman as a robot – especially a police robot – and give him complete human characteristics, to the point that he becomes more emotional than the human characters.

The authenticity and impact of Chappie is increased by the fact that he is not a purely CGI creation. Instead, Chappie is played by actor Sharlto Copley, who performs in each scene alongside the other characters. This makes Chappie, the character, feel real. It also helped the other actors, since they were able to interact with a real character, and not a green-screen. In fact, the audience is able to connect with Chappie, as a character, more than the human characters at times.

For example, there is a scene where Chappie is left alone in a sketchy, remote area surrounded by a group of dangerous thugs. The thugs, believing Chappie to be just another police droid, show their hatred of the oppressive police force: they throw rocks at the robot, yell hateful things, hit him with a pipe, and even throw a Molotov cocktail at him, lighting him on fire. Sure, Chappie physically survives: he’s a robot. However, at this point, Chappie is similar to an adolescent child. In this scene, Chappie is at the very beginning of his evolution. He is impressionable and innocent. Despite his titanium build, the audience has established a true emotional connection to him because of these qualities, just as it would to a human child. As such, it is heartbreaking to see Chappie in such terrifying circumstances. You truly grow to feel for him, even though he is a machine.

As Fink explains, while the film does reflect real technological abilities we have today, the full capabilities of Chappie are still just beyond our reach. While we can program a robot to function fully in a well-defined environment, once that robot meets an unknown, it will be much less useful. While we have made tremendous strides in the past 60 years or so in the development of artificial intelligence, we have not reached the point of creating a fully autonomous robot that can learn and build upon its learning in quite the self-aware way that Chappie does.

It’s just a matter of time, though. In the meantime, films like Chappie encourage us to think and prepare for the inevitable real-life questions that Chappie and Moose represent.
