Sajid Surve, DO – Brain Blogger: Health and Science Blog Covering Brain Topics

What is Proprioception?
Tue, 09 Jun 2009

Everyone learns in school about the five senses: vision (sight), audition (sound), olfaction (smell), taction (touch), and gustation (taste). These senses govern our interaction with the external world. In addition, we have several senses responsible for our internal functioning. One of the most important of these is proprioception, or position sense. Proprioception affects our lives every moment of every day and allows us to accomplish complex tasks that would otherwise be impossible. The sense is so fundamental to our functioning that we take its existence for granted.

Proprioception allows humans to control their limbs without directly looking at them. Take, for example, the complex task of driving. We are able to keep our eyes on the road while simultaneously adjusting our arms and hands on the steering wheel and applying the appropriate amount of pressure to the pedals to maintain speed. Talented drivers can also change the radio station, eat small meals, reach for something in the rear seat, or perform any of a host of other tasks while keeping their eyes on the road ahead. If humans had to observe their limbs to accomplish tasks, we would have to look down at our feet every time we wanted to move from gas to brake, or stare at our arms to make a right turn. The distraction would make driving nearly impossible.

Human beings do not have a single organ for proprioception. Instead, the sense is processed by the nervous system as a whole. Inside every muscle and joint lie tiny meters called muscle spindles and Golgi tendon organs that constantly measure the amount of tension and degree of contraction. This information travels up a discrete highway in the spinal cord called the spinocerebellar tract and makes its way to the cerebellum. The cerebellum accepts information from every muscle and joint in the body and calculates where the limbs must be in space. The system is not perfect, but it gives a rough estimate sufficient for basic task completion. We can use our vision to confirm limb position for more technically demanding tasks.

Like most physiological processes, proprioception can be improved with challenging practice, and it can also be impaired by disease or disuse. A concert pianist can play incredibly complex music with their eyes closed because they have trained the proprioceptive sense of their fingers to be precise enough for the task. If that same pianist tried to play a piece they had never seen before, they would have to look at their hands to master a complicated section.

By contrast, patients recovering from stroke often have difficulty with balance and coordination. Proprioception is also impaired by diseases or injuries affecting the musculoskeletal system, like an ankle sprain or diabetic neuropathy. Patients with these conditions are predisposed to falls and repeat injuries, which compounds their problems. For this reason, physical therapists can work with patients on proprioceptive training to help them regain a stronger position sense.

Training usually consists of working on uneven or irregular surfaces, and balancing on affected joints with a blindfold to remove visual confirmation. Although these exercises are demanding, patients can usually see functional benefits within a few weeks. For any readers who have had previous impairments to their proprioception, comments below regarding any of these questions would be much appreciated:

How did you notice that your proprioception was impaired?

What kinds of therapies have you tried to improve your proprioception?

What impact has a lack of proprioception had on your daily life?


Victor, M., & Ropper, A. Adams and Victor's Principles of Neurology, Seventh Edition. McGraw-Hill Professional, 2000. Chapter 9. ISBN 0070674973.

Reflections on Plasticity
Wed, 03 Jun 2009

Neuroplasticity is a relatively new concept for researchers. Until the 1970s, scientists held firm to the belief that once we exit childhood our neurons are fixed and we are unable to grow new ones, except in a few select areas of the brain such as the hippocampus, where memory is processed. Since that time, new research and tools such as functional MRI have suggested that our brains are constantly molded and shaped by our experiences, and maintain some degree of plasticity throughout life. Indeed, the work of such pivotal researchers as Michael Merzenich, PhD, and William Jenkins, PhD, has demonstrated that our brains conserve real estate: portions of the brain left idle by injury or disuse are remapped so that they can process different information. For example, if a patient lost an eye in a traumatic accident, over time the area of the visual cortex responsible for processing input from that eye would convert and begin processing information for the remaining eye.

Current thinking in the field of neuroplasticity holds that brain development is wholly dependent upon the sum of our experiences. We all begin life as infants with no particular skill sets or abilities to carry out tasks. As our brains develop we steadily gain more sophisticated control of our bodies, but only in response to challenges. If a skill set is not required, the brain has no need to invest time or energy into developing a pathway for it. Only through steady experience and repetition does the brain lay the groundwork for semi-permanent change. As a result, when our experiences change, so do our brains.

Plasticity does not only apply to the brain, however. In the body, our ability to adapt to new situations is equally staggering. Evolution occurs on a micro scale with our everyday decisions, and our bodies have the capability to remake themselves to meet the demands placed upon them. If our desire is to have a runner’s body, then all we have to do is run. In response to the increased physical demands of that activity, our bodies respond over time by improving cardiovascular output, trimming excess body fat, increasing lean muscle mass, and improving glycogen stores to cope with the changes. Over a short period of time, the body can transform itself to serve its new purpose.

What lesson is to be learned from the study of plasticity? Human beings are very much in the driver’s seat when it comes to our long term physiological outlook. Even small changes in our lifestyle when applied consistently can fundamentally alter not only our way of thinking, but the actual biochemical composition of our bodies and brains. The first question to ask is, “Who do I want to be tomorrow?”

Recommended Viewing

Michael Merzenich, "Exploring the re-wiring of the brain" (video)


Merzenich, M., Kaas, J., Wall, J., Nelson, R., Sur, M., & Felleman, D. (1983). Topographic reorganization of somatosensory cortical areas 3b and 1 in adult monkeys following restricted deafferentation. Neuroscience, 8(1), 33-55. DOI: 10.1016/0306-4522(83)90024-6


Medical Controversy – When Does Life Begin?
Sun, 10 May 2009

One of the most contested questions in history is a seemingly simple one: when does life begin? Different cultures and societies have battled to answer this question, and to date no consensus has been reached. The answer, of course, has profound ethical, legal, moral, and philosophical implications. As the United States debates the merits and pitfalls of topics like embryonic stem cell research and abortion, the arguments over when life begins have been renewed. Along the timeline from preconception through birth and beyond, there are several stops where one group or another has drawn a line in the sand and proclaimed that life has officially begun. In the interest of providing some clarity on this issue, let us examine the rationale behind why these groups picked their points. A textbook on developmental biology provides the framework.

Preconception

The earliest stopping point is held by many members of the Catholic Church, with the proclamation that "every sperm is sacred." The rationale is that every sperm has the possibility to fertilize an egg, become implanted, and eventually grow into a human being. Since God's charge is to go forth and procreate, any hindrance to that process, such as the use of condoms or birth control pills, interferes with God's plan and is therefore not allowed.

Conception

The greater religious community generally views the "moment of conception" as the standard for when life begins. However, the definition of conception is subject to variability. Some take conception to mean the act of ejaculation. Others consider it the process of fertilization. Still others take it to mean the fusion of genetic material into a new set of chromosomes. The problem with any of these definitions is that the process is not instantaneous. From the time of ejaculation, sperm take about 7 hours to become active and able to fertilize an egg. Once a sperm meets the egg, a chemical cascade begins and the sperm bores its way through the egg's outer layers, which may take up to an hour. Once the sperm actually enters the egg, it is another 12 hours before the sperm's DNA makes its way to the egg's DNA, and then another 24 hours for the restructuring and packaging of the new chromosomes. All told, the "moment of conception" can take anywhere from 2-3 days to complete.
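As a back-of-the-envelope check on that timeline, the stage durations quoted above can simply be summed. This is a sketch using the approximate figures from the paragraph; the durations are rough estimates from the text, not precise biological constants.

```python
# Approximate duration of each fertilization stage, in hours,
# using the rough figures quoted in the text (not precise constants).
stages = {
    "sperm activation after ejaculation": 7,
    "sperm boring through the egg": 1,
    "sperm DNA reaching the egg's DNA": 12,
    "restructuring and packaging of chromosomes": 24,
}

total_hours = sum(stages.values())
total_days = total_hours / 24

# Already close to two days before counting any delay between
# ejaculation and the sperm reaching the egg, consistent with the
# text's estimate of roughly two to three days overall.
print(f"~{total_hours} hours, or ~{total_days:.1f} days")
```

Running the sketch prints `~44 hours, or ~1.8 days`, which is why even the most generous definitions of a single "moment" stretch across days rather than seconds.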

Gastrulation

Another argument raised against the "moment of conception" line of thinking is the twinning argument. Once the genetic material is completely packaged together, a new individual is created. However, for as long as 12-14 days afterward, the embryo can split into twins or more multiples. That process creates more than one individual with identical genetic material from the same moment of conception. To account for this discrepancy, some argue that life begins at gastrulation, when the twinning window has closed and the embryo, having implanted in the uterus, is committed to growing into one human being. Supporters of this theory would therefore support stem cell research, which harvests cells from embryos that are neither intended nor able to be implanted into a uterus.

Week 8

The eighth week of pregnancy is a special one, because at this point the precursors to all organs have formed. Some philosophers therefore argue that with the beginnings of a brain, the fetus has the ability to think and react, and that this marks the onset of life. Opponents counter that the rudimentary nervous system is not functional at 8 weeks: the fetus cannot process information or move in response to a stimulus, and is therefore not yet alive by this standard.

The Quickening

Those same groups that argue against the week 8 model suggest that life begins with the "quickening," when the fetus begins to exhibit voluntary movement inside the womb, usually around 14-16 weeks. At this point the fetus is able to react to external stimuli, which is held as the standard for life.

Week 20

Although the fetus can move at week 14, the movements are little more than jerky reflexes. They are not driven by higher cortical functioning. Therefore, another school of thought is that life begins at week 20, when the thalamus is completely formed. The thalamus is the relay center of the brain, and helps to connect the cerebral cortex to the spinal cord and peripheral nerves.

Week 25

A sizable contingent asserts that life begins at 25 weeks. The rationale for this starting point is based on our definition of death, which is far less disputed: death is considered the time when electroencephalography (EEG) activity ceases. EEG measures brain activity and must demonstrate regular wave patterns to be considered valid. By this rule, the onset of life would be the time when fetal brain activity begins to exhibit regular wave patterns, which occurs fairly consistently around week 25. Before that time, the EEG shows only small bursts of activity without sustained firing of neurons.

Birth

Perhaps the second most frequently held conviction is that life begins at childbirth. In Jewish Talmudic law, for example, once the head of the child is delivered, the child cannot be touched and is granted a right to life equal to the mother's. Other religious groups maintain that the soul is delivered to the newborn with the first breath of air.

Self-Awareness

A small group of philosophers maintains that the criterion for human life is self-consciousness, or self-awareness, which does not develop until well into childhood. On this view, abortion is morally equivalent to infanticide, and both are condonable under certain circumstances. The viewpoint is extreme and has generally been rejected by mainstream ethicists and theologians.

While this accounting is by no means comprehensive, and perhaps oversimplifies some concepts for the purpose of clarity, let it serve as a starting point for obtaining more information. With debate on this topic wide open, and no clear answers in sight, the best hope is to understand all viewpoints and draw an informed conclusion as to when life begins.


Gilbert, Scott F. DevBio, a Companion to Developmental Biology, Eighth Edition. Sinauer Associates Inc., March 2006. Chapter 2, subsection 1.

Are Humans Hard-Wired to Torture?
Mon, 04 May 2009

With the reign of the Bush administration at an end, one issue that has plagued his legacy is government-sanctioned torture. The United States government was involved in several controversial actions, ranging from the indefinite detention of so-called enemy combatants at Guantanamo Bay to the outright abuses and torture at Abu Ghraib. The almost universal response of the perpetrators of torture was, "I was just following orders." Most citizens have difficulty accepting this argument as legitimate, and demand that the torturers be held accountable for their actions as criminal accomplices. We are quick to demonize these individuals as horrible outliers of our society, an unsavory fringe clearly well out of bounds with the norms of human behavior. However, research into the psychology of torture and obedience tells quite a different story.

In 1963, a landmark study called "Behavioral Study of Obedience" was published in the Journal of Abnormal and Social Psychology. Lead investigator Dr. Stanley Milgram carried out an experiment in which subjects were recruited to administer a learning test to a volunteer while an experimenter observed. The experimenter and subject were kept in a separate room from the volunteer taking the test, with the rooms connected by an intercom. In reality, the "volunteer" was a taped recording of an actor responding to the questions, so that every subject encountered the same situation at each stage. As the subject began administering the test, the experimenter instructed them to punish the volunteer for every wrong answer using a shock generator. As the test progressed, the voltage steadily increased from 15 to 450 volts, and the volunteer complained more and more that the shocks were hurting, eventually screaming and pleading for the shocks to stop, and finally falling silent for the last series of shocks. If the subject refused to administer the shocks at any point, the experimenter would remind them that the test required their full participation, and that they were not responsible for what happened to the volunteer.

Amazingly, roughly 60% of the subjects followed the experiment to the end, delivering shocks despite the screams of pain from the recipient. The study notes that most of the test subjects became highly distressed during the experiment, with reactions such as profuse sweating, shaking, stuttering, and, oddly enough, uncontrollable laughter.

The Milgram experiment was repeated numerous times around the world during the 1960s and early 1970s, and the compliance rate was uniformly around 60-65%. Variations of the study were attempted, such as substituting a puppy for the volunteer, yet compliance remained the same. The factors that did lower compliance were moving the volunteer into the same room as the subject, or moving the experimenter out of the room; in these cases compliance dropped to 20-30%. Regardless of variation, the experimental model was so distressing to test subjects that it was deemed unethical in the mid-1970s and further study was halted.

For the first time in over 30 years, a scientist named Dr. Jerry Burger managed to obtain approval for a study partially reproducing the Milgram experiment, publishing his findings in the journal American Psychologist in 2009. In Dr. Burger's model, subjects administered the test only up to the 150-volt mark, at which point the experimenter stopped them from going further. As a twist, Dr. Burger also had some subjects witness a planted tester who refused to continue administering the test; he hypothesized that seeing a prior refusal might embolden test subjects to refuse as well. Nevertheless, Dr. Burger's results were comparable to Dr. Milgram's, and a witnessed refusal did not significantly change compliance.

Whatever moral compass human beings claim to possess, this research suggests that when presented with a perceived authority figure, the majority will override that compass in favor of obedience. The uncomfortable conclusion, then, is that most human beings may in fact be hard-wired to torture.


Burger, J. (2009). Replicating Milgram: Would people still obey today? American Psychologist, 64(1), 1-11. DOI: 10.1037/a0010932

Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371-378. DOI: 10.1037/h0040525

Milgram, Stanley. Obedience to Authority: An Experimental View. Harper-Collins, 1974. ISBN 0-06-131983-X.

Curry in a Hurry – The Health Benefits of Turmeric
Sat, 28 Mar 2009

Over the past few decades, the emergence of India as an international superpower has been slow and steady. In the United States, this has translated into a simmering public awareness of Indian culture and an ever-increasing importation of Indian products, whether Bollywood movies or Indian cuisine. One of the main staples of Indian cooking, turmeric, has been getting a lot of press recently, not for its flavoring prowess but for its health benefits. Presented here is some of the current research regarding this amazing yellow spice.

Antioxidant Properties

The active ingredient in turmeric is called curcumin. The antioxidant and free-radical-scavenging capacity of curcumin is on par with that of vitamins C and E, and several animal studies have demonstrated its ability to prevent oxidative damage to heart cells, blood vessels, kidney cells, and liver cells, as well as lipid degradation. Rats pre-treated with curcumin fared better when strokes and heart attacks were induced.

Antimicrobial Properties

An oil produced from the turmeric plant has been studied as an antimicrobial and was shown to be effective in killing E. coli, Pseudomonas aeruginosa, Staphylococcus aureus, and several Bacillus strains. Other culture studies have also suggested that turmeric may work as an antifungal and antiviral agent.

Wound Healing

Turmeric has long been considered a home remedy in India for surface wounds, and scientific research has largely supported the practice. Wounds treated with curcumin had faster healing times, increased collagen synthesis, higher levels of transforming growth factor, and increased neovascularization. Studies looking at the ingestion of turmeric for the treatment of gastric ulcers also showed preserved epithelial cells and shorter ulcer recovery times. Some have suggested that curcumin may be a helpful pre-treatment to prevent skin damage in patients who require radiation therapy for cancer.

Cancer Prevention

Curcumin has been shown to induce apoptosis in certain cancer cell lines in vitro, including prostate and breast cancer. Animal models have demonstrated a protective effect of curcumin against a wide range of cancers, including colorectal, stomach, skin, liver, oral, and breast cancers. Pro-inflammatory cytokines such as tumor necrosis factor alpha and interleukin 1-beta have also been shown to be downregulated in the presence of curcumin.

Angiogenesis Regulation

The process of angiogenesis is responsible for the creation of new blood vessels. Under physiologic conditions, this process is necessary for growth, repair, and embryonic development. Left unchecked, angiogenesis can go awry, contributing to conditions like diabetic retinopathy, rheumatoid arthritis, and hemangiomas, and it may also be responsible for the metastasis of tumors to distant sites. Curcumin seems to help regulate angiogenesis and prevent such damage in animal models. The mechanism is not yet clearly understood, but further studies are underway.

As with all dietary intake, moderation is key. While no recommended daily allowance has been established for turmeric, the general consensus is that beneficial properties are conferred at doses of roughly 1-2 grams per day; exceeding that range could lead to complications and side effects. It is also unclear how curcumin interacts with exogenous substances like prescription medications, so caution is advised at higher intakes. Regardless, the newfound surge of popularity for Indian cuisine and culture has introduced Western audiences to turmeric, and the spice has certainly earned a closer look as a possible superfood.


Mann, C., Neal, C., Garcea, G., Manson, M., Dennison, A., & Berry, D. (2009). Phytochemicals as potential chemopreventive and chemotherapeutic agents in hepatocarcinogenesis European Journal of Cancer Prevention, 18 (1), 13-25 DOI: 10.1097/CEJ.0b013e3282f0c090

Lithium as a Neuroprotectant?
Thu, 05 Mar 2009

Lithium is a chemical element found naturally as an alkali metal. Its ionic form can be used to make salts that are readily available for medicinal purposes. Although the exact mechanism of action of lithium is unknown, it is believed to function by simultaneously raising serotonin levels and lowering norepinephrine levels. Lithium is used primarily as a mood stabilizer, as it has the unique ability to treat both mania and depression. Unfortunately, lithium has a very narrow therapeutic window, meaning the blood levels that provide benefits are very close to the blood levels that cause side effects and toxicity. Because of the need for such close monitoring of blood levels, the medication has largely fallen out of favor except for certain diseases like bipolar disorder.

One of the curious things about lithium is its effect on the brain. Patients with bipolar disorder who are treated with lithium often have imaging studies of the brain as part of their routine management. Radiologists noticed that patients on lithium develop hypertrophy of the brain, meaning an increase in brain tissue volume compared to average. Scientists began to wonder whether the lithium was somehow causing new brain cells to grow, and whether there might be an application for diseases that cause cell death in the central nervous system.

One of the more severe examples of such a disease is amyotrophic lateral sclerosis (ALS), or Lou Gehrig's disease. ALS produces progressive degeneration of the nerve cells in the central nervous system that supply the muscles of the body, leading to progressive weakness, paralysis, and eventually death. The causes of ALS are largely unknown, and there is no known cure; the average span from diagnosis to death is 3 to 5 years, making it a devastating disease. The only current treatment option for ALS is a medication called riluzole, which functions as a neuroprotectant, meaning it delays the degenerative process. Even this medication adds only a few months of time and slightly prolongs the time until a ventilator is required to breathe.

Because the prognosis for ALS is so poor, many trials are underway for substances that may serve as better neuroprotective agents. Lithium is one of the chemicals that holds a fair amount of promise. In February 2008, scientists from Italy published a study in the Proceedings of the National Academy of Sciences in which they took 44 patients with ALS and divided them into two groups: one receiving riluzole, and the other receiving riluzole plus lithium. The two groups were case matched, meaning the researchers tried to make them as equal as possible in terms of disease severity. The groups were tested for their functional level at the beginning of the trial and retested 15 months later. Of the patients who received riluzole alone, about a third had died, and the remainder had significant functional decline in the interim. By contrast, of the patients receiving riluzole and lithium, not a single patient had died, and the group as a whole showed only a mild decrease in functional level.

The results of this study were astonishing, and scientists are now hopeful that lithium may play a role in many neurodegenerative diseases. Clinical trials are already underway for the use of lithium in diseases like Parkinson’s disease, Huntington’s disease, spinal cord injury, and even Alzheimer’s dementia. If the results of the 2008 study can be reproduced and translated into additional disease states, this old medication may be able to provide millions of patients with neurodegenerative disorders with a new lease on life.


Fornai, F., Longone, P., Cafaro, L., Kastsiuchenka, O., Ferrucci, M., Manca, M. L., Lazzeri, G., Spalloni, A., Bellio, N., Lenzi, P., Modugno, N., Siciliano, G., Isidoro, C., Murri, L., Ruggieri, S., & Paparelli, A. (2008). Lithium delays progression of amyotrophic lateral sclerosis. Proceedings of the National Academy of Sciences, 105(6), 2052-2057. DOI: 10.1073/pnas.0708022105

Logistical Barriers to Stem Cell Research
Sun, 01 Mar 2009

I recently attended a lecture by Wise Young, MD, PhD, a prolific researcher in the fields of spinal cord injury and stem cells. The lecture was fascinating, and part of the discussion covered the current model for stem cell research. We currently have two types of stem cells available for study: embryonic stem cells and induced pluripotent stem cells (iPSCs). As is well known, embryonic stem cells are derived from fertilized embryos that are allowed to multiply briefly before the cells are harvested. iPSCs are adult cells that are chemically treated to revert them back to a stem cell state.

To better understand this topic, one needs a little background in embryology. Once an embryo is created, the cells begin to multiply and differentiate into all the other types of cells in our bodies. At the very early stages, the cells are grouped into three layers, called the ectoderm, mesoderm, and endoderm. Cells in each of these layers then further differentiate into precursor cells, which in turn differentiate into our final cell types. The ectoderm eventually differentiates into our skin, as well as our brain and spinal cord. The mesoderm differentiates into bone, muscle, and cartilage, as well as our heart, blood, blood vessels, and reproductive organs. The endoderm is responsible for forming our internal organs. Generally speaking, once a cell is designated into one of these three layers, it is committed to becoming a cell within those parameters: a mesodermal cell could not become a hair follicle, and an ectodermal cell could not become part of the liver.

Once we have been fully differentiated and are functional human beings, the stem cells do not disappear. They stay in small quantities around the structures they formed. Ectodermal stem cells are still present in the skin and brain. Mesodermal stem cells are still present circulating in the blood and bone marrow. Endodermal stem cells reside in the internal organs. They continue to produce precursor cells, which are then useful in tissue repair. As an example, the mesodermal stem cell can produce fibroblasts, which are precursor cells that lay down collagen for use in bones and connective tissue.

Currently, stem cell researchers are only able to control ectodermal and mesodermal cells; science has not yet devised a way to control the differentiation of endodermal cells. This means that organ diseases such as diabetes or hepatitis are not yet candidates for stem cell cures. However, through chemical treatment, researchers have found a way to convert a mesodermal stem cell into an ectodermal stem cell and vice versa. They have also found a way to take precursor cells and revert them back into stem cells. As a practical example, this is significant because scientists could now take a skin cell, chemically revert it back into an ectodermal stem cell, convert it into a mesodermal stem cell, and then redifferentiate it into a blood precursor to treat blood diseases. This reprogramming process is what produces iPSCs.
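The conversion chain just described can be pictured as a small directed graph and searched for a route from one cell type to another. The sketch below is purely a toy illustration of that reasoning: the cell-type names and allowed transitions are made up from the conversions mentioned in the text, not a real laboratory protocol.

```python
from collections import deque

# Toy graph of the conversions described in the text.
# Names and edges are illustrative only, not a lab protocol.
transitions = {
    "skin cell": ["ectodermal stem cell"],             # revert precursor to stem cell
    "ectodermal stem cell": ["mesodermal stem cell"],  # chemical conversion between layers
    "mesodermal stem cell": ["ectodermal stem cell", "blood precursor"],
    "blood precursor": [],                             # differentiated end state
}

def conversion_path(start, goal):
    """Breadth-first search for a shortest chain of conversions."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in transitions.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of conversions exists

print(" -> ".join(conversion_path("skin cell", "blood precursor")))
```

The search recovers the chain from the paragraph (skin cell to ectodermal stem cell to mesodermal stem cell to blood precursor), and returns None for targets such as endodermal cells, mirroring the point that no controlled route to them yet exists.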

Embryonic stem cells are difficult to maintain. They are built to differentiate, not to remain stem cells. The process of keeping an embryonic stem cell in its undifferentiated state is tedious and laborious, requiring both manpower and money; if it is not carried out perfectly, all the stem cells differentiate and the researcher is left with nothing. By contrast, iPSCs are produced on demand: if mesodermal stem cells are required, they can be harvested from blood or bone marrow, or chemically produced from skin cells as outlined above. The downside is that the technology is fairly new and can only be performed in a few highly specialized labs. For now, most research is carried out on embryonic stem cells until iPSCs become easier to produce.

In terms of availability, much of the current stem cell supply is harvested through blood banks and umbilical cord blood banks. The problem is that the vast majority of umbilical cord blood banks are private, meaning the blood is frozen and maintained separately, unavailable to researchers. Of all the blood residing in private cord blood banks, less than 1% is ever used. Cord blood samples are so rarely used or tested, in fact, that it is unclear how long cord blood can be maintained in a frozen state, or whether it will remain viable after thawing. Private blood samples are also not indexed or released to the national stem cell databanks, meaning a potential stem cell recipient would never know they had a perfect match sitting in a private bank.

In the near future, Dr. Young predicted that the way we process, access, and research stem cells will radically change. Once a commercially viable process to produce iPSCs is perfected, the need for embryonic stem cells will disappear. People could submit a swab from the inside of their cheek, and a week or two later receive a plastic bag filled with ectodermal or mesodermal stem cells for use in transplantation or research. Cracking the code of endodermal stem cells will also dramatically increase the range of diseases that stem cells have the ability to treat. Until that time, we have no choice but to look to the future with optimism and wait patiently.

Doctors as Mirrors – A Reflection on the Doctor-Patient Relationship Fri, 27 Feb 2009 14:12:33 +0000 Has this scenario ever happened to you or somebody you know? You get a raving endorsement from a friend or family member about the doctor they saw: how they're the best thing ever, and how you absolutely MUST go see them immediately. How many times has somebody actually gone to see a doctor based on that recommendation, only to be underwhelmed and confused about the beaming review?

These are cases where people fell victim to the cult of personality. The doctor-patient relationship, like any other relationship, takes many forms, and what works for one person may not jibe with another. Because medicine in the United States is an essentially free market, doctors tend to carry a kind of gravity about them. Patients who like their doctor's style stick around and recommend others, while relationships that don't work fall apart and those patients drift away to another doctor who better suits them. The end result is that a doctor's practice and patient base become a direct reflection of his or her personality.

Of course every relationship is different and nuanced, but doctoring styles tend to lean toward one of two broad flavors:


I’m the doctor, you’re the patient.

Doctors of this tradition believe that they are the absolute stewards of the human body, and that patients come to doctors to be told how to improve their health and treat their disease. In an encounter, there is generally not much room for questions or discussion on the patient's part. Furthermore, doctors who practice this way may take offense at, or become disdainful of, patients who second-guess their decisions or ask for other opinions.


I'm the consultant, you're the steward of your own body.

Doctors of this ilk view themselves more as guides, or consultants. They have spent the time studying and honing their skills to become experts in the human condition, and believe the patient is the ultimate steward of his or her own body. Patients come to physicians for their expertise, to gain insight into what may be happening to them and the information required to make their own informed decisions about health management. Doctors who subscribe to this philosophy tend to push a lot of responsibility onto their patients, who may or may not be in a position to handle it.

Needless to say, conflicts between doctor and patient personalities, and between their expectations of the doctor-patient relationship, arise frequently. So what kind of doctor do you have?

The Hidden Dangers of Soy Tue, 24 Feb 2009 17:14:27 +0000 As the American diet shifts toward foods perceived as natural and “healthy,” a surge in soy products has taken hold of the market. Soy is touted as a superfood full of protein, fiber, and antioxidants, able to do everything from lowering cholesterol to fighting cancer. However, not all the physiological properties of soybeans are positive. Compared to the number of outlets proclaiming the benefits of soy, relatively few sources discuss the side effects of a soy-rich diet.

Unfermented soy products like soy milk and tofu form the bulk of American soy consumption. In Asian countries, which consume higher amounts of soy overall, the predominant form is fermented, as found in soy sauce and miso. The fermentation process is significant because it removes much of the biologically active phytates and isoflavones from the soy.

Phytates are a form of insoluble fiber, and in low quantities they are beneficial because they can absorb low-density lipoproteins and thereby lower bad cholesterol levels. They are also known to reduce the risk of gastrointestinal cancers. In higher quantities, however, they can block the absorption of other nutrients, leading to vitamin and mineral deficiencies. Calcium, magnesium, copper, and zinc in particular are prone to blockage by phytates, which can paradoxically increase the risk of the very osteoporosis that soy is supposed to prevent.

Soy is purported to prevent osteoporosis because of its isoflavone content. Isoflavones are complex chemical compounds manufactured by plants that mimic some human hormones physiologically. One isoflavone found in large quantities in soy, genistein, binds with some affinity to human estrogen receptors. This effect is what gives soy protein its ability to ease menopausal and premenstrual symptoms, and potentially to prevent osteoporosis as mentioned above. Conversely, overconsumption of genistein can lead to the same problems associated with high estrogen levels, namely increased risks of breast, uterine, and ovarian cancer. In males, the weak estrogen activity has translated into a reported reduction in the risk of prostate cancer; high intake has also been suggested to suppress testicular cell lines, although this has not been proven. Perhaps the most widely accepted consequence of a high-soy diet is an impact on thyroid function: hypothyroidism and autoimmune disorders such as Hashimoto’s thyroiditis have been linked to excess soy intake in several animal studies.

Although the benefits of soy protein are well documented, no substance is a panacea, and several problems can arise when soy is ingested in excess. Most dietitians recommend about 25 grams of soy protein per day, and even the American Heart Association does not recommend exceeding 50 grams per day. The public would do well to adhere to these guidelines to avoid any potential complications of excess soy intake.
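The two thresholds quoted above can be expressed as a simple classifier. This is an illustrative sketch only: the function name and the category labels are my own, and the gram figures are the guideline numbers cited in the text, not medical advice.

```python
# Illustrative check of a daily soy-protein intake against the guidelines
# quoted above (~25 g/day typical recommendation; 50 g/day AHA ceiling).
RECOMMENDED_G_PER_DAY = 25
AHA_MAX_G_PER_DAY = 50

def classify_soy_intake(grams_per_day: float) -> str:
    """Classify a daily soy-protein intake against the quoted guidelines."""
    if grams_per_day <= RECOMMENDED_G_PER_DAY:
        return "within typical recommendation"
    elif grams_per_day <= AHA_MAX_G_PER_DAY:
        return "above recommendation, below AHA maximum"
    else:
        return "exceeds AHA maximum"

print(classify_soy_intake(20))  # within typical recommendation
print(classify_soy_intake(60))  # exceeds AHA maximum
```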


Divi RL, Chang HC, Doerge DR (1997). Anti-thyroid isoflavones from soybean. Biochemical Pharmacology, 54(10), 1087-1096. DOI: 10.1016/S0006-2952(97)00301-8

Kumi-Diaka J, Rodriguez R, Goudaze G. Influence of genistein (4′,5,7-trihydroxyisoflavone) on the growth and proliferation of testicular cell lines. Biol Cell. 1998 Jul;90(4):349-54.

Lakshman M, Xu L, Ananthanarayanan V, Cooper J, Takimoto CH, Helenowski I, Pelling JC, Bergan RC (2008). Dietary genistein inhibits metastasis of human prostate cancer in mice. Cancer Research, 68(6), 2024-2032. DOI: 10.1158/0008-5472.CAN-07-1246

Vucenik I, Shamsuddin AM. Cancer inhibition by inositol hexaphosphate (IP6) and inositol: from laboratory to clinic. J Nutr. 2003 Nov;133(11 Suppl 1):3778S-3784S.

Chen WF (2004). Genistein enhances insulin-like growth factor signaling pathway in human breast cancer (MCF-7) cells. Journal of Clinical Endocrinology & Metabolism, 89(5), 2351-2359. DOI: 10.1210/jc.2003-032065

Topical Morphine – An Experimental Approach to Chronic Pain Sun, 22 Feb 2009 15:20:00 +0000 Practitioners who treat patients with chronic pain understand just how difficult managing that population can be. To this day, the evaluation of pain remains completely subjective; no lab test or imaging study can provide meaningful insight into how much pain a patient feels. When the pain is severe and intractable, the only option becomes opioid analgesics, which carry a high potential for abuse and are laden with side effects ranging from constipation and sedation to respiratory depression, testosterone deficiency, and immune modulation. The trouble with opioids is that they are typically ingested orally and absorbed systemically, regardless of where the actual pain lies. Over the past few years, a quiet murmur has been rippling through the pain practitioner community about the possibility of dispensing morphine as a topical agent. Small-scale clinical trials and animal models have shown some promise, and have also raised some questions about this experimental approach.

Converting systemic agents to topical agents is not a new phenomenon, and a few topical agents already exist for the treatment of pain. For example, aspirin-type products are available over the counter in topical cream form. Similarly, topical capsaicin cream and gel are commercially available for the treatment of pain. In the prescription category, topical lidocaine (a relative of the novocaine used for dental anesthesia) was recently approved for chronic pain associated with diabetic peripheral neuropathy and post-herpetic neuralgia (the lingering pain that can follow shingles). More recently, the FDA approved topical diclofenac, an anti-inflammatory similar to ibuprofen, as both a patch and a gel for acute musculoskeletal conditions and pain associated with arthritis.

In a recent issue of Practical Pain Management, a journal for pain practitioners, Dr. Forest Tennant detailed his practice-based clinical trial of a homemade formulation of topical morphine for patients who had failed all other management. The physician simply crushed immediate-release morphine tablets and mixed them into a moisturizing cold cream supplied by the patient, who was then instructed to apply the cream to painful areas as needed. In this observational study, almost all patients reported significant relief of their pain symptoms lasting for hours, with negligible side effects. It is important to bear in mind that these were patients whose severe pain had failed to respond to any other conventional treatment; results of this magnitude are profound in such a refractory population.

Not all the reviews have been positive, however. Some animal models have shown that topical morphine applied to painful skin ulcers is associated with delayed wound healing. Also, while topical diclofenac shows 1-6% systemic absorption depending on the preparation, there are as yet no convincing studies measuring how much topical morphine makes its way into the bloodstream. If that number is too high, patients run the risk of all the adverse effects associated with opioids. Regardless, for patients with intractable pain limited to small areas, topical morphine holds promise as a useful agent when all else has failed.


Tennant, F. Topical Use of Morphine. Practical Pain Management. October 2008.

Rook JM, Hasan W, McCarson KE (2008). Temporal effects of topical morphine application on cutaneous wound healing. Anesthesiology, 109 (1), 130-136.

A Primer on Acupuncture Tue, 03 Feb 2009 15:19:09 +0000 Traditional acupuncture is an ancient Chinese art, passed down from master to apprentice for over 4,000 years and based on anecdotal evidence, trial and error, and an Eastern philosophy of the universe. Its practice was outlawed in China after the Revolution of 1911 in favor of allopathic medicine, during a period when China wanted to appeal to Western civilization. In the 1950s, however, Chairman Mao ordered a reorganization and integration of the two philosophies, and the resulting consensus became what is referred to today as “Traditional Chinese Medicine,” or TCM. TCM is an entire system of medical practice with a primary focus on herbology. Other elements include acupuncture, moxibustion, cupping, scraping, coining, and some bone-setting, all of which are meant to support the action of the herbs.

By contrast, medical acupuncture is a system that treats acupuncture as a complete modality in and of itself, modeled more on the French style of acupuncture energetics. For a full discourse on the origins and rationale behind medical acupuncture, I cannot do better than my teacher, Dr. Joseph Helms, the founder of American medical acupuncture, who played a huge role in importing acupuncture into the United States in the 1970s.

The basic premise behind the Eastern worldview is that the whole universe is filled with and governed by qi (pronounced “chee”). Qi is a sort of vital energy, an essence, an ideal. It has gone by several names across civilizations: prana in yoga, ruah in ancient Hebrew scripture, mana in Polynesian folklore, the Holy Spirit in Christianity, and miwi to the Aborigines. The literal derivation of the character for qi is “the vapor that rises from cooking rice.” Perhaps that gives some insight into the kind of energy implied. In Eastern philosophy, everything has qi, animate and inanimate objects alike.

The expression of all of the universe’s qi can be summed up as a balance between two opposites, yin and yang. This part can become confusing, because Eastern philosophy uses yin and yang to describe ALL opposites. Yin is down, and yang is up. Yin is solid, and yang is liquid. Yin is front, and yang is back. Yin is cold, and yang is hot. In the classic image of yin and yang, yin is black and yang is white. At the center of yin is a little yang, and vice versa, because in Eastern philosophy nothing is 100% one or the other; everything is a shade of gray. As an example, consider yin as down and yang as up. A bird flying in the sky is yang with respect to the ground, but yin with respect to the clouds; the bird is both yin and yang, depending on perspective. If the bird is flying only a few feet above the ground, it is expressing much more yin than if it were flying as high as it could. In the same way, our actions, emotions, and state of mind determine how much yin and yang we express at any given time.

In the body, yin and yang expressions of qi travel along defined channels called meridians. We have 20 main meridians, which are the primary movers of qi, and hundreds of subdivisions that shunt, bypass, and connect them. Eight of the meridians, called the curious meridians, provide the body with polarity and help define the directions in which the other meridians travel; they also exert a degree of influence over the qi of other meridians flowing through their domain. The remaining 12 are the principal meridians, whose qi originates from one of the 12 vital organs of the body (kidney, heart, small intestine, bladder, spleen, stomach, lung, large intestine, liver, gallbladder, “master of the heart” or “pericardium,” and “triple heater” or “san jiao”). Each of these organs produces its own unique qi, with its own purpose and sphere of influence over body, mind, and spirit.

Disease, then, is the disruption of the flow of qi through the meridians, or the over/underexpression of one particular type of qi. Acupuncture, in the simplest of terms, is the diagnosis of which meridians are affected and how they are affected, and then using needles (or other tools) to access the qi of the meridians to help restore balance of flow.

Deep Brain Stimulation for Pleasure Thu, 08 Jan 2009 20:38:58 +0000 Scientists at Oxford University have developed a deep brain stimulation protocol for the orbitofrontal cortex, a small center behind the eyes believed to play a role in our perception of the pleasure associated with food and sex. Dr. Tipu Aziz, a professor of neurosurgery at Oxford, remarks: “A few years ago, a scientist implanted such a device into the brain of a woman with a low sex drive and turned her into a very sexually active woman. She didn’t like the sudden change, so the wiring in her head was removed.” He further comments that a “sex chip” utilizing this technology could be available within 10 years.

Deep brain stimulation is an area of ongoing research in which electrodes are surgically inserted into areas of the brain and a pacemaker is pulsed to activate them. Promising developments have been made using deep brain stimulation for conditions such as Parkinson’s disease, Tourette’s syndrome, phantom limb pain, and refractory major depression. In each of these conditions, the associated symptoms are linked to deficiencies of a very small control region of the brain, which is ideal for this type of modality. In the case of libido and pleasure, scientists have only recently begun to map out the brain processes involved. The orbitofrontal cortex in particular has come under significant scrutiny, as previously very little was known about its function. Dr. Morten Kringelbach, an Oxford psychiatrist, has a small body of research that is beginning to shed light on this area of the brain and its links to our concepts of pleasure and reward.

With so much attention paid to male erectile dysfunction thanks to Viagra and its successors, one would think that female sexual dysfunction would also be studied aggressively. Unfortunately, research and answers have proven difficult to obtain, as sexual arousal appears to be much more multifactorial in females than in males. The deep brain stimulation studies offer some insight into this complicated problem, and should trigger further investigation into the neural controls of sexual response. Hopefully, the research being carried out at Oxford will help bring novel therapies to the millions of women who struggle daily with sexual dysfunction.


Morten L. Kringelbach (2005). The human orbitofrontal cortex: linking reward to hedonic experience Nature Reviews Neuroscience, 6 (9), 691-702 DOI: 10.1038/nrn1747

Kent C. Berridge, Morten L. Kringelbach (2008). Affective neuroscience of pleasure: reward in humans and animals Psychopharmacology, 199 (3), 457-480 DOI: 10.1007/s00213-008-1099-6

Why a Smartphone is a Dumb Idea Wed, 31 Dec 2008 14:34:38 +0000 A week’s worth of New York Times newspapers contains more information than the average person in medieval times encountered in an entire lifetime. In our current golden age of technology, we have come to embrace computers and the idea that information is a commodity that must be available immediately. Products in the technology sector are increasingly complex in scope and connectivity, and give us unprecedented access to an enormous and rapidly growing body of information. Nowhere is this trend more glaringly apparent than in recent developments in mobile phone technology.

Mainstream mobile phones have existed for only 25 years, and according to the CTIA they have steadily gained in popularity. In that time, mobile phones have been upgraded to “smartphones” that include everything from cameras to email, web browsing, GPS navigation, and music and video playback. The past 5-10 years have also seen a meteoric rise in text messaging, with 2007 data showing more than 1 billion text messages sent per day. Why would so much connectivity be a bad idea?

The answer is interruptions. Despite our newfound love of multitasking, the human brain is not very good at it. In 1976, the average adult had an attention span of roughly 15-20 minutes, according to Johnstone and Percival. By most accounts that number has been steadily shrinking, by some studies down to a paltry 7 minutes. When we are interrupted, time is lost in what computing calls a “context switch”: we must re-orient our thought process to the new stimulus, and then a second context switch must occur to return to the original project. As the brain returns to that project, it takes even more time to regain full focus, because our attention has been split between the project and the interruption. Anybody who has had a deep train of thought interrupted by a phone call knows how difficult it can be to recover that idea once the call is over. Interruptions therefore present a formidable challenge to productivity, as confirmed by a Basex study estimating that 28% of our work day is spent handling interruptions, resulting in almost $600 billion worth of lost wages. The more interruptions we have during the day, the less productive we can be, and the less money we can make.
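The interruption arithmetic above can be sketched in a few lines. The 28% figure is the Basex estimate quoted in the text; the 8-hour workday and the 5-minute refocus cost per context switch are illustrative assumptions of mine, not figures from the study.

```python
# Back-of-the-envelope estimate of time lost to interruptions.
INTERRUPTED_FRACTION = 0.28    # Basex estimate quoted in the text
WORKDAY_HOURS = 8.0            # assumed workday length
RECOVERY_MIN_PER_SWITCH = 5.0  # assumed refocus cost per context switch

lost_hours = WORKDAY_HOURS * INTERRUPTED_FRACTION
switches_accounted = lost_hours * 60 / RECOVERY_MIN_PER_SWITCH

print(f"Hours lost per day: {lost_hours:.2f}")
print(f"Equivalent 5-minute context switches per day: {switches_accounted:.0f}")
```

Under these assumptions, roughly 2.24 hours a day evaporate, the equivalent of about 27 five-minute context switches.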

From this standpoint, a device that buzzes or makes noise every time an email, text, voicemail, or picture message arrives is essentially an interruption machine. Establishing and maintaining steady, productive focus is almost impossible under these circumstances. Most interruptions are not necessary, either. Standard email etiquette dictates that messages should receive a reply within 24 hours; simply taking 10 minutes twice a day to check, manage, and respond to email falls well within that parameter. And if a project demands deep concentration and focus, the interruption machine must be turned off, and potential distractors like web surfing limited as much as possible. Operating in this manner can boost productivity and decrease the constant stress of worrying about all things digital.

For the record, mobile phones are not a bad tool. Being able to make and receive an important phone call in a time of need is definitely advantageous, and some of the other functions of modern phones, such as playing music or taking pictures, are excellent from a recreational standpoint. When it comes to work and productivity, however, a smartphone is undoubtedly a dumb idea.

Recommended Reading

Bittman M. I Need a Virtual Break. No, Really. New York Times. March 2, 2008.

Richtel M. Lost in E-Mail, Tech Firms Face Self-Made Beast. New York Times. June 14, 2008.

Beating the Biological Clock – Clinical Trials of Tasimelteon Tue, 09 Dec 2008 16:07:21 +0000 The Lancet recently published clinical trial data from a Harvard study comparing the experimental new drug tasimelteon to placebo in treating jet lag. The medication works by binding to the same receptor as melatonin and activating it as a direct agonist.

Melatonin is a hormone produced by the brain’s pineal gland that is believed to play a pivotal role in regulating our “biological clock,” or circadian rhythm. Melatonin levels fluctuate in the bloodstream throughout the day, surging during the nighttime hours when it is dark outside and our bodies are inclined to sleep. The trouble with melatonin is that it falls under the FDA category of supplements and nutraceuticals, and is therefore largely unregulated in terms of potency, manufacturing process, and quality of ingredients. Studies are hard to conduct and to repeat successfully for this reason as well.

Tasimelteon was studied in a multicenter, randomized, placebo-controlled clinical trial for its phase III testing. The study recruited 411 healthy volunteers aged 21-50, who were randomized to receive either placebo or tasimelteon at 20 mg, 50 mg, or 100 mg. The volunteers were observed in a sleep clinic for 7 nights. For the first 3 nights, they went to sleep at their normal bedtime and received no medication; measurements were taken of the time to fall asleep, the quality of sleep, and the duration of sleep before waking. For the next 3 nights, the volunteers attempted to fall asleep 5 hours before their established bedtime to simulate jet lag, receiving their respective medication or placebo dose 30 minutes before the attempt, with the same measurements taken. On the seventh night, volunteers returned to their established bedtime, again receiving their medication or placebo 30 minutes prior, and the measurements were repeated.
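The seven-night schedule described above can be encoded as plain data; a structure like this makes the design of the trial easy to check at a glance. The field names and phase labeling are my own; the bedtime shift and dosing pattern come from the text.

```python
# Sketch of the seven-night protocol: nights 1-3 baseline (no dose),
# nights 4-6 bedtime advanced 5 hours with dosing, night 7 normal
# bedtime with dosing.
protocol = (
    [{"night": n, "bedtime_shift_h": 0, "dose_given": False} for n in (1, 2, 3)]
    + [{"night": n, "bedtime_shift_h": -5, "dose_given": True} for n in (4, 5, 6)]
    + [{"night": 7, "bedtime_shift_h": 0, "dose_given": True}]
)

dosed_nights = [p["night"] for p in protocol if p["dose_given"]]
print(dosed_nights)  # [4, 5, 6, 7]
```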

The results were that subjects who received tasimelteon fell asleep more quickly, had deeper and more efficient sleep, and slept longer than subjects who received placebo. The benefits were also dose-dependent, increasing as the dosage went from 20 mg to 50 mg to 100 mg. Side effect profiles were similar to placebo across all groups. Given this strong phase III data, the medication may be approved by the FDA and available to the public within the next 2-3 years.

Two to three years may be too long for airline pilots, shift workers, and other frequent travelers who deal with jet lag on a regular basis, and current options for managing jet lag are lackluster. Benzodiazepines, the main option, can be habit-forming and carry several untoward side effects. Hypnotics are also habit-forming and must be timed carefully, because one cannot operate machinery or drive for several hours after taking them. Melatonin has no conclusive clinical data proving that it works, and as mentioned above, obtaining a quality product can be difficult. A medication like tasimelteon would be a welcome addition: it appears to have very few side effects, is not habit-forming, and works by shifting circadian rhythms so that a patient can fall asleep and stay asleep naturally.


Rajaratnam S, Polymeropoulos M, Fisher D, Roth T, Scott C, Birznieks G, Klerman E (2008). Melatonin agonist tasimelteon (VEC-162) for transient insomnia after sleep-time shift: two randomised controlled multicentre trials. The Lancet. DOI: 10.1016/S0140-6736(08)61812-7

Should Doctors Engage in Racial Profiling? Thu, 27 Nov 2008 14:07:32 +0000 The time was June 2000. Scientists at the Celera Genomics Corporation, in conjunction with the international Human Genome Project, announced that they had successfully sequenced the entire human genome. They noted, furthermore, that humans share 99.9% of their genetic code with one another. This discovery served as the platform for the medical community to declare that there was no genetic foundation for the notion of race, and that we were all just human beings.

The problem with this assertion is that the human genome comprises over 3 billion base pairs, so a 0.1% difference between individuals amounts to some 3 million base pairs. A mutation in a single base pair can mean the difference between having a disease and not having one, as in thalassemia, cystic fibrosis, muscular dystrophy, or hemophilia. One can easily see that 3 million base pairs are far from insignificant in differentiating who we are. If members of a nuclear family are closely related genetically and more prone to certain diseases, why wouldn’t groups of people with common ancestral lines share the same propensity?
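The arithmetic in the paragraph above is worth making explicit: 0.1% of roughly 3 billion base pairs is about 3 million. The figures below are the approximations quoted in the text, not precise genomic counts.

```python
# Checking the 0.1%-of-3-billion calculation from the text.
GENOME_BASE_PAIRS = 3_000_000_000  # ~3 billion, as quoted
DIFFERENCE_FRACTION = 0.001        # 0.1% difference between individuals

differing_base_pairs = int(GENOME_BASE_PAIRS * DIFFERENCE_FRACTION)
print(f"{differing_base_pairs:,}")  # 3,000,000
```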

Interestingly, in September 1999, shortly before the Human Genome Project announcement, a retrospective study by Carson et al. was published in the Journal of Cardiac Failure titled “Racial differences in response to therapy for heart failure: analysis of the vasodilator-heart failure trials.” The authors analyzed data from two previous trials of ACE inhibitors, isosorbide, and hydralazine in the treatment of congestive heart failure. When the data were split by race, there was a statistically significant decrease in the response of African American patients to the drug enalapril, and a statistically significant decrease in the effectiveness of hydralazine/isosorbide in white patients. The conclusion of the analysis was that certain drugs work better in some racial groups than in others. In response to this study, the drug carvedilol was studied specifically in African American and white populations to show conclusively that it benefited both groups equally, and this became a marketing point for the pharmaceutical company that manufactures carvedilol.

Should race play a factor in treating hypertension? According to the Joint National Committee on the Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7), the gold standard for treatment, the answer is yes. The end of the report includes a section on “special considerations” in medication selection, and suggests a different course for African American patients and other minorities than for white patients. The implications of this recommendation are obvious: if the medical profession is willing to admit that there are biochemical differences between races, then the ethical line in the sand regarding racial profiling has been blurred. While the role of racial considerations in medicine is undoubtedly controversial, make no mistake that doctors are being advised to treat patients differently based on race.


Carson P, Ziesche S, Johnson G, Cohn JN, for the Vasodilator-Heart Failure Trial Study Group (1999). Racial differences in response to therapy for heart failure: analysis of the vasodilator-heart failure trials. Journal of Cardiac Failure, 5(3), 178-187. DOI: 10.1016/S1071-9164(99)90001-5

Chobanian AV (2003). Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. Hypertension, 42(6), 1206-1252. DOI: 10.1161/01.HYP.0000107251.49515.c2
