Don Simborg, MD – Brain Blogger Health and Science Blog Covering Brain Topics

Race and Genetics (Wed, 28 Feb 2018)

Dare I venture into this politically and emotionally charged issue? When researching questions about our evolutionary biology, as I do, ignoring the question of race as it relates to species would be negligent. On the other hand, trying to discuss it in a short blog post such as this could be considered foolhardy. It sounds like a lose-lose situation. I’ll let you, the reader, be the judge.

Humans have existed for about two million years. In that time, there have been many human species. Homo sapiens emerged about 300,000 years ago and we have been the only human species still living for about 37,000 years. There is little debate today among taxonomists, evolutionary biologists and all the other “ists” that have an opinion on this subject: today all seven billion plus of us belong to one species, regardless of racial, geographic, ethnic or any other classification.

Why do we say that? There are many conflicting definitions of species—referred to in the literature as the “species problem.” There might be some room to argue that the different groups of today’s Homo sapiens that we call “races” could fit one or more of the definitions of species. Even more problematic is the definition of “subspecies”. If the different races don’t qualify as separate species, could they at least qualify as subspecies?

The answer is NO and NO.

A species consists of a group of organisms with a definable set of genetic characteristics, or common gene pool, that evolves independently of all other groups of organisms. A common gene pool is not a precise nucleotide-by-nucleotide definition of a set of genes. Rather, it is a set of genes that perform all the same functions. There will be great variation within these genes among the members of the same species. The “evolving separately” component of the definition implies that there is some barrier to interbreeding with other species, such that when new genetic variants enter the gene pool, they do not intermingle with other species to a large extent. This does not mean that the barrier to interbreeding is absolute. Many species today interbreed with other species to some extent, but by and large, over time, they continue to evolve independently. For example, we now know that Homo sapiens interbred in the past with at least two other human species. With today’s human mobility and facile intermixing of genes among all ethnicities and localities, there is clearly no separately evolving subgroup among us. That is particularly true of the large groupings that we call races.

The notion of subspecies is even more vague and difficult to define. The subspecies level is sometimes equated with “races”. In taxonomy, subspecies are designated with three Latin terms rather than the two that designate a species. There is only one subspecies of Homo sapiens alive today, called Homo sapiens sapiens, and it includes all present day humans. The only other subspecies of Homo sapiens, called Homo sapiens idaltu, is assigned to an extinct group of fossils thought possibly to represent the immediate precursor to today’s modern humans.

With that admittedly superficial background, let’s consider human races. If not separate species or subspecies, is there any genetic basis for categorizing people as African, Caucasian, or any other racial designation? That is, is there any genetic basis for race? One can find virtually any opinion on this subject in the legitimate scientific literature. In a publication in the New England Journal of Medicine, Robert Schwartz states that “race is a social construct, not a scientific classification” and that race is a “pseudoscience” that is “biologically meaningless.”

On the other hand, in the same journal, Neil Risch states that today’s humans cluster genetically into five continent-based groupings that are biologically and medically meaningful.

Are these two points of view really different answers to the same question about genetics and race, or are they answers to different questions? Specifically, can one state that there is no genetic basis for race and, at the same time, state that there are some genetically measurable differences between self-identified racial categories? I think the answer is yes.

Let’s take, for example, the sickle cell trait, which is much more prevalent in people who consider themselves African compared to those who consider themselves Caucasian. Yet the sickle cell trait exists in all races and one could not use it to define African vs. non-African people. In fact, when one looks at the genetic variation within any racial category, it exceeds the variation between racial categories. There is no genetic profile that can define any race.
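To see why within-group variation can dwarf between-group variation, here is a toy sketch in Python. The allele frequencies below are invented purely for illustration, not real population data; the point is only that a variant differing in frequency between two groups still leaves most of the total genetic variance inside each group.

```python
import random
import statistics

random.seed(42)

# Hypothetical frequencies of one biallelic variant in two made-up groups.
# These numbers are illustrative only, not measurements from any population.
freq_group_a = 0.12
freq_group_b = 0.04

def sample_genotypes(freq, n):
    """Sample n individuals; genotype = count of variant alleles (0, 1, or 2)."""
    return [(random.random() < freq) + (random.random() < freq) for _ in range(n)]

group_a = sample_genotypes(freq_group_a, 10_000)
group_b = sample_genotypes(freq_group_b, 10_000)

# Average variance among individuals inside each group.
within = statistics.mean(
    [statistics.pvariance(group_a), statistics.pvariance(group_b)]
)

# Variance attributable to the difference between the two group means.
overall_mean = statistics.mean(group_a + group_b)
between = statistics.mean(
    (statistics.mean(g) - overall_mean) ** 2 for g in (group_a, group_b)
)

print(f"within-group variance:  {within:.4f}")
print(f"between-group variance: {between:.4f}")
# The within-group component is many times larger than the between-group
# component, mirroring the point made above.
```

Real analyses use thousands of variants and formal statistics such as Fst, but the qualitative result for human continental groups is the same as in this toy case: most variation lies within groups.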

Are there clusters of genetic traits that have higher probabilities in one race or another? Certainly. That would be true of other classifications of humans as well, such as classification by size, athleticism, or musical ability. Yes, those who consider themselves African have, on average, darker skin than those who consider themselves Caucasian, but the variation in skin color is great in both groups. For example, the paleogenomic profile of the earliest human fossil found in Great Britain shows that this individual had dark skin, in a geographic area that today consists primarily of Caucasians.

This comes back to the question of species. Aren’t there great variations within species as well? Yes, but they are far less than the variations between species. That is, today’s genomic variation between the various racial groups is less than the variation between Homo sapiens and Homo neanderthalensis. All of today’s human races, no matter how you define them, are clearly Homo sapiens and not Homo neanderthalensis.

This brings me to one final point that can either further clarify or further muddy this entire discussion of race and genetics. Generally, when we talk about genetic comparisons, we have been talking about comparing classical “genes”, the DNA sequences that code for proteins (e.g., the gene for the hemoglobin protein, a variant of which produces the sickle cell trait). It is only in the past decade or so that we have learned that much of the 98% of the human genome that does not code for proteins has a profound effect on our phenotype. That is the epigenome, which regulates the expression of classical genes.

One of the things we have learned about the epigenome is that it can change during the lifetime of an individual based on environmental factors such as diet, stress, and toxins. These changes do not affect the DNA sequence of genes, but they do affect the expression of those genes. More significantly, some of these epigenomic changes are passed on to offspring and can affect generations into the future.

This raises the question of environmental factors related to racial groupings and their impact on genetics. There is evidence, for example, that African-American descendants of slaves have lower birth weight children than African-American descendants of non-slaves, perhaps related to epigenetic factors of stress and diet during slavery. One can imagine many social-cultural factors that may vary by race that could impact the epigenome. Perhaps, when we have the ability to look at the full genome variation among racial groups, our knowledge of genetics and race will change.


R. Schwartz, Racial Profiling in Medical Research, New England Journal of Medicine 344 (2001): 1392.

E. Burchard, E. Ziv, N. Coyle, et al., The Importance of Race and Ethnic Background in Biomedical Research and Clinical Practice, New England Journal of Medicine 348 (2003): 1170.

Lotzof, Cheddar Man: Mesolithic Britain’s Blue-eyed Boy, Natural History Museum website, Feb. 7, 2018.

M. Meloni, Race in an Epigenetic Time: Thinking Biology in the Plural, The British Journal of Sociology 68 (2017): 389.

Image via pixel2013/Pixabay.

Artificial General Intelligence — Is the Turing Test Useless? (Fri, 02 Feb 2018)

Artificial intelligence (AI) is all the rage today. It permeates our lives in ways obvious to us and in ways not so obvious. Some obvious ways are search engines, game playing, Siri, Alexa, self-driving cars, ad selection, and speech recognition. Some not-so-obvious ways are finding new patterns in big-data research, solving complex mathematical equations, creating and defeating encryption methodologies, and designing next-generation weapons.

Yet AI remains artificial, not human. No AI computer has yet passed the Turing Test. AI far exceeds human intelligence in some cognitive tasks like calculating and game playing. AI even exceeds humans in cognitive tasks requiring extensive human training like interpreting certain x-rays and pathology slides. Generally, its achievements, while amazing, are still somewhat narrow. They are getting broader particularly in hitherto exclusively human capabilities like facial recognition. But we have not yet achieved what is called artificial general intelligence, or AGI.

AGI is defined as the point where a computer’s intelligence is equal to and indistinguishable from human intelligence. It defines a point toward which AI is supposedly heading. There is considerable debate as to how long it will take to reach AGI and even more debate whether that will be a good thing or an existential threat to humans.

Here are my conclusions:

  1. AGI will never be achieved.
  2. The existential threat still exists.

AGI will never be achieved for two reasons. First, we will never agree on a working definition of AGI that could be measured unambiguously. Second, we don’t really want to achieve it and therefore won’t really try.

We cannot define AGI because we cannot define human intelligence—or more precisely, our definitions will leave too much room for ambiguity in measurement. Intelligence is generally defined as the ability to reason, understand and learn. AI computers already do this depending on how one defines these terms. More precise definitions attempt to identify those unique characteristics of human intelligence, including the ability to create and communicate memes, reflective consciousness, fictive thinking and communicating, common sense, and shared intentionality.

Even if we could define all of these characteristics, it seems inconceivable we will agree on a method of measuring their combined capabilities in any unambiguous manner. It is even more inconceivable that we will ever achieve all of those characteristics in a computer.

More importantly, we won’t try. Human intelligence includes many functions that don’t seem necessary to achieve the future goals of AI. The human brain has evolved over millions of years and includes functions that are tightly integrated into our cognitive behaviors that seem unnecessary, even unwanted, to build into future AI systems.

Emotions, dreams, sleep, control of breathing, heart rate, monitoring and control of hormone levels, and many other physiological functions are inextricably built into all brain activities. Do we need an angry computer? Why would we waste time trying to include those functions in future AIs? Emulating human intelligence is not the correct goal. Human intelligence makes a lot of mistakes because of human biases. Our goal is to improve on human intelligence—not emulate it.

The more likely path to future AI is NOT to fully emulate the human brain, but rather to model the brain where that is helpful—like the parallel processing of deep neural networks and self-learning—while creating non-human, computer-based approaches to problem solving, learning, pattern recognition, and other useful functions that will assist humans. The end result will not be an AI that is indistinguishable from human intelligence by any test. Yet it will still be “smarter” in many obvious and measurable ways. The Turing Test is irrelevant.

If that is true, why would AI still be an existential threat? The concern of people like Elon Musk, Stephen Hawking, Nick Bostrom, and many other eminent scientists is that there will come a time when self-learning and self-programming AI systems reach a “cross-over” point where they rapidly exceed human intelligence and become what is called artificial superintelligence, or ASI. The fear is that we will then lose control of an ASI in unpredictable ways. One possibility is that an ASI will treat humans similarly to the way we treat other species and eliminate us, either intentionally or unintentionally, as we eliminate thousands and even millions of other species today.

There is no reason that a future ASI must go through an AGI stage to achieve this potential threat. It could still be uncontrollable by us, unfriendly to us, and never have passed the Turing Test or any other measure of human intelligence.


A. P. Saygin, I. Cicekli, V. Akman, Turing Test: 50 Years Later, Minds and Machines 10 (2000): 463.

Musk and Zuckerberg bicker over the future of AI, Engadget, July 25, 2017.

D. W. Simborg, What Comes After Homo Sapiens? (DWS Publishing, September 2017).

N. Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).

Lamarckian Evolution is Making a Comeback (Mon, 08 Jan 2018)

When scientists see the term “Lamarckian evolution”, the usual reaction is that it refers to a long-debunked theory. But that might be changing.

Lamarck was an accomplished biologist living in the late 18th and early 19th centuries. He was an expert on the taxonomy of invertebrates and a highly regarded botanist. He also wrote about physics, chemistry, and meteorology.

He is best remembered for his publication of Philosophie Zoologique in 1809 in which he lays out his theory of evolution. He describes two laws of nature. The first is that animals develop or lose physical traits depending on usage of those traits. For example, giraffes got their long necks because they constantly stretched to reach high leaves in trees during their lifetime. The second law states that these acquired changes during a lifetime are passed on to offspring, i.e., inherited. These two laws explain how species evolve by continual adaptation to their environment and eventually branch off into new species once the changes become large enough—so-called Lamarckian evolution.

There were other interesting aspects of his theories. He believed that there was some natural force, separate from the usage law, that drove organisms toward increased complexity. The wide variety of organisms found in nature, he thought, arose because different life forms appeared spontaneously at different times; thus they do not all evolve from a common ancestor. When gaps seemed to appear in the fossil record in certain lineages, he attributed that to a failure to find all the relevant fossils. His theory clearly assumed gradual and continual evolution, but evolution that was always driven toward greater complexity.

Lamarckian evolution was largely debunked when the work of Gregor Mendel and others later demonstrated that inheritance occurs according to discrete rules of dominant and recessive traits rather than through acquired characteristics. Discoveries in genetics during the 20th century put the notion of inheritance of acquired characteristics further to rest.
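Mendel’s discrete rules are simple enough to enumerate directly. The short Python sketch below works through a generic monohybrid cross between two heterozygous parents (a textbook illustration, not data from Mendel’s actual experiments) and recovers the familiar 1:2:1 genotype and 3:1 phenotype ratios.

```python
from collections import Counter
from itertools import product

# A monohybrid cross between two heterozygous parents, Aa x Aa.
# "A" is the dominant allele, "a" the recessive one.
parent1 = ("A", "a")
parent2 = ("A", "a")

# Each parent contributes one allele; enumerate all equally likely combinations.
offspring = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
genotypes = Counter(offspring)  # {"AA": 1, "Aa": 2, "aa": 1}

# Phenotype follows the dominant allele: any genotype containing "A"
# shows the dominant trait.
phenotypes = Counter("dominant" if "A" in g else "recessive" for g in offspring)

print(genotypes)   # genotype ratio 1:2:1
print(phenotypes)  # phenotype ratio 3:1
```

Nothing in this tally depends on what the parents did during their lifetimes, which is exactly why Mendel’s results undercut inheritance of acquired characteristics.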

BUT, Lamarck has gotten a bit of a reprieve in the 21st century. By 2003, we had completed the Human Genome Project, which told us a lot about our genome and genes, but little about the epigenome. Since then, we’ve learned a lot. The epigenome refers to the 98% of our genome that does not code for proteins (what we traditionally call genes). Instead, much of that huge portion of our genome has to do with the regulation of genes, largely through the coding of various types of RNA. We have between 20,000 and 25,000 protein-coding genes. That’s about the same number as a mouse or even a worm. And many if not most of these genes do pretty much the same thing across a wide spectrum of animals. What makes us different from a mouse or a worm is largely controlled by the epigenome.

It turns out that the epigenome responds to various factors in our environment like diet and toxins. These factors do cause changes in the epigenome during one’s lifetime, which, in turn, cause changes in the expression of various genes. The epigenome does not ever change the DNA sequence of a gene.  The remarkable fact is that some of the epigenomic changes acquired during a lifetime are passed on to progeny through the sperm and egg! Although it is not through the usage of parts of the body as Lamarck proposed, there is evidence of inheritance of traits acquired during a lifetime. One could call that Lamarckian.

Another way that acquired traits could be passed on to progeny in the future will be through germline genetic engineering when and if that becomes acceptable. So perhaps Lamarck was more prescient than we give him credit for.

Lamarck was extremely accomplished and well ahead of his time. He lived long before we understood genetics, and his evolutionary theories preceded Darwin’s. To some extent, he has been given a bit of a bum rap. He got some things right and some things wrong; you can say that about a lot of our great scientists. He did recognize that something in individuals changes through the generations and that those changes interact with the environment. Darwin also theorized that individuals change from generation to generation. Neither understood that these changes first require random genetic changes. Both knew that the environment played a large role in evolution, although Darwin’s natural selection, rather than the usage of body components, is what is generally accepted today as the driving environmental force. Lamarck was wrong about the multiple spontaneous emergences of different life forms at different times, but he was correct that apparent gaps in evolutionary lines reflect an incomplete fossil record.

Let’s give Jean-Baptiste Lamarck his due.


Carey, N. (2012). The Epigenetics Revolution: How Modern Biology Is Rewriting Our Understanding of Genetics, Disease, and Inheritance (1st ed.). Columbia University Press.

Image via Sponchia/Pixabay.

Do Today’s Technological Advances Threaten Our Species? (Mon, 18 Dec 2017)

A lot of public back-and-forth banter has been going on lately between two giant tech personalities: Elon Musk and Mark Zuckerberg. Their public debate centers on whether or not artificial intelligence (AI) represents an existential threat to humanity.

For example, Elon Musk, when speaking at the National Governors Association in July said:

AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that…It’s the greatest risk we face as a civilization [that will cause World War III]

Mark Zuckerberg on the other hand, touts the benefits of AI and says that Elon Musk’s doomsday predictions about AI are “pretty irresponsible.”

This prompted Elon Musk to fire back that Mark Zuckerberg’s understanding of AI is “pretty limited.”

So who is right? Only time will tell, of course, but my science-based speculation says the evidence favors Musk. And greater brains than my own are telling us that artificial intelligence could be the end of Homo sapiens or any other Homo that follows: Bill Joy, Stephen Hawking, Vernor Vinge, Shane Legg, Stuart Russell, Max Tegmark, Nick Bostrom, James Barrat, Michael Anissimov, and Irving Good. Brilliant minds, Nobel Prize winners, renowned inventors, and IT pioneers are all on record giving us warnings.

Of course, other existential threats to Homo sapiens are possible and could come in the form of another bolide impact like the one that doomed the dinosaurs 66 million years ago, or a supervolcano leading to extreme global weather events, a phenomenon that also affected earlier species. Unlike the relentless human pursuit of technologies that could alter, if not eliminate, our species, these threats are essentially out of our control.

Genetic engineering, especially if aided by AI, could lead to the future speciation of Homo sapiens and pose yet another existential threat. Lee Silver, in his book Remaking Eden, envisions a future society practicing an extreme form of behavioral isolation based on genetic engineering. In this society, only a small portion of the population, which he calls the GenRich, have the financial means to genetically enhance their children.

Over decades, the GenRich use genetic engineering techniques to optimize a variety of human traits—such as intelligence, athletic skill, physical appearance, and creativity—that give them a controlling position in society. Over time, cultural disparity between this GenRich minority population and the remaining “naturals” becomes so great that there’s little interbreeding between the two groups. Such a scenario could lead to the genetic development of a postzygotic reproductive barrier.

In other words, genetic engineering could eventually lead to a new species of humans. Once this occurs, the long-term results are unpredictable. This new species—I call it Homo nouveau—like the GenRich, may not be an existential threat, at least in the early centuries or millennia.

It’s uncertain what could happen when two human species try to coexist. We know things didn’t work out very well for the Neanderthals after Homo sapiens arrived. In fact, the same is true for Homo heidelbergensis, Homo erectus, Homo denisova, and every other Homo species that may have coexisted with Homo sapiens.

In considering all the possible existential threats to us humans, genetic engineering is a possibility in the not-too-distant future—say in the next two to four centuries. However, if Elon Musk is right, AI could supersede that in one or two centuries if we’re unsuccessful in controlling it. Then again, at any time we could be hit by a bolide. None of this bodes well for us.


L. Grossman, “2045: The Year Man Becomes Immortal,” Time Magazine, February 10, 2011.

Hawking, S., Tegmark, M., Russell, S. (2017). Transcending Complacency on Superintelligent Machines. Huffington Post.

N. Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).

F. Heylighen, “Return to Eden? Promises and Perils on the Road to Global Superintelligence,” in The End of the Beginning: Life, Society and Economy on the Brink of the Singularity, Ben Goertzel and Ted Goertzel, eds., Humanity+ Press, 2015.

Image via frolicsomepl/Pixabay.
