Daniel Albright, MA, PhD (c) – Brain Blogger
Health and Science Blog Covering Brain Topics

Why Take a Pill When You Can Get a Brain Injection Instead? (April 1, 2014)

Everyone knows that pills are the most common way of administering medicines: we have pills for just about everything. But a company called MRI Interventions, Inc. might be set to change that.

There are a number of reasons why administering pharmaceutical interventions orally is a good idea. First of all, it’s easy. You just tell a patient to swallow a pill. Plus, pills aren’t very expensive to produce, and they’re easy to package and distribute. Most importantly, they work. At least most of the time. While oral medicines are effective in treating a number of disorders in various parts of the body, they haven’t proven to be as effective when dealing with problems based in the brain.

The reason is the blood-brain barrier (BBB), which is formed by a layer of cells that keep molecules in the blood from entering the brain. This helps keep a lot of things that shouldn’t be in the brain out of it, but it also makes it difficult to design drugs that can actually reach the brain. The BBB is extremely selective, and it’s very good at what it does, meaning that a lot of drugs don’t get past it. And if they do, they have the potential to affect the entire brain instead of just the area that’s exhibiting a pathology.

In an effort to get around this problem, MRI Interventions, Inc. designed the ClearPoint system, which allows neurosurgeons to deliver controlled doses of drugs to very specific areas in the brain by using a computer-guided catheter and a device called the SmartFrame trajectory guide, which allows the surgeon to see exactly where the catheter is being placed in the brain. The entire process takes place in an MRI machine, and the trajectory guide is filled with a liquid that shows up on the MRI, meaning that the surgeon can see exactly where the device is placed, how it is oriented, and where it’s pointing.

The ClearPoint software assists the neurosurgeon by monitoring the exact location in space of the trajectory guide to ensure that the catheter is placed at precisely the right point within the brain. And it’s incredibly accurate. The president of MRI Interventions, Inc. states that the system can deliver medicine to a specified point the size of a sesame seed anywhere within the brain of a living subject. That’s significantly more accurate than any other system like this that’s been tested in the past.

Of course, further testing is required before this sort of system might be widely adopted. As with any operation that involves the brain — even a very minor one — there are a number of potential adverse effects. Any bleeding within the brain can be life-threatening, and extreme care has to be taken to ensure that no infectious agents are present. There is little discussion of risk on the MRI Interventions website, though a related article reports the chief of stereotactic and radiosurgery at the University of California San Diego saying that the pinpoint delivery minimizes risk. We will eagerly await more publicized results.

What do you think? Is this the next wave of treatments in brain disorders? Or will it be quickly surpassed by the next up-and-coming technology? Would you be willing to try it?

References

Hock, L. (25 February, 2014) Safer drug delivery to the brain. D Magazine. 

Image via Spectral-Design / Shutterstock.

Encouraging Women to Enter Neuroscience (March 10, 2014)

If you read a lot of neuroscience articles, or even just news about the brain, you’ll likely notice that there’s a significant gender imbalance: almost all of the big names are men. But a 17-year-old girl from Denver is trying to change that.

Grace Greenwald founded The Synapse Project to connect young women with professors and scientists to establish mentor relationships that will help them learn about, develop a passion for, and enter the field of neuroscience. In her own words, the site seeks to “ignite interest in the brain and help the next generation succeed in this rewarding career.”

These mentoring relationships not only create interest in the topic, but provide crucial support and advice that young women need to succeed in pursuing careers in the field. The Project’s website includes information on grants, awards, volunteering opportunities, and jobs that might appeal to young women, further helping them identify potential pathways into neuroscience.

Grace Greenwald is the granddaughter of Glenda Greenwald, the founder of the Aspen Brain Forum, an annual conference that brings together some of the brightest minds in neuroscience and related fields to advance research in science, technology, education, and medicine. Her grandmother’s passion for the study of the brain inspired Grace to start looking for opportunities to learn more, but she quickly found that there were limited opportunities at her high school.

So she founded The Synapse Project, seeking to encourage more girls to go into neuroscience and to call for more support for the subject in high schools, which often don’t teach it, fearing that it would be too advanced for young minds.

Also included on the site are recordings of virtual field trips, which allow a high school classroom to connect with a neuroscience lab via online communication and presentations, giving students a rare and valuable opportunity to look into the day-to-day lives of neuroscientists around the world. (You can organize a virtual field trip for your own classroom by getting in touch with The Synapse Project.)

The Synapse Project is guided by an advisory board of women in neuroscience, including professors at universities such as Harvard, Princeton, Columbia, and Berkeley, among others.

Gender imbalance in science is something that’s often not discussed by members of the field — and although the disparity between male and female researchers and professors isn’t as drastic as it is in the technology industry, it’s still omnipresent. A website dedicated to gender balance in Norwegian science states that about 40% of academic positions are held by women, but only about 20% of full professors are women.

And while these statistics aren’t specific to neuroscience, it’s clear that there’s a gap between genders when it comes to academia. The site doesn’t provide many statistics about research positions outside of academia, but I’d be willing to bet that the gender gap is the same, or possibly even wider, in the private and government sectors. And it’s certainly not limited to Norway.

The Synapse Project seeks to help close the gap between the genders by getting more girls interested in neuroscience. Other organizations have sought to do the same thing, but they’re often limited to issuing statements that boil down to “hey, we need to hire more women as researchers.” (See, for example, the European Commission’s Gender and Research policy initiative.) This initiative takes a proactive attitude toward putting young women in touch with neuroscientists, getting them interested in the field, and helping them find opportunities to progress.

What do you think? Will the Synapse Project help address the gender imbalance in neuroscience? Or will the prevailing attitudes of academia stifle the potential growth created by the initiative?

Resources

The Synapse Project

Gender Balance in Norway—Research, Statistics

Image via luchschen / Shutterstock.

Exploring the Next Frontier – The Human Brain Project (February 23, 2014)

Human exploration has long been concerned with travelling outward as far as possible — the edge of the continent, around the world, outside the solar system. But a new frontier is about to be explored in a big way: the human brain.

Not too long ago, I posted about the K computer simulating 1% of the human brain and how that was a really big deal. That still stands as the closest we’ve come to simulating the human brain in any way, but the European Commission’s Human Brain Project aims to go a lot further: they want to create “a unified picture of the brain as a single multi-level system.”

Exactly what this means isn’t clear, but it’s going to result in some monumental advances in neural computing and brain simulation. They don’t say anywhere that they’re trying to simulate the entire human brain, but I’ve heard it said that this might be among their goals. And because this is a ten-year project, they might actually have a shot at doing it (or at least getting significantly closer than we’ve ever come before).

The project is focused on six distinct areas: neuroinformatics, brain simulation, high-performance computing, medical informatics, neuromorphic computing, and neurorobotics. I think it’s fair to say that we’ll be seeing some serious innovation from this group. Although the entire project is ambitious and exciting, the part of it that will likely be of most interest to neuroscientists is sub-project SP6, brain simulation.

In this sub-project, researchers will be striving to create an internet-based, collaborative system that will simulate the brain at a number of levels, from abstract computational models all the way down to molecular-level models. And these models will grow in complexity — and increase in neuroanatomical accuracy — as more research is released in the fields of computational neuroscience, machine learning, and neuroanatomy and these findings are integrated into the system. Obviously, this will be a hugely valuable tool to neuroscientists around the world.

One of the interesting strategies that HBP will be using to create more detailed and accurate models of the brain is analyzing the brains of mice and developing methods to extrapolate this information to the human brain. Data will be collected on the numbers and configuration of neurons, the vasculature of the brain, principles of brain mechanics, and synaptic maps. Once this data has been collected, the scientists at HBP will develop models that allow the data gathered from mouse brains to inform our understanding of the human brain.

On the computational side, HBP will be building a neuromorphic computing platform that will be accessible from around the world. Although the HBP website is quite difficult to understand (as it’s written in grant proposal language), this platform seems to be a highly accessible place for running large-scale simulations that aren’t as brain-focused as the brain simulations mentioned a couple paragraphs ago. I’m hoping that this is a hugely powerful neural network creation and training device that will help the machine learning and big data fields build more comprehensive and accurate models.

No matter what area of neuroscience you’re interested in, the Human Brain Project is very exciting. From mapping the brain’s axonal connections to simulating its structure to using neural network technology for informatics, it’s going to be a highly innovative project that could shake up a number of fields of research within the next decade. Which part of the HBP are you looking forward to most?

Learn more about the Human Brain Project here.

Image via solarsevens / Shutterstock.

Seasonal Affective Disorder – Created By a Productivity-Centered Society? (February 15, 2014)

Living in England, I’ve spent the last few months in very dreary weather. It rains a lot, it’s cold, and it’s often cloudy for a good portion of the day, if not the vast majority. It’s just a part of living here — when you decide to move to England, it’s something that you’re aware of. People make jokes about the English weather being bad, but in the winter, they have it right.

Lately I’ve been feeling like I might be dealing with some symptoms of seasonal affective disorder (SAD). I’m much more tired than usual, despite a reduced training load; I find it harder to gather motivation to write; and I spend a lot more time sleeping (which doesn’t seem to help my tiredness).

After realizing that this could be connected to the weather, I bought a full-spectrum lamp (also known as a “SAD lamp”) to see if it might help. I wake up to it every morning and I keep it on my desk so I can get a blast of sunlight every once in a while throughout the day. It seems to be working — a lot of people find that light therapy is effective, especially if their SAD isn’t very severe.

SAD seems to be getting more recognition as time goes on — more people are talking about it, and more people are seeking treatment for it, though it can be very difficult to diagnose.

Anyway, I was thinking about SAD and my symptoms today. As far as depressive symptoms go, mine are pretty mild; it’s mostly manifested in energy and motivation. As a writer and a graduate student, that can have a big impact on my work. Trying to be a writer while studying for a PhD isn’t easy, and it requires a lot of focus to balance both of those responsibilities. A lot of times, it’s pretty easy. In the winter in the UK, it’s not so easy.

Today, when the weather went from sunny to rainy, I immediately found it more difficult to keep writing. I was reflecting on this a little bit when I started thinking about other creatures. Bears, marmots, hedgehogs and bats all hibernate through the winter to some degree or another, and no one says that they have SAD! There probably aren’t any marmots that get stressed about the productive time that they’re missing while they’re hibernating in the alpine winter.

But here we are, concerned about SAD and how it affects our motivation and how much we can accomplish in a day. Maybe seasonal affective disorder isn’t a disorder at all — maybe it’s just a natural thing that a lot of animals go through during times of reduced light. Melatonin, serotonin, and the body’s circadian rhythm are all connected to sunlight, and it makes sense that organisms would have decreased energy and motivation during the winter.

Don’t get me wrong — there are definitely some people who suffer from full-blown depression during the winter, and that’s definitely a disorder. But maybe being tired and less motivated helped early humans conserve energy throughout the winter when food was scarce and it was more dangerous to be out in the open. And there are a few researchers who think this is the case.

Interestingly, melatonin, one of the hormones affected by SAD, is also one of the things that regulates hibernation in animals. One study found that melatonin ceased to fluctuate on the expected 24-hour cycle in hibernating hamsters; it was elevated around the clock. Melatonin secretion increases in the dark, and darkness lasts much longer during the winter.

So it seems like SAD might not be such a disorder after all. What do you think? Has SAD been pathologized with the rise of a society that’s ultra-focused on the idea of being productive at all times? Or am I just trying to rationalize my being tired and unmotivated to get away from writing my dissertation and cleaning my flat?

References

Revel FG, Herwig A, Garidou ML, Dardente H, Menet JS, Masson-Pévet M, Simonneaux V, Saboureau M, & Pévet P (2007). The circadian clock stops ticking during deep hibernation in the European hamster. Proceedings of the National Academy of Sciences of the United States of America, 104 (34), 13816-20 PMID: 17715068

Image via Jim Lopes / Shutterstock.

Supercomputer Simulates 1% of the Brain – What’s Next? (February 5, 2014)

Neural networks are used in neuroscience to create models that could potentially explain some cognitive phenomena. For example, many researchers have built models that create pretty accurate representations of child language acquisition. These networks can, essentially, learn new words and meanings, and their learning trajectory follows that of a typical child.

Neural networks have also been used to study the hemispheric lateralisation of letter recognition, the label-feedback hypothesis, and spreading-activation conceptual networks. (Neural nets are also used as machine-learning algorithms in other fields, but I will leave that discussion for another time.)

One point of contention about neural networks is that we never really know if they accurately represent the brain: the way that they are created is heavily influenced by neuroanatomy, and includes nodes that represent neurons and weights that represent neural connections, but it is impossible to accurately model all of the billions of neurons and trillions of connections in the brain.

So how do we know whether what we are modeling is a good representation of the brain? The answer is that we don’t. Through testing, though, we can compare the results of a neural network with the results of human learning, and if they match up, it is generally accepted that the neural network accurately represents the cognitive phenomenon it was built to study.
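To make that comparison concrete, here is a minimal sketch of the idea (a sketch only: the learning curves are invented for illustration, and real studies compare things like vocabulary growth or error patterns across many children and many simulation runs):

```python
# Sketch: does the model's learning trajectory track the human one?
# Both curves below are invented purely for illustration.
import numpy as np

child_vocab = np.array([5, 10, 22, 40, 80, 150, 240, 350, 480, 620])  # words known per month
model_vocab = np.array([3, 12, 25, 45, 85, 160, 230, 360, 470, 610])  # words the network has learned

r = np.corrcoef(child_vocab, model_vocab)[0, 1]
print(f"Correlation between the two trajectories: r = {r:.3f}")
# A close match is taken as (tentative) evidence that the network captures
# something real about how children acquire words.
```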

Researchers at RIKEN, a Japanese research institute, are undertaking a project using the K computer (the fourth most powerful supercomputer in the world) to simulate neural activity on a scale that has never been attempted before. They modeled 1.73 billion nerve cells and 10.4 trillion connections. That is a fantastically huge number, though it falls far short of the 86 billion neurons recently posited for the brain. One of the collaborators in the project reports that they modelled about 1% of the brain. Even so, that is a huge accomplishment.

So what did this simulated brain compute? As far as I can tell, pretty much nothing. After 40 minutes of using 82,944 processor cores and about a petabyte of memory, the K computer had simulated approximately one second of brain activity. That is 40 minutes of time on one of the world’s most powerful supercomputers for a single second of brain activity. Puts the complexity of the brain in perspective, does it not?
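As a back-of-envelope illustration of that gap, using only the figures reported above plus a deliberately naive linear extrapolation of my own:

```python
# Back-of-envelope arithmetic using the figures reported for the K computer run.
wall_clock_seconds = 40 * 60   # 40 minutes of supercomputer time
simulated_seconds = 1.0        # one second of biological activity simulated

slowdown = wall_clock_seconds / simulated_seconds
print(f"Slowdown factor: {slowdown:.0f}x slower than real time")  # 2400x

# Naive linear extrapolation (a big simplification; real scaling would be worse):
# how long would one hour of activity take at this same scale and speed?
days_for_one_hour = (3600 * slowdown) / 86400
print(f"~{days_for_one_hour:.0f} days of compute per simulated hour")  # ~100 days
```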

Even though this test was designed as a test of the programmers and hardware at RIKEN, it brings up some really interesting questions about neural networks, what they can do, and how rapidly we are improving our ability to simulate the brain. According to some estimates, we will be able to simulate the entire brain — down to individual neurons and synapses — within the next decade or so. To do this, we will need an exa-scale computer (the scale of which is completely beyond my comprehension).

Personally, I am not hugely hopeful about simulating the entire brain anytime in the foreseeable future. Even if we have the hardware capability, we still have to have the neuroanatomical knowledge, software power, and programming ability to make it all work together. This is no small task, even with the impressive self-organizing powers of neural networks. But technology is advancing at an unbelievable rate, so who knows? Maybe we will see a computerized brain in the next 20 years. What do you think? What comes next for neural network computation? Is there any ceiling for what it can accomplish?

The world will be watching this technology closely in the coming years. You can look forward to some really exciting and interesting developments!

References

Sparkes, M. (January 13, 2014) Supercomputer models one second of brain activity. The Telegraph.

Image via agsandrew / Shutterstock.

Linguistic Relativity Today (February 1, 2014)

Linguistic relativity is the idea that the language you speak affects how you think. A lot of people know this as the “Sapir-Whorf hypothesis” or “Whorfianism” after two of its earliest proponents, Edward Sapir and Benjamin Whorf. Many people think that linguistic relativity has died out, that it has been disproven, or that it is generally accepted as nonsense. This is far from the truth.

However, the focus of linguistic relativity has changed radically. Previously, it was about “worldview,” a nebulous term that few people took the time to really develop. Today, researchers are looking into specific cognitive effects of language, and in very specific areas. For example, my supervisor has studied the difference in color perception between speakers of Greek (which has twelve basic color terms) and English (which has eleven). He found that there are slight differences in how they perceive color (if you are wondering how this is measured, it is often through reaction times on various tasks).
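For anyone curious what “measured through reaction times” looks like in practice, here is a minimal, hypothetical sketch of the kind of group comparison involved (the reaction times are invented, and real studies use many trials per participant and more careful statistics):

```python
# Illustrative only: invented reaction times (ms) on a colour-discrimination task.
from scipy import stats

greek_rt = [512, 498, 530, 505, 521, 490, 515, 508]     # hypothetical Greek speakers
english_rt = [545, 538, 560, 529, 551, 542, 536, 548]   # hypothetical English speakers

t, p = stats.ttest_ind(greek_rt, english_rt)
print(f"t = {t:.2f}, p = {p:.4f}")
# A reliable group difference in speed on a purely perceptual task is taken as
# indirect evidence that the two groups perceive the colours differently.
```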

I have also mentioned Lera Boroditsky several times in my posts — she does a lot of research into metaphors of space and time and how they differ between languages. She has also found that this affects how people think about time. More recent investigations have been into motion perception, and have included languages ranging from English and Spanish to German, Czech, Swedish, and Algerian Arabic. The differences in these languages affect how speakers think about motion events that they perceive.

You may be wondering how we can tell that people think about something differently. This is a point of some contention, but there is a general consensus among researchers in this field that certain tasks are cognitively non-linguistic, meaning that linguistic parts of the mind and brain are not engaged in them. (Though this assumes that there are, in fact, processes that are non-linguistic, which some people disagree with.) And, depending on who you ask, this is the kind of thought we are talking about when we say that two people “think differently.”

For example, similarity judgment is often touted as a non-linguistic task: If you are presented with three different items, you can, theoretically, choose the two that are most similar without using language in any way. Memory is another one: The things that you remember are not always tied to language. Of course, these things are open to debate, but there is a general, if tentative, agreement on them at the moment. And, as I mentioned earlier, verbal interference tasks also prevent the use of linguistic information during a task.

These are the kinds of things that researchers have participants do to determine “how they think”. There are a lot of factors at play here, and there is a lot of room for debate over how valid these methods are.

The kinds of things that are being studied by relativity researchers today are quite minor in the grand scheme of things. What is the big deal if two different language groups think about color a little differently? If speakers of Swedish are slightly more likely to remember the endpoint of a movement than speakers of English, what does that tell us about anything?

I don’t have good answers to these questions — I can only say that we are looking into it. If nothing else, we are gaining a better appreciation of cognition and how the human mind works, which is certainly a good thing. The more we understand the mind, the brain, and the relationship between the two, the better we will be equipped to answer questions (potentially those concerned with pathology) about them in the future.

Anyway, I felt compelled, as a linguistic relativity researcher, to write this to help shed some light on what might be a few misperceptions on the field. I look forward to reading your comments and answering any questions you might have on this topic!

References

Athanasopoulos P, & Bylund E (2013). Does grammatical aspect affect motion event cognition? A cross-linguistic comparison of English and Swedish speakers. Cognitive science, 37 (2), 286-309 PMID: 23094696

Boroditsky L, Fuhrman O, & McCormick K (2011). Do English and Mandarin speakers think about time differently? Cognition, 118 (1), 123-9 PMID: 21030013

Gilbert AL, Regier T, Kay P, & Ivry RB (2006). Whorf hypothesis is supported in the right visual field but not the left. Proceedings of the National Academy of Sciences of the United States of America, 103 (2), 489-94 PMID: 16387848

Slobin, D. I. (1996). From “thought and language” to “thinking for speaking”. In J. J. Gumperz, S. C. Levinson (Eds.), Rethinking Linguistic Relativity (pp. 70–96). Cambridge: Cambridge University Press

Image via Ben Chart / Shutterstock.

How Do We Think About Pitch? (January 30, 2014)

In linguistic relativity research, there is quite a bit of literature on metaphors and how they affect our perceptions of the world. Metaphors are built on language, and if it can be shown that people use those metaphors to think with, that would be taken as pretty solid evidence that language affects the way we think.

I have written a bit about the psychology and neuroscience of music in the past, and I am going to continue in that vein in the near future with a few more posts. I think this is a really fascinating area of study, and I have been reading some great research lately that needs to be shared. Lera Boroditsky has done some great work on metaphors of space and time, and if you are interested, I recommend you look her up.

A new study came out last year that brought metaphor research into the realm of music — more specifically, to pitch perception. The researchers used speakers of Dutch (which uses “high” and “low” to describe pitches, as we do in English) and Farsi (which uses “thin” and “thick” to describe these pitches) to see how language might affect the perception of pitch.

The experimenters played tones to each group and asked them to sing the tone back at the correct pitch. The catch is that a line was displayed on a screen while the original tone was played, and this was shown to affect how the subjects perceived the pitch! For example, when Farsi speakers saw a thick line, they were more likely to sing the tone back at a lower pitch than it was played at. And when it was a thin line, they sang it higher.

In the second phase of the experiment, Dutch speakers were trained to use the Farsi thick/thin distinction to describe pitch — they then underwent the same experiment. Perhaps unsurprisingly, they showed the same pattern that the Farsi speakers did in the first part of the experiment, suggesting that language experience affects pitch perception. This held even when the participants underwent verbal interference, preventing them from covertly labeling the lines on the screen as “thick” or “thin.”

So, it is clear that language affects low-level perceptual processing of pitch (i.e. participants actually perceive the pitch to be different based on their language experience). But are these metaphors created by the languages we speak?

The final part of the experiment sought to find out by teaching Dutch participants to use the opposite of the Farsi system: they were trained to describe low pitches as “thin” and high pitches as “thick.” Interestingly, after this training phase, there was no effect of showing them a thin or thick line during the playing of the tone.

So what does this all mean? The last part of the experiment shows that it is not just language that affects these metaphors; there is something else going on.

The authors of the study posit that the metaphors used by different languages draw attention to metaphors that are already in place. In essence, people are born with a set of innate conceptual metaphors that they use to think about pitch, and different languages base their linguistic metaphors on those inborn conceptual ones. Put simply, we are born thinking about low-frequency pitches as both “low” and “thick,” and the language we speak determines which of these metaphors we continue to think with.

Pretty heavy stuff — if we are born with these two metaphors already in place, what other metaphors are we born with? Where do they come from? As far as I know, no one has any idea. But as soon as someone comes up with a theory, I’ll let you know!

References

Dolscheid S, Shayan S, Majid A, & Casasanto D (2013). The thickness of musical pitch: psychophysical evidence for linguistic relativity. Psychological science, 24 (5), 613-21 PMID: 23538914

Image via Alpha Spirit / Shutterstock.

Some Potential Implications of the Label-Feedback Hypothesis (January 8, 2014)

The important part of Lupyan’s theory is that the effect of language on thought takes place online — it does not create long-lasting changes in cognition or perception (which is why it can be disrupted by aphasia). This is in contradiction to previous theories that have been used to support the idea of linguistic relativity, which is what makes the label-feedback hypothesis so interesting.

This is the third post in a three-part series on Gary Lupyan’s label-feedback hypothesis. Before getting into the interesting psychology of the label-feedback hypothesis, I would like to suggest that you read the previous two posts in the series, Is Linguistic Information Part of Every Cognitive Process? and Language Interference and Cognition.

Now that you are up to speed on the theory, I would like to point out why this is such a big deal. If the label-feedback hypothesis holds, this means that linguistic processing, unless it is disrupted by aphasia or an experimental condition, is active in just about every cognitive process. And if that is true, the line between linguistic and non-linguistic cognition has been significantly blurred, if not altogether annihilated. Exactly what that means is open to interpretation, and it is awfully complicated to think about.

The mechanism that Lupyan proposes for this linguistic involvement is an interactive cognitive processing model, in which different mental processes interact to create the phenomena that we consider under the umbrella term “cognition.” And if he is right, then thinking and learning could be much more complex than we initially imagined. While I am not an expert in cognitive models, I would think that trying to pin down any specific effects or phenomena would be significantly more difficult if we assume that any number of processes could be affecting each other simultaneously and automatically.

Of course, this could also affect how we do cognitive research. If researchers can come up with a theoretically viable account of which processes are interacting, we could, by process of elimination, support or refute the idea through interference-based experiments. This can be seen in Lupyan’s experiments that I alluded to previously — he had a theoretical hypothesis that language affected categorization, he used interference to eliminate the linguistic effect, and he saw that categorization was performed differently.

Finally, and possibly most importantly, the label-feedback hypothesis has something very interesting to say about how concepts are stored in our brains. Lupyan holds that there is no core concept of a category; for example, there is nothing in our mind that holds the prototypical essence of a dog. Instead, whenever we are making a classification decision about dogs, we combine previous knowledge — dogs we have seen before — with current task demands, which could include telling the difference between dogs and cats, or dogs and foxes, or dogs and other dogs.

In this way, the label-feedback hypothesis actually proposes a very new kind of cognition: One that is modulated by multiple online processes, including language, and is affected by knowledge gained in the past and action undertaken in the present. I am still trying to fully understand and conceptualize this, as it is a significant departure from classical theories as well as many modern ones. Lupyan and others will certainly be putting a lot of time and energy into working out whether or not this is a viable model for human cognition, so watch for more news in the near future.

As I continue to study Lupyan’s ideas and those of others that think similarly, I will try to keep you updated on what I find out. And if you have any suggested readings or potentially useful information, please share!

References

Lupyan G (2012). Linguistically modulated perception and cognition: the label-feedback hypothesis. Frontiers in psychology, 3 PMID: 22408629

Image via OrganAlle / Shutterstock.

Language Interference and Cognition (January 5, 2014)

At the end of the last post, I stated that linguistic interference was often used as an argument against the interaction of language and thought, but that Lupyan turns this around and uses it as support for this very theory. Let us take a look at how this works.

This is the second post in a three-part series on Gary Lupyan’s label-feedback hypothesis. Before getting into the interesting psychology of the label-feedback hypothesis, I would like to make sure that you have read my previous post, Is linguistic information part of every cognitive process?, which lays out the basics of this interesting theory.

First, a quick overview of linguistic interference. When experiment subjects are taking part in a task, they are generally free to use any sort of cognitive strategy they want. If they are doing a memory task, they can repeat the name of one of the items over and over to help them remember it, for example. If they are classifying images, they can give them labels — they can think of one as “the ladder one” and another as “the shoe one.”

Linguistic interference seeks to disrupt these strategies by requiring the participant to engage in a linguistic task throughout the experiment, essentially “using up” their available linguistic resources so they cannot be used during the task. An example of this is repeating nonsense syllables over and over. The idea is that if you have to say “la lo li do ba da na lu na” repeatedly throughout a task, you will not be able to apply linguistic labels to any parts of the task.

Previous research has repeatedly used this fact to look into the relationship between language and thought. For example, the words that English speakers and Greek speakers use to describe motion are different (the exact grammatical difference is a bit complicated; if you are interested, leave a comment and I can explain it). One study applied both linguistic and non-linguistic interference in an effort to see how it would affect participants. They concluded, based on their results, that linguistic strategies were only used in high-cognitive-load situations, and that it was a transient phenomenon. They argued that this provided evidence for the separation of language and thought.

Lupyan, however, sees it differently. In his hypothesis, language is always involved in categorizing unless it is disrupted, which means that, in everyday situations, language is affecting our “non-linguistic” cognition. It is only in cases of aphasia or interference that it goes away. Essentially, he is coming at the problem of language and thought from the other direction. Researchers who take the interpretation above tend to think of adding language to the non-linguistic process of categorization as the exception to the rule, while Lupyan considers the subtraction of language as the exception.

The label-feedback hypothesis essentially does away with the idea of a separation between linguistic and non-linguistic thought, which will likely make a number of cognitive scientists and psycholinguists quite uncomfortable. It looks to me like he might be onto something, though: The reconceptualization of linguistic interference and the reinterpretation of its effects are certainly appealing, and the highly interactive cognitive system that he proposes does seem to agree with some newer models of both neural computation and distributed cognitive processing.

What do you think? Is there a difference between linguistic and non-linguistic thought? Do they interact?

Check back soon for the final post in this series, where I will discuss some of the cognitive implications of Lupyan’s theory.

References

Lupyan G (2012). Linguistically modulated perception and cognition: the label-feedback hypothesis. Frontiers in psychology, 3 PMID: 22408629

Trueswell, J. C., & Papafragou, A. (2010). Perceiving and remembering events cross-linguistically: Evidence from dual-task paradigms. Journal of Memory and Language, 63(1), 64–82. doi: 10.1016/j.jml.2010.02.006

Image via Hasloo Group Production Studio / Shutterstock.

Is Linguistic Information Part of Every Cognitive Process? (December 30, 2013)

When you think about a cup of coffee, what exactly are you thinking about? What sort of representations are you accessing in your memory? Tactile? Olfactory? The phrase “cup of coffee”? What makes the concept of a cup of coffee different from other concepts? These are all very difficult questions, and they get at one of the core issues of psycholinguistics: the relationship between language and thought.

This is the first post in a three-part series on Gary Lupyan’s label-feedback hypothesis.

I read and write about this topic a lot, but I recently came across a new perspective on this issue that I found to be very intriguing and compelling, if a bit hard to wrap my head around. This is Gary Lupyan’s “label-feedback hypothesis”, which states that verbal labels are co-activated when we activate conceptual representations — which means that if you think about the smell of a cup of coffee, you are activating the linguistic representation of the word “coffee” as well as the conceptual representation. I will come to why this is important in a moment.

Back to the label-feedback hypothesis and how it works: Let us talk about categorization for a second. When you see something — say a motor vehicle — you automatically categorize it. Is it a car? A van? A motorcycle? A bus? A scooter? A pickup? We know that this happens very quickly and without any conscious thought, but exactly how it happens is not well-understood. One of the common models is an exemplar model, in which objects are essentially defined by diagnostic features — for example, one diagnostic feature of a motorcycle is that it has two wheels. Maybe a sliding door is a diagnostic feature of a van.

Anyway, it is easiest to think about categorization as comparing what you see to a (real or imaginary) object that contains the diagnostic features. For example, one of the diagnostic features of cow could be that it has spots. If this is the case, when you see an animal, and it has spots, you will be more likely to classify it as a cow. If it neighs, you’ll be more likely to classify it as a horse, as this is a diagnostic feature of a horse.

In Lupyan’s model, when you activate a conceptual representation, the linguistic label of that category is also activated, and this highlights the diagnostic features of categories, essentially making categories seem more different than each other.
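As a toy illustration of that idea (only an illustration, not Lupyan’s actual model; the features, weights, and “label boost” are all invented for the example), imagine categorization as a weighted feature match in which an active verbal label amplifies the diagnostic features:

```python
# Toy sketch of feature-based categorization with a "label boost".
# Feature values, weights, and the boost factor are invented for illustration.
categories = {
    "cow":   {"has_spots": 1.0, "neighs": 0.0, "moos": 1.0},
    "horse": {"has_spots": 0.1, "neighs": 1.0, "moos": 0.0},
}

def similarity(observed, prototype, boosted_features=(), boost=2.0):
    """Weighted feature overlap; features highlighted by a verbal label count extra."""
    score = 0.0
    for feature, value in prototype.items():
        weight = boost if feature in boosted_features else 1.0
        score += weight * value * observed.get(feature, 0.0)
    return score

animal = {"has_spots": 1.0, "neighs": 0.0, "moos": 0.8}  # the thing we just saw

# Without a label, compare the animal against each prototype as-is.
plain = {name: similarity(animal, proto) for name, proto in categories.items()}

# With the label "cow" active, its diagnostic features are exaggerated,
# pulling the two categories further apart -- the label-feedback idea.
labelled = {name: similarity(animal, proto, boosted_features=("has_spots", "moos"))
            for name, proto in categories.items()}

print(plain, labelled)  # "cow" wins in both cases, but by a wider margin with the label
```

The point is only the shape of the mechanism: the label does not add new features, it re-weights existing ones, which is roughly how the hypothesis blurs the line between linguistic and non-linguistic processing.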

This is a lot of heavy cognitive talk, but what it comes down to is that it is the linguistic label that helps us categorize. The evidence for this comes from patients who have undergone brain trauma and have decreased linguistic abilities. Interestingly, people who are engaging in linguistic tasks (like repeating a string of nonsense syllables) behave quite similarly to aphasic patients in that they have difficulty in categorizing objects when they have to abstract qualities over sets of objects.

For example, when shown three pictures, two of red things and one of a blue thing, they have trouble grouping the red things together and saying that the blue one is the odd one out. However, if the objects can be grouped by theme — for example, a car, a truck, and a flower — the participants have much less trouble identifying the odd one out.

The fact that linguistic interference disrupts the putative effect of language on non-linguistic thought can be seen as a challenge to traditional formulations of linguistic relativity, but Lupyan’s label-feedback hypothesis turns this challenge on its head and uses it as support for this particular instantiation of linguistic relativity.

More on how this works and what it means in the next post!

References

Lupyan G (2012). Linguistically modulated perception and cognition: the label-feedback hypothesis. Frontiers in psychology, 3 PMID: 22408629

Image via Elena Elisseeva / Shutterstock.

What Does It Mean to Know What Something Is? (November 10, 2013)

This is a surprisingly difficult question to answer. And it is an even more difficult one to quantify.

Think about it — you know what Neptune is, but you have never seen it. You have never touched it, or used it in any way. It doesn’t have any function at all for you. But you would still say that you know what it is. Is it because you can hold a picture of it in your mind? Then what about things like valor or honesty? You can picture people acting with valor or honesty, but you cannot picture those qualities themselves.

Fortunately, neuroscience can give us some tiny insights into the philosophical question of what it means to know what something is. A recent article concentrated on this question on a much smaller scale and sought to determine what kind of knowledge is activated when a concept is brought to mind. Here is how it worked.

Participants were first given a similarity judgment task — they were shown two different pictures of knots and asked to say whether or not the two pictures were of the same knot (if they were, they were taken from different angles). None of the participants had any sort of extensive experience with knot-tying. The researchers took baseline fMRI readings from this task.

Next, the participants learned 30 different knots in different ways. For the first set of 10 knots, the participants learned the name of the knot — they saw a video of the knot being rotated in space along with the name of the knot.

The next 10 knots they learned to tie by watching a video of the knot being tied. Finally, they learned the names and how to tie the last 10 knots. Now each participant had sensorimotor experience for 10 knots, linguistic experience for 10 more, and both types of experience for a further 10. There were also 10 more knots used in the test phase that the participants had never seen before.

The test phase consisted of the same distinction task as the experiment started with. Now, however, neuroimaging comparisons were made between the knots for which participants possessed different kinds of knowledge.

From the “tying” condition, researchers discovered that participants showed increased activation in the intraparietal sulcus (IPS), which is traditionally associated with tool use, suggesting a sort of “pragmatic” knowledge representation. In contrast, the knots in the “naming” condition showed very little increased activation in the expected, linguistically oriented areas of the brain.

The authors offer a few possible explanations for this distribution of activity, and point out several factors that may have limited their ability to detect a linguistic component of object representation in the brain. One interesting point they bring up is that the name of the knot has no semantic value; it is simply a label — and previous experiments have shown that a descriptive, semantically valuable label is more likely to create a neurological response.

Anyway, the experiment provides some really great evidence for embodied cognition and the effect that it has on object knowledge; even when the participants were not asked to retrieve any functional or procedural information about the knots, they still activated areas of the IPS, suggesting that this sort of information is automatically activated when an object is perceived and recognized.

So, in the fashion of journalists who over-interpret and sensationalize scientific results, I will say that this proves that you do not really know what Neptune is.

What are your thoughts? I am interested to hear any alternative theories on why the linguistic parts of the brain were not activated to the expected levels.

References

Cross ES, Cohen NR, Hamilton AF, Ramsey R, Wolford G, & Grafton ST (2012). Physical experience leads to enhanced object perception in parietal cortex: insights from knot tying. Neuropsychologia, 50 (14), 3207-17 PMID: 23022108

Image via PeterPhoto123 / Shutterstock.

Space and Time in the Bilingual Mind (November 7, 2013)

If you read my posts on a regular basis, you probably know that I am quite interested in bilingualism and its effects on cognition. Another person who conducts a lot of studies in things like this is Lera Boroditsky, one of the most public-friendly academics in psycholinguistics. Boroditsky recently released an interesting paper with Vicky Lai that I thought I would report on here for you.

Something that has been on Boroditsky’s radar for quite a while is the relationship between space and time as it is mediated by language, which is primarily through metaphor. For example, in English, we have two different metaphorical views of time. The first is an ego-moving view, in which time is stationary and we are moving along it. This is the metaphor we are using when we say “we’re coming up on the deadline.” The time-moving view, in which we are stationary and time is moving, results in things like “the deadline is approaching.”

This certainly is not a universal phenomenon, though. Mandarin, for example, discusses some elements of time in terms of front and back, similar to English, but also talks about the “up month” (last month) or the “down week” (next week).

Boroditsky and Lai used three different groups of speakers: English monolinguals, Mandarin monolinguals, and English-Mandarin bilinguals. Each of these was split into two smaller groups — one for each experimental condition.

In the experiment, the researchers asked groups of participants two different questions: one about rescheduling a meeting, and one about resetting a clock. In the first question, one group was asked which day a meeting that was originally scheduled on Wednesday, but was moved forward by two days, would fall on. The second group was asked what time a clock would say if it was moved forward one hour from 1:00 p.m.

Obviously, there are two different options here: in the first case, participants could say Monday or Friday. And in the second, they could say 12:00 p.m. or 2:00 p.m.

So what happened? The monolingual results are not all that surprising: English monolinguals were more likely to say “Friday” or “2:00 p.m.” than the Mandarin speakers, indicating an ego-moving perspective. Mandarin speakers were more likely to adopt a time-moving perspective.

What interested me, though, was the bilinguals. In the meeting-rescheduling experiment, the bilinguals were tested in English, while in the clock-resetting one, they were tested in Mandarin. In both situations, though, they were less likely to use an ego-moving perspective than English speakers, but more likely to use one than Mandarin speakers.

This sounds complicated, but what it comes down to is that they fell not in line with the English speakers, and not in line with the Mandarin speakers, but somewhere in the middle. And because they were tested in both of their languages, this experiment indicates that there is an effect of their first language on their second, but also of their second language on their first.
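To make the scoring concrete, here is a minimal sketch of how such responses might be coded and tallied (the response counts are invented, not the paper’s data):

```python
# Sketch: code each answer as ego-moving or time-moving and tally by group.
# The responses below are invented and only illustrate the coding scheme.
EGO_MOVING = {"Friday", "2:00 p.m."}    # "forward" = the self moves ahead through time
TIME_MOVING = {"Monday", "12:00 p.m."}  # "forward" = time moves toward the self

responses = {
    "English monolinguals":        ["Friday", "Friday", "2:00 p.m.", "Monday", "Friday"],
    "Mandarin monolinguals":       ["Monday", "12:00 p.m.", "Monday", "Friday", "Monday"],
    "English-Mandarin bilinguals": ["Friday", "Monday", "2:00 p.m.", "Monday", "Friday"],
}

for group, answers in responses.items():
    assert all(a in EGO_MOVING | TIME_MOVING for a in answers)
    ego = sum(a in EGO_MOVING for a in answers)
    print(f"{group}: {ego}/{len(answers)} ego-moving responses")
# With these made-up numbers, the bilinguals land between the two monolingual
# groups, which is the pattern the study reports.
```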

This all sounds quite complex, but the important thing is that it provides a bit more evidence for the argument that bilinguals are not just two separate monolinguals in one mind — they actually form a third cognitive system that is not reducible to either of their languages, but results from the interaction of both.

It is this third cognitive system that really intrigues me. People have shown it in a few different fields now, including shape classification, and I am looking forward to seeing where it goes in the future.

If you would like to read more about this, I recommend checking out Aneta Pavlenko’s work — she does a lot of research on bilingual memory, restructuring, and transfer. I have listed a few books and articles below that might interest you.

References

Lai VT, & Boroditsky L (2013). The immediate and chronic influence of spatio-temporal metaphors on the mental representations of time in english, mandarin, and mandarin-english speakers. Frontiers in psychology, 4 PMID: 23630505

Pavlenko, A. (1999). New approaches to concepts in bilingual memory. Bilingualism: Language and Cognition, 2(3), 209–230.

Pavlenko, A. (2011). (Re-)naming the world: Word-to-referent mapping in second language speakers. In A. Pavlenko (Ed.), Thinking and Speaking in Two Languages (pp. 198–236). Bristol: Multilingual Matters

Pavlenko, A. (2011). Thinking and speaking in two languages: Overview of the field. In A. Pavlenko (Ed.), Thinking and Speaking in Two Languages. Bristol: Multilingual Matters

Image via Andrey Kuzmin / Shutterstock.

Are Psychoactive Drugs a Thing of the Past? (November 1, 2013)

Before I actually get started on this article, I would like to immediately answer the question I posed in the title: no. Psychoactive drugs are alive and well. But they might not be for long.

In a recent article titled “Changing brains: why neuroscience is ending the Prozac era”, The Observer reports that large pharmaceutical companies have, for the most part, significantly slowed research on new drugs.

We already have drugs meant to treat pretty much any problem, and it has been a long time since the era of discovering revolutionary new prescription medications on a regular basis. Now, the occasional new drug has the same effects as a previous one, but with fewer side effects. Better, yes, but hardly exciting. And because of this, Big Pharma has largely ceased funneling major research dollars into new psychoactive drugs.

But that does not mean they are slowing their research. Pharmaceutical companies are still investing billions of dollars every year in new technologies and methods for treating various diseases.

So what are they working on now? One of the major up-and-coming areas is optogenetics, a term that you’ll likely be hearing about quite a bit over the next several years. Optogenetics is a fascinating neuroscientific field that may soon allow scientists to stimulate specific networks of neurons — a far more fine-grained approach than altering neurotransmitters, as many current psychoactive drugs do.

Optogenetics is based on a very interesting technological method. First, a virus containing the genetic details for light-sensitive proteins is injected into the brain. This virus “infects” a specific population of neurons, making them reactive to a specific wavelength of light (generally in the blue range). Next, tiny fiber optic cables are implanted in the brain, also targeting specific neurons. When a pulse of light is sent through the cable, the neuron is activated or inhibited.

It sounds like science fiction, but it’s not. In fact, they’ve actually been using a similar strategy to treat Parkinson’s disease for a while now, though they use electrodes instead of fiber-optics. Obviously, the electrodes offer significantly less fine control. And over the next decade or so, you can expect to see billions upon billions of dollars spent on research in this area. They have successfully used this technology in mice, but as of yet, there have been no human trials (at least that I am aware of).

But this kind of research brings with it very important philosophical considerations. Many cognitive scientists — including neuroscientists — have an essentially mechanistic view of the brain-mind-body interface; desires, preferences, and actions are determined by chemical and electrical reactions in the brain, all coming together to make us who we are. There is little room for the idea of a soul or, sometimes, even free will in this kind of view.

If this is the case, what happens to us when we fundamentally alter the brain and use that alteration to effect changes in behavior? The change in behavior could be something as significant and positive as inhibiting suicidal behavior or encouraging compassion. What does this mean for us as people? Does this change who we are? If so, should we allow Big Pharma, an industry known for manipulating medicine and academic research to its own purposes, to dictate how this technology works and is applied?

There are a lot of questions surrounding optogenetics, and these are just a few. Share your thoughts in the comments below, and let us know what you think about optogenetics and its potential future uses.

References

Bell, V. (2013). Changing brains: why neuroscience is ending the Prozac era. The Observer, Sunday 22 September.

Image via Bluerain / Shutterstock.

Mental Time Travel, Language, and Rats (October 26, 2013)

When you are not focusing on something and your mind starts to wander, what do you think of? Many people find that they think about imagined events, either past or future. This is a process that has been called “mental time travel,” and some researchers have suggested that it is a uniquely human process. Interestingly, that is a theory that has come under fire.

The hippocampus is central to mental time travel, and its role has two aspects. Firstly, the hippocampus contains “place cells” that encode where a person or animal is located in space. A notable example comes from London taxi drivers, who have to memorize a huge, complex map of the city; studies have shown that they have enlarged hippocampi. Secondly, the hippocampus appears to be involved in human mental time travel itself, as this part of the brain is activated when people are asked to remember past episodes or imagine possible future ones.
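
To make the idea of a place cell concrete, here is a small Python sketch: each simulated cell fires most strongly near its own preferred spot on a one-meter track, and reading out which cell is most active gives a rough estimate of where the animal is. The tuning curves and numbers are invented purely for illustration.

import numpy as np

# Minimal sketch of place-cell coding: each cell has a Gaussian tuning curve
# centered on its preferred location along a 1 m track.
track_centers = np.array([0.2, 0.5, 0.8])   # preferred locations of three cells (m)
tuning_width = 0.1                          # width of each tuning curve (m)
peak_rate = 20.0                            # peak firing rate (spikes/s)

def firing_rates(position):
    # Expected firing rate of each place cell when the animal is at `position`.
    return peak_rate * np.exp(-((position - track_centers) ** 2) / (2 * tuning_width ** 2))

# Crude decoding: the animal is probably near the preferred spot of the busiest cell.
rates = firing_rates(0.55)
print("firing rates:", np.round(rates, 1))
print("decoded position ~", track_centers[np.argmax(rates)])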

But how can you tell if an animal is thinking about a past or possible future event? It is a simple matter of asking a research participant what they were thinking about, or asking them to recall an event, but this is impossible in animal subjects. What other options are there?

Well, for one, recording directly from the brain. Microelectrode recordings have shown that place cells in the hippocampi of rats encode specific locations in a maze, and a later study showed that those same cells fire when the rats are outside of the maze, suggesting that the rats were re-experiencing (at least to a degree) their trajectory through the maze. Mental time travel.

Interestingly, the firing sequences seen outside the maze did not always match paths the rats had actually taken, suggesting that the rats were also imagining possible future paths through the maze.
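
The logic of these replay analyses can be sketched in a few lines of code: note the order in which the cells first fired during a run through the maze, note the order in which they fire during a candidate event outside the maze, and ask how well the two orders line up. Real studies use far more sophisticated statistics; the spike times below are made up purely to illustrate the idea.

import numpy as np
from scipy.stats import spearmanr

# Toy replay check: rank-order correlation between the firing order of five place
# cells during a maze run and their firing order during a candidate event at rest.
run_first_spike = np.array([0.10, 0.40, 0.90, 1.30, 1.80])    # seconds into the run
event_first_spike = np.array([0.01, 0.03, 0.06, 0.08, 0.11])  # seconds into the event

rho, p = spearmanr(run_first_spike, event_first_spike)
print(f"rank-order correlation = {rho:.2f} (p = {p:.3f})")
# A correlation near +1 suggests a forward replay of the same trajectory;
# a correlation near -1 would suggest a reverse replay.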

While the jump from place cells firing outside of a maze to mental time travel may be a bit of a stretch, the researchers could be onto something. The relationships between space and time in cognition are very complex, and any attempt to say that we have accurately pinned them down in an animal that cannot directly communicate its thoughts is certainly premature. That said, the neurological evidence is quite compelling.

An interesting corollary of this research is its relationship to the development of language. It has been suggested that human language and the ability to discuss things that are not immediately present co-evolved. The importance of mental time travel in this idea is clear: both future and past events would be extremely difficult to communicate without something as complex as human language.

Of course, this raises the question of why, if mental time travel is possible in animals like rats, humans are the only animals with advanced language capacities. This question is especially interesting when you take into account recent advances in the understanding of ape cognition, which show that great apes have a deeper understanding of number, space, and other cognitive concepts than we previously thought.

So why are humans the only animals with language? No one has a definitive answer yet. But in my next post, I’ll address an interesting theory on the topic.

References

Corballis, M.C. (2013). Wandering tales: Evolutionary origins of mental time travel and language. Frontiers in Psychology, 4. PMID: 23908641

Corballis, M.C. (2009). Mental time travel and the shaping of language. Experimental Brain Research, 192(3), 553-560. PMID: 18641975

Keysers, C. (2012). Primate cognition: Copy that. Nature, 482, 158-159. doi: 10.1038/482158a

Image via Hayati Kayhan / Shutterstock.

What Makes Us Human? http://brainblogger.com/2013/10/20/what-makes-us-human/ http://brainblogger.com/2013/10/20/what-makes-us-human/#comments Sun, 20 Oct 2013 11:00:44 +0000 http://brainblogger.com/?p=15449 One of the most enduring pursuits in cognitive science is identifying the factors that separate us from lower animals — and while it might at first seem like a trivial matter (“well, we’re just a lot smarter!”), it is actually very difficult to scientifically quantify those factors.

In my last post, I discussed some recent advances in the understanding of rat cognition; it turns out that their brains are much more sophisticated than we first thought. There have also been many recent advances in the understanding of ape cognition: apes perceive space, quantities, categories, causality, and intention in ways that are more complex than we previously realized. So if we are so much more similar to other animals than we thought, what separates us?

In a keynote speech at the British Psychological Society’s CogDev 2013 conference, the researcher Michael Tomasello argued that the answer, or at least a significant portion of it, lies in our ability to collaborate. In experiments comparing apes and human children, researchers have observed some very interesting differences that point to underlying differences in how we work together to achieve goals.

Apes, including chimpanzees and bonobos, collaborate to achieve goals (usually the acquisition of food). Children do the same. However, they do so in different ways. For example, children take turns in a collaborative context, while apes do not. Children prefer to forage collaboratively, while apes prefer to go it alone when they can. Kids understand their collaborator’s role and can take it over quickly and without much difficulty; apes have to learn the other role from scratch, as if they had never seen it performed. The list goes on.

Of course, these differences could be ascribed to a number of things, but Tomasello believes that humans have evolved specifically to engage in “joint intention,” which is something that apes do not have. Infants will point to an object just so that an adult will look at it and create shared attention; they will also show an object to an adult without making any requests to play with it — they just want to share attention. This is called the informative motive, and it seems to only be present in humans.

These collaborative actions “create shared intentionality infrastructure for cooperative communication,” and Tomasello believes that the cognitive adaptations that arose to allow this joint intentionality underlie uniquely human forms of cognition, communication, culture, and morality.

So where did these cognitive adaptations come from? One theory is that they arose when competition for food increased, possibly from some great apes moving from the trees to the ground; humans responded by developing these collaborative abilities so that they could hunt larger game and continue to prosper.

No matter what brought about these adaptations, it is clear that there are some significant differences between humans and great apes when it comes to cognition, and Tomasello makes a very compelling case that these differences are driven by differences in collaborative ability. As he has been a prolific and influential researcher over the past several decades, I have no doubt that we will be hearing more from Tomasello on this front in the near future.

Also, Tomasello has a new book coming out soon — A Natural History of Human Thinking. I think it is going to be a good one, so grab a copy of it next February when it comes out.

Image via Everett Collection / Shutterstock.
