My fingers did.
Ok, not really, but both are limited views of the complex process of writing this post.
It seems that the tide may be turning on neurobabble, and I thought I would contribute a few thoughts. A recent op-ed in the New York Times entitled “Neuroscience Under Attack” chronicled the rise of “neurodoubters,” a few (interestingly, mostly British) bloggers who have been skeptical of pop neuroscience. The unfortunate implication of the op-ed’s title (which the body of the piece does attempt to clarify) is that neuroscience itself is under attack, when the real target is its glib popularization. Like Neuroskeptic, I hope this attack isn’t construed as undermining the science itself. Rather, I see these critics as strengthening science by clarifying its communication to the public. I find it critically important that there is no neurocynic among them (although that attitude has been voiced by the occasional philosopher); most of these bloggers are scientists themselves, and see neuroscience as potentially insightful into interesting questions about the human condition. But they see it as important not to inflate that insight.
This leads me to what I think is a good metaphor for the inflated power of neuroscientific explanations. Neuroscientific terms are an inflated currency in the bargains we all strike for credibility.
They are not worthless, but more often than not they are given more value than they deserve. When we make an argument, we want to persuade our reader, and we use the techniques and evidence at our disposal. That evidence ranges from personal experience and case studies to statistics and scientific findings. Scientists often bemoan the unreasonable power of personal experience: stories have a real power that numbers rarely do. But the science and the numbers too often have an outsized role in persuasion as well, and even within science there is a hierarchy of inherent persuasiveness. It is here that neuroscience carries inflated weight. The classic paper on the Seductive Allure of Neuroscientific Explanations shows how even irrelevant neuroscience information can increase an explanation’s perceived credibility. As Neuroskeptic points out, this could merely be an artifact of adding complexity and jargon. However, other research has also identified images of brain scans as particularly persuasive, above and beyond other scientific explanations. Unfortunately, I think this inflated value of neurojargon often comes at the cost of discounting the value of psychological explanations.
When we read something like “multitasking impairs attention,” it seems commonsensical and we shrug and say “sure, why did we need science for that?” When we read something like “multitasking impairs attention by inhibiting circuits involving the intraparietal cortex,” it just seems more science-y. In each case, we should be asking further questions: “How do they define multitasking?” “Under what conditions does multitasking impair attention?” “How do they measure attention?” For someone interested in the consequences of their own multitasking behavior and how to change it, these questions are the relevant ones, not questions about the wiring diagram of the intraparietal sulcus. This is not to say that these “wiring diagrams” are useless, just that they are not likely to be as immediately applicable as they may seem.
How would I like to see this changed? First, I’d like to see science journalists be more careful and precise in how they address the relationship between psychological research and neuroscience research. I think this means attending carefully not so much to the neuroscience words, such as amygdala, hippocampus, or oxytocin, but to the connecting words: “because” (“Why do we remember some events more than others? Because of the hippocampus”), “underlie,” or “makes.” It is the strong causal links that are most often problematic, rather than saying that oxytocin is roughly associated with circumstances in which trust or love is a relevant emotion. (What a horribly vague and hedging sentence.) In these cases, sometimes what is needed is the bravery to construct a horrible journalistic sentence that is nonetheless a precise and accurate summary of the science. If one can’t construct reasonably clear sentences for the public, maybe the science being described isn’t ready for an 800-word summary for someone with no background.
Second, I’d like to see neuroscientists and psychologists make more of an effort to get out there and describe their research themselves. This might mean having a debate in the open about what counts as overreaching. While I am sure this can result in some awkward moments with colleagues, I think it will ultimately bolster the credibility of the field as a whole. Part of this is also holding the journalists who cover them to higher standards. Perhaps the most depressing part of this thorough post-mortem of the Lehrer fiasco was this quote:
If Lehrer was misusing science, why didn’t more scientists speak up? When I reached out to them, a couple did complain to me, but many responded with shrugs. They didn’t expect anything better. Mark Beeman, who questioned that “needle in the haystack” quote, was fairly typical: Lehrer’s simplifications were “nothing that hasn’t happened to me in many other newspaper stories.”
Not all science journalists are alike, and they have to separate good science from bad mostly on their own, with some help from scientists. Scientists could make public communication a larger part of their identities, and could help separate good science journalism from bad. I hope they do this by championing the good, doing more to improve the bad (which is often well-intentioned oversimplification), and in some cases refuting the worst outright.
Let’s slowly, patiently reduce this inflation, rather than popping it like a bubble.