Last week, Pamela Paul wrote a column in The New York Times that riled up a lot of academia, both because of its misunderstanding of the academic publishing process and because it accepted the views of the heterodox folks in the Journal of Controversial Ideas at face value. I got some good hyperlinks in the column and in the paper!
There’s a deep dive on the paper by Timothy Burke that is worth reading.
Here’s what I put on my blog over at Science about this:
Scientific research is a social process that occurs over time with many minds contributing. But the public has been taught that scientific insight occurs when old white guys with facial hair get hit on the head with an apple or go running out of bathtubs shouting “Eureka!” That’s not how it works, and it never has been. Rather, scientists work in teams, and those teams share findings with other scientists who often disagree, and then make more refinements. Then those findings are placed in the scientific record for even more scientists to examine and produce further adjustments. Eventually, theories become knowledge. All along the way, these scientists are conspicuously and magnificently human—with all the assets and flaws that humans possess. And that means that who those individuals are, and the backgrounds they bring to their work, have a profound influence on the quality of the end result.
It has somehow become a controversial idea to acknowledge that scientists are actual people. For some, the notion that scientists are subject to human error and frailty weakens science in the public eye. But scientists shouldn’t be afraid to acknowledge their humanity. Because individual scientists are always going to make a mistake eventually, and the objective truth that they claim to be espousing is always going to be revised. When this happens, the public understandably loses trust. The solution to this problem is doing the hard work of explaining how scientific consensus is reached—and that this process corrects for the human errors in the long run.
A raging debate has set in over whether the backgrounds and identities of scientists change the outcomes of research. One view is that objective truth is absolute and therefore not subject to human influences. “The science speaks for itself” is usually the mantra in this camp. But the history and philosophy of science argue strongly to the contrary. For example, Charles Darwin made major contributions to the most important idea in biology, but his book The Descent of Man contained many incorrect assertions about race and gender that reflected his adherence to prevalent social ideas of his time. Thankfully, evolution didn’t become knowledge the day Darwin proposed it, and it was refined over the decades by many points of view. More recently, pulse oximeters that measure blood oxygen levels were found to be less accurate for patients with dark skin because they were initially developed for white patients. These examples—and countless more in between—reveal how much work needs to be done to strengthen the scientific community and the public understanding of the process.
A monolithic group of scientists will bring many of the same preconceived notions to their work. But a group of many backgrounds will bring different points of view that decrease the chance that one prevailing set of views will bias the outcome. This means that scientific consensus can be reached faster and with greater reliability. It also means that the applications and implications will be more just for all. How is this a threat to scientific rigor and the merit of discoveries? Unfortunately, we’re nowhere close to achieving these goals. Science has had enormous trouble building a workforce that reflects the public it serves. And now, numerous state governments are trying to make it more difficult, if not impossible, at the public universities in their states, and even within the scientific community, there are efforts to derail the idea that it matters who does science.
The soundbite “trust the science” has been circulating recently. This framing is unfortunate. Because “the science” in this context is usually a snapshot of ideas or facts in a particular moment—and often from the perspective of a small number of people (or even one person). It would have been better to use a phrase like “trust the scientific process,” which would imply that science is what we know now, the product of the work of many people over time, and principles that have reached consensus in the scientific community through established processes of peer review and transparent disclosure.
Scientists should embrace their humanity rather than pretending that they are a bunch of automatons who instantly reach perfectly objective conclusions. That will be more work both in terms of ensuring that science represents that humanity and in explaining how it all works to the public. But in return, society will get better and more just science, and it will allow scientists to immerse themselves in the glorious, messy process of always striving for a greater understanding of the truth.
"Scientists should embrace their humanity rather than pretending that they are a bunch of automatons who instantly reach perfectly objective conclusions." Hear, hear. But also: this requires nuance, something woefully lacking in almost every area of public discourse. No idea how to address that.
I have used the term "trust the science" in the past, but usually when the science wasn't quite so fluid as it was during COVID (and for the record, COVID-19 is still an issue, regardless of how many emergency declarations have been discarded). There are several issues to unpack here.
The COVID-19 pandemic may well have been the first in which the whole world got to see how a lot of our public health, clinical treatment, and virology/vaccinology science unfolded. And it was messy. I've been involved in several disparate fields during a long and busy career, but perhaps the ugliest was medicine. In biomedical engineering, geodetics, and meteorology, I never had someone stand up at a conference and literally shout at me for reporting my findings. I've had that at medical conferences, and in the course of the Pandemic, I've gotten the social media and email variants thereof. But worse than hearing it from colleagues who were entitled to an opinion based on years of study and research in the arena, I got this from people who had little or no experience but felt they could see everything that was happening. I was writing in English, but often in what amounted to a professional shorthand. The public would see what I, or a colleague, wrote, and the words all made sense... but without the context of the discussion, readers drew conclusions that were not even consistent with what we were actually discussing.
Science is messy. A hypothesis might become a theory, subjected to rigorous evaluation (or not!) and "proven" or refuted. If proven, it's still subject to review and to attempts by others to reproduce the results, others who might not be so wedded to said theory as you are, and who might subsequently find flaws or discredit it outright. This doesn't mean the process is wrong, or that the original researcher was dishonest, but I've been told I was, indeed, dishonest when new data caused me to rather suddenly change my opinion and understanding of the subject, and then have to publish what amounted to a retraction, AND an explanation. During the pandemic, I saw enough material to change an opinion or understanding of the process of the disease more than once per day, all too many times. I'd read an article reporting research that was, indeed, well-designed, properly conducted, and reasonably analyzed, only to have another author report, nearly simultaneously, a result that was as carefully curated as the first, but cast all the earlier work into question. Because that was the nature of the disease we were trying to chase.
We also saw changes in understanding of the disease process itself, but these were not discerned through careful trials and rigorous control. Rather, an astute clinician, or several, saw something that wasn't consistent with what we'd been thinking we would see, and discussed it. Online. I spent a lot of time on a couple of closed Facebook groups with critical care clinicians where we talked about what was being seen in a number of widely separated hospital settings. A lot changed in what we knew of the disease process, but trying to explain this to the public meant we had to tear down strawmen of the disease process we ourselves had helped build, explain why we had changed our understanding, and in some cases, beg forgiveness. And yes, the public was watching.
One other thing: I was reading a lot of case reports, case studies, research reports, etc., trying to formulate a coherent view and share my notes with colleagues. I honestly didn't have a lot of spare time for other things. Like family, normal living, etc. As a result, I lost a bit of my own humanity, and with it, my patience with people who couldn't see what I was seeing. My writing became more steeped in the reported data and, worse, in the jargon we were accumulating around the Pandemic. I wasn't communicating with the public, and even my wife, with a medical background, complained I wasn't saying things that made sense to her (her background, in women's health, is often foreign to me, at least when she goes into the minutiae of obstetrics and gynecology, much as my cardiovascular medicine background causes her to go to sleep). That should have been a red flag to rethink how I was trying to communicate. But I wasn't alone. Within the public health community, I saw too many examples of hastily created, if stellar, scholarly pieces put out into the light... that were completely unintelligible to the public.
Finally, tribalism pervaded the whole discussion all too early in the process. From claims that the outbreak of a truly novel virus was not worth a public health emergency (I still disagree with that belief) to statements on the other side that we'd all die if we didn't isolate, mask, handwash, and wipe down every surrounding object, there was little effort made toward civility in the discussion.
Overall, I wish we could have managed the Pandemic behind our academic walls and within the ivory towers as we always had, provided best-practice (as of today) guidance, and not had the public see just how human we really are. Because, next time they need us, they're going to remember watching us, feeling like we weren't honest with them (or worse), and will try even harder to second-guess what we're doing. Or we'll see legislation that limits what we can do, and subsequently the persecution of the scientists who failed to protect the world from the next outbreak.
Holden, thanks for your piece.