Is science censored? The answer depends entirely on the definitions you use. (That’s a cliché answer and often a cop-out, but here it is actually a dig at the sophistry used by people claiming science is censored.) The TL;DR answer is: no. Science is flawed, but censorship is not one of its flaws.
For our purposes here, science is being censored if ideas with significant evidence behind them are not being published in scientific journals. This is not the same as bad ideas failing to be published, or certain speakers not being invited (or even having their invitation revoked) at private speaking events; it is not even censorship in science if a speaker wishes to present a publicly controversial but scientifically robust idea (is it really censorship for a Church to cancel a speaker on discovering they want to talk about evolution?). The remit of this post is limited to whether getting an idea into the scientific literature is regularly blocked for non-academic reasons.
What you will notice is that, to answer this, the discussion has to veer outside the narrow remit that actually answers the question. This is because the claim that science is censored in any significant way is nearly always motivated reasoning by people peddling conspiracy theories. These conspiracy theorists treat ‘Cancel Culture’ (a private company choosing not to promote a controversial speaker) as if it were censorship, and to debunk that we have to explain how it doesn’t fit our remit and often isn’t even censorship in the broader context of free speech.
But, here, we concern ourselves with whether science of publishable quality gets published.
What is publishable science?
I am simplifying for the purposes of this post, but, broadly, these are the criteria of robust science worthy of publishing in scientific journals:
- Has a thorough and clear methodology
- Has data
- Has conclusions that are in proportion to the methodology and data
- Has conclusions that are comprehensible within established frameworks (in mature subjects)
- Or is a meta-analysis or systematic review of publications that meet these criteria.
These criteria exist in degrees. There is no bright line between a thorough methodology and a cursory one, nor a definitive test for whether a conclusion is proportionate. As a result, there is also a sliding scale of prestige among scientific journals: some are considered better than others because they have higher standards. And those standards are enforced by peer review.
Point 4 (“… comprehensible within established frameworks…”) might appear controversial, and even open to misuse as censorship. Luckily, the other criteria come in to help: if your conclusion overturns established frameworks, your case is helped by a thorough methodology and by conclusions proportionate to the methods and data. A paper with a poor design or unclear trends in the data, whose conclusion then doesn’t fit within existing frameworks, can’t draw a conclusion stronger than “requires further research” ― and even that won’t happen unless there is something credible in the paper.
There are legitimate reasons for things not to be published. The peer review process is designed to identify them, and then either stops publication or requires that edits be made first. I suppose, with some sophistry, this might earn the moniker “censored”. But that would be to miss the point entirely. In science, being censored and failing to get published are not the same. To be considered censored, you’d have to show that your paper was not being published for illegitimate reasons: that ideas which actually survive the scrutiny of the available data still aren’t getting published.
The definition of censorship I am using is specific and has a purpose. I am looking solely at how ideas make it into the scientific literature. I am not looking at how ideas propagate in society. I am not looking at public talks, newspapers, book deals or Twitter. I use this definition to answer the question of whether reading scientific articles leads to reliable conclusions: if the majority of papers answering a question tell a similar story, how much effort do I have to expend worrying that this is a result of censorship?
That raises the question of whether a broader definition of censorship is appropriate. Is a rightwing think tank’s refusal to host a speaker on Critical Race Theory, or a Church’s deplatforming of an evolution-explaining scientist, really “censorship”, and is it censorship in the scientific community? If a speaker goes on a publicity tour and makes a significant embarrassment of themselves, is an event organiser obliged to host them?
“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it”
- Max Planck
Science is not perfect. The people who perform science ― the experiments, data collection, analysis and synthesis ― are prone to groupthink. It is possible that entire topics are currently a generation of understanding behind where the data dictate they should be. The old generation ― who understand and synthesise data through old models, no matter how convoluted that synthesis becomes ― die out, and the new generation use newer models that deal with the data more elegantly.
This is a flaw. But it is not censorship.
Science still works, even with groupthink; it is just slower.
James Watson has had honorary titles stripped away. Richard Dawkins has had speaking events cancelled. David Irving was arrested. Is this not censorship? Perhaps. But each case requires a little closer scrutiny, and a reminder that the remit of this post concerns whether this sort of censorship affects the body of published literature.
Watson resigned in 2007, following controversies about unfounded remarks he made in an interview on race and IQ. He also had speaking events cancelled. Resigning was part of his apology, and ― again ― the remarks were unfounded. Is this censorship? Maybe. But it’s not censorship within science; private event organisers didn’t wish to associate with his remarks. Those ideas still exist in society (books like The Bell Curve) and anyone who types “Race and IQ” into Google Scholar will see that the idea is thoroughly explored.
One could argue ― as the editor at Nature did ― that his events should not have been cancelled, and that he should have faced the criticism in person. There is an argument for that. But how would such a principle benefit you as a reader of science, rather than a researcher? It simply isn’t the case that all claims get published and discussion follows. That isn’t how science works. As discussed, claims have to meet academic criteria first. Otherwise, unsupported claims make it into the scientific literature, and any reader may well miss the discussion that roundly debunks them. One only needs to look at Twitter to see how the marketplace of ideas works when there’s no quality control. And ― this is important ― he wasn’t censored at all. He repeated the idea. He could still Tweet it, if he wanted to. He could probably even get a book deal. Some private companies didn’t want to be associated with him, and that’s the whole story.
Dawkins has a similar story. He had a radio interview cancelled after the public complained about Islamophobic comments. Again, you could argue that the radio station could have acted as a forum to challenge his views. But moderating such a forum is difficult, it’s not the station’s obligation, and ― as a private company ― it is allowed to distance itself from controversial views.
What about David Irving? Surely, being arrested for nothing more than voicing an academic view is all the evidence you need that scientists will self-censor and thus science is censored; that legitimate ideas won’t get published. Well, maybe. Irving’s idea was Holocaust denial. And so the story has the benefit of not being an idea with a robust evidential base; he wasn’t arrested for having a good idea with academic or empirical support. More importantly, he was not arrested under some general law about thinking the wrong academic thing; it was a very specific law against Holocaust denial in Austria. Scientists who have an extensive evidence base to support the theory that viruses can be responsible for disease (an apparently controversial idea these days) shouldn’t be looking over their shoulder for Holocaust denial laws. And Holocaust deniers in the UK also don’t need to look over their shoulders for Austrian laws.
The Austrian Holocaust denial law is censorship. I don’t think that can be contested. But it is not censorship of science and academia; the law applies to everyone, and I don’t think it’s the kind of censorship that is relevant to my question: how much effort do I have to expend worrying that [a well-read scientific consensus] is a result of censorship? If a habit forms of these sorts of laws popping up, I’ll change my perspective. But Holocaust denial is an exception, and for good reasons.
No one is listening to me!
There is another tendency for people to consider themselves “censored” if they are not taken seriously. This is not entirely ridiculous. If there were such a thing as a scientific establishment, then throwing their weight around to humiliate people into not saying things would be censorship. (For fun, notice the people who are more than capable of inventing this power struggle when their conspiracy theory isn’t respected, but are entirely blind to it when race or gender relations are the topic of conversation.)
Some of these people are respected academics. Professor Karol Sikora, for example, has a fantastic résumé in the medical field, including a leading role in the World Health Organisation. Should we be worried, then, when his views on coronavirus and lockdowns are not more broadly respected? I don’t think so. He’s an oncologist (a good one, don’t get me wrong), and was the head of the WHO’s cancer programme. He is not an expert in the relevant field. The fact that he has become something of a pseudo-celebrity in the UK, and is invited to talk on radio shows, is evidence against censorship, even in the public domain.
I am reminded of Trump loving to find people of ethnic minority backgrounds who supported him. 91% of black voters voted for Clinton, 6% for Trump, and 3% for other candidates. But, in absolute numbers, that 6% is still plenty of individuals Trump can point to as “proof” his campaign wasn’t racist. It’s not that we don’t take that 6%’s vote seriously. It’s that the other 94% is, well, more.
It isn’t censorship to not take an individual seriously. This is particularly true when that individual is not a relevant expert. It’s also pertinent when what they are trying to do is not publish data and conclusions, but be invited to do public speaking.
Is science reliable, then?
The reliability of science depends heavily on the scientific literacy of the reader. Huge numbers of scientific articles are wrong, for a whole host of reasons:
- Once you have a research question, the pressure to publish means you want to find a positive result. Legitimate negative results are less likely to be published.
- Significance tests do not prove a conclusion; they only bound the chance of a fluke. Much of science accepts a result as positive if there is less than a 5% probability of seeing data that extreme by pure chance (p < 0.05).
- Coupled with the bias towards publishing positive results, that means the flukes in that 5% are disproportionately likely to make it into print.
- Poor methodology design. This might mean too few data points, or a design that can’t control for ‘noise’.
- Outright fraud. The experimental design looks robust, but the data are simply made up. The motive might be ideological, financial or even personal ego.
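The interaction between the 5% significance threshold and the bias towards publishing positive results can be sketched with a toy simulation. Every rate below is an illustrative assumption, not an estimate from the literature:

```python
import random

random.seed(0)

# Illustrative assumptions (not real estimates):
TRUE_EFFECT_RATE = 0.1   # fraction of tested hypotheses that are actually true
POWER = 0.8              # chance a real effect produces a positive result
ALPHA = 0.05             # chance a null effect produces a fluke positive
PUBLISH_POSITIVE = 0.9   # positive results usually get published...
PUBLISH_NEGATIVE = 0.1   # ...negative results usually don't

published_true = published_false = 0
for _ in range(100_000):
    is_real = random.random() < TRUE_EFFECT_RATE
    positive = random.random() < (POWER if is_real else ALPHA)
    published = random.random() < (PUBLISH_POSITIVE if positive else PUBLISH_NEGATIVE)
    if published and positive:
        if is_real:
            published_true += 1
        else:
            published_false += 1

# Among published positive results, the fluke share is far above 5%:
share_false = published_false / (published_true + published_false)
print(f"false positives among published positive results: {share_false:.0%}")
```

Under these assumed rates, roughly a third of the published positive results are flukes, even though each individual test only had a 5% false-positive rate. The point is qualitative: publication bias concentrates the flukes, so the 5% threshold understates how much of the literature is wrong.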
So, there are a number of things a reader must do to make science reliable for them. First, be able to spot poor design. Second, do not rely on single papers; instead, look for a number of papers on the same subject. Papers are unlikely to be accidentally wrong in the same way, so if a majority tell a similar story, that story is reliable. If there seems to be a split ― e.g. a lot of papers saying fat is bad for your health, and a lot saying it’s overall calories regardless of food type ― exercise caution; maybe look into the funding.
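The “majority of papers” heuristic can be given rough numbers with a simple binomial sketch. Assume, pessimistically, that each paper independently has a 30% chance of being wrong (an illustrative figure, not an estimate): the probability that a majority of a sample errs shrinks quickly as the sample grows, and this already overstates the risk, since independently wrong papers need not even agree with one another.

```python
from math import comb

def majority_wrong_prob(n_papers: int, p_wrong: float) -> float:
    """Probability that a strict majority of n independent papers are wrong,
    if each paper is wrong independently with probability p_wrong."""
    k_needed = n_papers // 2 + 1  # smallest strict majority
    return sum(
        comb(n_papers, k) * p_wrong**k * (1 - p_wrong)**(n_papers - k)
        for k in range(k_needed, n_papers + 1)
    )

# Even at a pessimistic 30% per-paper error rate, the chance that
# most of a sample mislead you falls fast with the sample size:
for n in (1, 5, 9, 15):
    print(f"{n:>2} papers: {majority_wrong_prob(n, 0.30):.3f}")
```

With one paper you inherit the full 30% risk; with fifteen independent papers, the chance that the majority verdict is wrong drops to around 5%, which is why reading several papers beats reading one.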
Censorship ― of a sort ― is quite common. Private companies can refuse to extend, or can rescind, invitations to speakers for any reason: protecting their brand, not being able to moderate a discussion, pig-headedness, a change in wind direction. It’s low-level censorship, easily gotten around by finding a different private company, or even Twitter or a podcast. It’s arguably overstating it to call this censorship at all.
In exceptional cases, there is high-level censorship. This is not easily gotten around: going to YouTube or a podcast to express Holocaust denial is still illegal in at least 17 countries. This is, thankfully, rare: it pertains to the denial of one well-established fact, in fewer than 20 countries. If such laws were more commonplace, I would likely change my mind and dedicate more time to them.
Censorship of the specific form this post is concerned with doesn’t appear to happen in science. Irving’s view is not a scientific one (and neither is it a legitimately academic one). Watson’s view is thoroughly explored in the scientific literature, and the literature doesn’t agree with him. Dawkins’s view wasn’t scientific or even academic; it was a political claim.