To combat “fake news,” governments and corporations are increasingly turning to censorship regimes. Let’s explore some of the problems that “fake news” censorship entails to see why biblical critical thinking presents a far more viable solution.
Some solutions have a nasty habit of causing more problems than they set out to solve. History’s closets are littered with such backfire stories—like the account of the “cobra incident” in colonial Delhi. The British government, concerned that the city was becoming overrun by venomous snakes, reportedly began offering rewards for cobra skins. When a lucrative cobra farming industry resulted, the government canceled the deal, leading cobra breeders to release their now worthless snakes into the wild—making the city less safe than before.1
You can probably think of other examples of “solutions” with unintended consequences. These episodes remind us to stop, think, pray, and seek biblical counsel before we respond to complex issues, rather than to react with (all too easily manipulated) emotions. And the issue of “fake news” is no exception.
As this article describes, one of the main routes governments and corporations are taking to tackle the rise of media misinformation is censorship. But like a cobra bounty endangering the city that it was intended to safeguard, might “fake news” censorship undermine the democracies it’s meant to protect?
Let’s look at a few of the reasons to think so. A flurry of scholarly papers has debated this issue in far more detail than we can delve into here, so this won’t be an exhaustive discussion. But by examining some of the potential troubles with “fake news” censorship, we’ll be better able to understand why biblical critical thinking presents a far more viable solution.
To start, we’d better backtrack to the root issue behind today’s “post-truth” society, which has become the backdrop for “fake news” and censorship discussions. Oxford Dictionaries made post-truth its Word of the Year in 2016, with a commentary explaining that post means “belonging to a time in which the specified concept has become unimportant or irrelevant.”2
So, has truth become irrelevant? Many people might say so. However, if you were to walk up to a person and steal their cat, they’d likely respond with objective truths such as “that’s not your feline” and “randomly stealing cats is wrong.” Truth isn’t so irrelevant after all. Even so, we live in a culture that talks as though truth is irrelevant, appeals to truth when convenient (like when protesting cat thievery), and yet has no ultimate foundation for truth.
How did we get to this pseudo-post-truth world? The answer lies in a long history of rejecting our Creator, whose character is the source of absolutes. This goes back to the Garden of Eden, when Adam and Eve believed that by rejecting God’s Word, they would “become like God” themselves (Genesis 3:1–7). Humans today fall for another incarnation of this lie by believing we have no Creator and therefore must be “like God” ourselves, responsible for determining our own truths.
The trouble is that, as the resources linked below unpack, trying to ground truth in any foundation besides our eternal, unchanging, self-existent Creator is logically impossible. This understanding helps explain some of the complications culture faces in defining and responding to “fake news.” For instance, one research paper on “fake news” declared that “classifying a piece of content as false requires a grounding of a universal truth, which can be a difficult endeavor that requires collective consensus.”3 But universal truth can’t just be a product of popular consensus. Masses of people can be wrong together, and all those wrong ideas cannot add up to objective reality.
Maybe humans can arrive at truth—and therefore have a basis for tackling fake news—by comparing careful observations of reality. In other words, maybe science can be our final truth authority. This answer might sound foolproof, but it doesn’t pan out for several reasons. For one thing, humans are nowhere near infallible. Even experts make mistakes, fall into faulty thinking patterns, and are biased by their presuppositions. Peer review, experimental replication, and other checks exist to help them catch such errors, but these checks aren’t immune to human fallibility either.4 Consequently, the research process involves flaws which are becoming more apparent than ever.
Humans also are not all-knowing. We don’t see the big picture of reality. Instead, we must inductively construct what we think we know from information our senses give us. Relying on our senses to inform us about whether our senses can reliably inform us is viciously circular. So, as this article explains, a purely material worldview—unlike a biblical worldview—cannot supply a foundation for knowing facts with certainty.5 And even if it did, the same facts can often be assembled or interpreted in different ways based on different assumptions, leading different researchers to different conclusions. Later, new facts may come to light which are consistent with still other conclusions.
These types of issues help explain why some “facts” considered scientific today (or conversely, considered “fake news” today) may not be viewed the same way tomorrow. None of this implies that science isn’t an essential tool; it’s just not an infallible authority.6 That’s why, in a recent essay advising against censorship of health-related misinformation, one bioethics researcher argued that the public should understand the limitations of human science and recognize that even the most careful research can be misinterpreted or misreported.7 The author noted,
Although the censorship on social media may seem an efficient and immediate solution to the problem of medical and scientific misinformation, it paradoxically introduces a risk of propagation of errors and manipulation. This is related to the fact that the exclusive authority to define what is “scientifically proven” . . . is attributed to either the social media providers or certain institutions, despite the possibility of mistakes on their side or potential abuse of their position to foster political, commercial or other interests.8,9
In turn, the author recommended teaching citizens to think critically about information they find online rather than always accepting reports about “what studies show” at face value.
A biblical worldview makes both critical thinking and scientific reasoning possible by providing a basis for truth.10 Without this foundation, defining “truth” quickly becomes a matter with major implications for society. After all, if truth is up to humans, then which humans ultimately determine what ideas, beliefs, explanations, or interpretations count as “true”? In the case of conflicting truth claims, whose truth should be considered “truest”?11 Does the final authority for truth rest in the loudest, strongest, or most powerful—whether an individual, a mob, or the State?12
Besides being illogical, placing the authority for truth on human shoulders clearly poses issues for a free society. We can see at least three of these issues regarding “fake news” censorship:
Because “fake news” has no single definition, rulers and corporations can easily define the term in ways that let them censor information that doesn’t fit their agendas. As this article describes, arrests and imprisonments for spreading “fake news” have already unfolded in multiple countries. Recently, Reporters Without Borders also documented “many cases of governments censoring websites on the pretext that they were spreading fake news”—which is to say, information that did not “toe the government line.”13
Reporters Without Borders noted that these instances of censorship “deprived people of reliable, independently reported information” in a time when reliable information mattered more than ever.14 Limited access to information leaves citizens less able to think critically about the one-sided story they are allowed to hear—even if that story turns out to be false. Ironically, then, “fake news” censorship can easily be used to foster the spread of false information.
Another major issue with “fake news” censorship is that it conditions citizens to be programmable yea-sayers rather than independent thinkers.15 As one philosophy professor observed,
If there were really a group of people with this form of universal expertise whom we could trust to determine on our behalf which news is real and which is fake, then we would have no need to rationally inquire into the facts ourselves or debate them amongst ourselves. Indeed, we would have no need to vote ourselves. We could leave all of these activities up to these god-like figures.16
Can’t safeguards be put in place to censor “fake news” without imposing risks to democracy? In answer, the same professor wrote,
For those who don’t object to ‘internet regulation’ so long as it is carried out by ‘good governments’, I have little to say except to remind them of the long history of even the best intentioned censorship regimes leading to harms their authors did not foresee or want, and more specifically of the many cases of people becoming victims of their own censorship legislation.17,18
The author is certainly not alone in pointing this out. For example, a paper from the journal Historical Research raised similar concerns in 2020:
Who decides on what constitutes restricted content? If firms are made liable for the published content, won’t they exercise private censorship in excess of what might be censored at a state level, for fear of the financial consequence? Where restrictions of this kind have historically been enacted—for example in wartime, where there is the need to censor certain types of information for security reasons—there have been unintended and counter-productive consequences.19
Yet another warning comes from the High-Level Expert Group that the European Commission convened to advise on responding to “fake news.” After commending wise practices like teaching critical thinking skills, the group stated,
By contrast, bad practices tend to present risks to freedom of expression and other fundamental rights. They include censorship and online surveillance and other misguided responses that can backfire substantially, and that can be used by purveyors of disinformation in an “us vs. them” narrative that can de-legitimize responses against disinformation and be counter-productive in the short and long run.20
Later, the report again emphasized,
Legal approaches amounting to well-intentioned censorship are neither justified nor efficient for disinformation. Right of defence and speed are incompatible. Public reaction to censorship will backfire, as ‘the establishment’ or ‘parties in power’ could be (mis-)perceived as manipulating the news to their advantage.21
These high-level observations that “fake news” censorship poses a counter-productive threat to free society make a strong case in themselves. The case only grows stronger the more we examine the practical problems with “fake news” detection. For instance, a recent paper in Trends in Cognitive Sciences cited studies highlighting multiple problems with leaving “fake news” detection to human fact-checkers.22,23 These include the impracticality of fact-checking given the sheer scale of information online, and the fact that, especially with the ambiguity and subjectivity surrounding definitions of “fake news,” it’s not uncommon for professional fact-checkers to disagree about which news is “fake.”24
To bypass some of these issues, many researchers have turned to software that automatically detects “fake news.” For instance, algorithms may flag articles as “fake news” based on cues like source popularity or linguistic features.25 (A minimal sketch after the quotation below illustrates this approach.) But a message isn’t false based only on how it’s communicated or where it originated; judging a claim by its source rather than its content is the genetic fallacy. Relying too heavily on these cues leads to fallacies in human thinking, and similar fallacy-prone thinking is now being programmed into machines.26 This doesn’t mean machines aren’t useful for efficiently flagging possible misinformation. It just shows that they’re no less fallible than the humans who programmed them. Technology can’t be humanity’s final authority for truth or a trustworthy arbiter of digital censorship. This is especially true given how opaque technological algorithms tend to be. As a recent paper noted,
One general problem is that decision-making is being delegated to a variety of algorithmic tools without clear oversight, regulation, or understanding of the mechanisms underlying the resulting decisions. . . . Delegating decision-making this way not only results in impenetrable algorithmic decision-making processes but also precipitates people’s gradual loss of control over their personal information and a related decline in human agency and autonomy.27
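To see concretely why such cue-based flagging is fallible, consider the following minimal sketch in Python. It is purely hypothetical: the flag_article function, its cue list, and its weights are invented for illustration and are not drawn from any real moderation system. The key point is that the score is computed entirely from how a message is written and where it came from, never from whether it is true.

```python
# A deliberately oversimplified, hypothetical cue-based "fake news"
# flagger. The cue list, weights, and threshold are invented for
# illustration only; no real moderation system works exactly this way.

SENSATIONAL_WORDS = {"shocking", "unbelievable", "miracle", "exposed"}

def flag_article(text: str, source_popularity: float) -> bool:
    """Return True if the article is flagged as suspected "fake news".

    source_popularity ranges from 0.0 (obscure outlet) to 1.0 (major
    outlet). Note that nothing here checks whether the claim is true.
    """
    words = [w.strip(".,!?").lower() for w in text.split()]
    sensational_hits = sum(w in SENSATIONAL_WORDS for w in words)
    exclamations = text.count("!")

    # The score is built entirely from surface cues: the tone of the
    # writing and the popularity of the source.
    score = 2 * sensational_hits + exclamations + 3 * (1 - source_popularity)
    return score > 4

# A true but excitedly worded report from an obscure outlet gets flagged...
print(flag_article("Shocking! Local dam failure confirmed!", 0.1))  # True

# ...while a false claim in calm prose from a popular outlet passes.
print(flag_article("Officials confirm the dam remains fully intact.", 0.9))  # False
```

Notice the result: an accurate but excited report from an obscure outlet gets flagged, while a calm falsehood from a popular outlet sails through. Real detection systems use far richer features, but the underlying limitation remains the same: surface signals are not truth.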
Considering the theoretical and practical problems of “fake news” censorship, it’s no wonder that a joint declaration by rapporteurs for international organizations, including the United Nations, stated in 2017 that, “General prohibitions on the dissemination of information based on vague and ambiguous ideas, including ‘false news’ or ‘non-objective information’, are incompatible with international standards for restrictions on freedom of expression, as set out in paragraph 1(a), and should be abolished.”28
In the end, manifold factors suggest that while misinformation is a problem, censorship is not the solution. Censorship of messages deemed “fake news” poses concerns for free societies, opens the door to abuses, limits critical thinking, fosters dependency by telling people what to think instead of how to think, is riddled with logistical problems, and—like a well-intentioned war on cobras—tends to backfire. “Fake news” censorship therefore seems impractical at best, draconian at worst. And it’s just one example of the issues that arise when a society, having rejected its Creator, places its final truth authority in fallible humans and the technology they develop.
If censorship isn’t the answer, what is? While a phenomenon as complex as “fake news” doesn’t have one catch-all solution, a recurring theme in discussions about this issue is citizens’ need for critical thinking skills. Critical thinking is a preventative, biblically founded solution which equips people to recognize and respond to any new misinformation themselves without sacrificing freedom or autonomy and without having to wait for a “fact-checker” to do it for them. Research is continuing to uncover ways this biblical solution is both practical and effective at combatting “fake news.” But more on that later.
Relative Thinking—A Life Without Moorings and Meaning
Inerrancy and the Test of Truth
Atheism: The Weakest of Worldviews
Biblical Faith is Not “Blind”—It’s Supported by Good Science
Morality and the Irrationality of an Evolutionary Worldview
Is Morality Determined by Its Popularity?
Atheism: An Irrational Worldview
The Real Reason for Creationists’ “Deep Distrust of the Media”
Behind the Scenes of the Mainstream Media
Four Ways the Media Can Misrepresent Reality
Is “Fake News” Fake News? Understanding Media Misinformation