Much of the coverage of academic disinformation studies and fact-checking bodies in Taiwan reflects what the academic Eve Sedgwick called a “paranoid position.” That is, it promotes the anticipation and relentless exposure of a bad thing, in the form of potential fake news, rather than the discovery of good things.
This makes some sense. Taiwan is subject to more disinformation targeting than anywhere else in the world, according to The V-Dem Institute, so why shouldn’t we all be fact-checkers now? The tacit message is, be careful who you trust, unleash the multilayered deep learning approaches to bot detection and stand back.
But while paranoid positions can be useful in some circumstances, Sedgwick pointed out that they can be limiting on their own. And indeed, this is the case in Taiwan. The more boosterish coverage of fake news tackling finds little room to concede that the act of exposing the bad thing is not on its own a panacea. And there is even less room to concede that an atmosphere of paranoid pursuit can have costs as well as benefits — not least internal polarization and deflection from other issues.
In this context, it’s useful to consider that other approaches to improving Taiwan’s political and informational circumstances do exist, and the work they do is just as serious as those seeking to eliminate potential fake news.
One example is the Taiwan Reach-Out Association for Democracy (T-ROAD). T-ROAD is a civil society organization that dedicates itself to civic engagement, social innovation and what it calls “democratic deepening.” Practically, it designs facilitation techniques to enable dialogue and collaboration among people of different backgrounds within Taiwan, offering things like digital democracy forums and participatory budgeting sessions.
“In our forums that were designed with deliberative democracy principles, we prepared the issue manuals that cover as many perspectives as possible in order to allow participants to exchange ideas on serious issues, and even to help participants clarif[y] their own views during the process,” Lin Hsin-i (林心乙), director of the social advocacy division at T-ROAD, told Domino Theory by email.
According to Lin, when T-ROAD participants expand on evidence-based discussions, clarify problems and discuss solutions, they are “able to see the differences in each other’s values and different levels of trust toward society.” What’s especially interesting about this is that Lin conceded that “False narratives are not easy to define,” especially if they are feedback from the participants themselves, “and we have no way of knowing if they are made up by the participants.”
The takeaway here is that the existence of fake news doesn’t have to make for an all-encompassing paranoia. Troubling as it might be, you don’t have to stop all discussion until you weed out every false or disproportionate statement — and this comes with other benefits.
“In my opinion, the practices of deliberative democracy cannot be used to distinguish false narratives,” Lin expanded. “What it can do is to create a trusting and safe environment where people are willing to express themselves as honest[ly] as possible, to listen to others as much as possible, to correct their biases against others in different positions, or to be more confident in their own power to express.”
In other words, when the aim is not only to cleanse debate of anything anyone might call false, there is room to create a more fundamental connection between people so that they might understand their different experiences and values — allowing for the sense of proportion that fake news looks to distort.
This perhaps sounds unfashionably soft in geopolitical terms. In circles where the famous Henry Kissinger quote “Even a paranoid can have enemies” can be taken as a serious philosophical dictum, the idea of tackling disinformation with trust-building exercises might look like bringing a peace flag to a firing range. But research has linked trust with improved economic growth, democracy, tolerance, charity, community, health and happiness. And there are also good reasons to think this approach is simply more realistic than eliminating every potential falsehood.
“I think ‘truth’ has always been a contested concept in the history, even in the area of scientific facts (let’s put the ‘narrative’ aside), what we know/define as ‘facts’ also evolved through different times,” said Yachi Chiang (江雅綺), president of Taiwan’s Law and Technology Association, when asked about defining false narratives via email. “So if you ask me directly about whether we should trust some organization (be it governmental o[rganization] or NGO) to do the thinking job for us (what is misleading, what is not), my direct gut feeling is no.”
To be sure, Chiang added that this “does not mean these organizations can’t do any help to ease the pain of rampant disinformation in [the] digital age,” but the implications are clear. Definitive takes on which narratives are proportionate don’t exist, so thinking can’t simply be outsourced. On these terms, isn’t negotiating through the inevitable disagreements in an atmosphere of trust healthier than doing it in an atmosphere of paranoia?
This isn’t to make the case against fact-checking, or eliminating the most egregious fake news. Rather it is to suggest that other mitigation strategies should and do usefully coexist beside those efforts. Indeed, this coexistence is actually played out within individual organizations in Taiwan.
When Filip Noubel, managing editor of Global Voices, described his organization’s fact-checking efforts via email, his answer incorporated both approaches. “[A]n essential element is to open the data and research as we do and to present it to the public and to the media so everyone can verify the analysis of information items and in the process become more literate in narrative detection. To sum up, it must be a collective, transparent and participatory process [emphasis added],” he said.
Another advantage to incorporating a more participatory approach is that it avoids over-empowering disinformation. One of Sedgwick’s observations is that paranoid positions tend to lead to an attitude of “Anything you can do (to me) I can do first — to myself,” and there are parallels with this in the way discourse around disinformation can, conversely, feed into disinformation’s aims.
Disinformation contributes to an environment where political debate becomes about throwing competing accusations of fake news at one another, where no issue can be discussed directly in case the debate has secretly been usurped, and where the other side is seen as an enemy. But excessive focus on the existence of disinformation itself can double down on these effects — so that like hammers that only see nails we reach a point where anything we disagree with is seen as “fake news.” It is telling, for instance, that the customary paragraph about how “empirically it is evident that Beijing’s political information warfare campaign against Taiwan has largely failed” tends to appear at the bottom of news coverage rather than in the headline.
Ultimately, then, inviting people into meaningful discussions about politics and information and values is win-win — and it’s by no means a “soft” option.
When Lin described three main areas of agreement from a 2021 T-ROAD forum, two focused on open discussion in Taiwan, but the third was, notably, called “Seeing China.” Lin explained that participants had all ultimately agreed that, “We need to see China that does not give up its aggression against Taiwan’s sovereignty and at the same time uses culture of expansion as a strategy. We need to know more about China’s current situation, intentions, and strategies, and recognize Taiwan’s experience in pragmatic engagement with China in the past.”
These points couldn’t be characterized as ignoring the threat of China to Taiwan or disinformation campaigns. But they’re built on broad consent, and the discussions they’ve emerged from build valuable trust. Fact-checking alone won’t do that. Taiwan’s society can’t survive off only rooting out the bad things.