Online discussion around what the emergence of AI chatbots means for Taiwan has so far tended to focus on the details of what they are unintentionally “willing” to say. We have heard much about the incidence of unflagged bias, including on this site. That focus, however, risks missing another key point: The chatbots’ ability to generate human-like text could aggravate the existing problem with deliberate disinformation in Taiwan, by making the process by which it spreads easier, cheaper and possibly even more effective.
The situation into which commercially available AI chatbots arrive is this: Taiwan is already the world’s biggest target for foreign disinformation, according to the V-Dem Institute. That ranking is substantially the product of Chinese government influence efforts, which can involve: propaganda departments and individual users working together to spread coordinated disinformation through social media; indirect Chinese government investment in Taiwanese media; and recruiting those within Taiwan who are already pro-China to disseminate messaging both offline and through social media.
Replace human-generated content with that generated by AI chatbots and the proposition is simple: The bots look perfectly positioned to generate the disinformation to be poured through the existing networks, except … they’re just better at it.
“I’m afraid the unprecedented/mind-blowing capability of AI chatbots to generate miscellaneous contents by a simple sentence of command will unavoidably make the situation worse,” says Yachi Chiang (江雅綺), president of Taiwan’s Law and Technology Association and an associate professor at National Taiwan Ocean University, who specializes in intellectual property law and cyber law. “The potential of new AI bot[s] to mimic human-written content not only will make the production of disinformation easier [and] cheaper, but the generated contents will look more convincing to the target audience,” she adds in response to questions by email.
How exactly could it work?
The key way this process could operate in practice has been set out generally by Gary Marcus, author of Rebooting AI, but it can be applied specifically to Taiwan.
Currently, disinformation content often starts out in so-called content farms, such as “Mission,” a site that in April 2019 set a record as the website most shared by Facebook users in Taiwan in a single week. Mission and sites like it create masses of low-quality, often inaccurate content, written specifically to satisfy algorithms and rank highly in search engine results pages. In doing so, they make Chinese government narratives about Taiwan highly visible, with inflammatory and misleading content such as “Oh my god! DPP! Taiwanese are dying because of you! President Tsai spent $102 million USD building 4,500 units of social housing in Paraguay, bringing Taiwanese to tears!” as cited by The Reporter. In a world where AI can automate these sites, and where even individual bad actors have access to commercially available chatbots, there can quite simply be many more of them, generating more content, more quickly. And that gives disinformation more chances to spread.
This potential proliferation “could be an imminent, existential threat” to Taiwan, according to Chiang. So what can be done about it?
Ideas floating around so far seemingly focus on how to avoid commercially available chatbots being repurposed toward disinformation. Axios, for instance, points out that “NewsGuard last week introduced a new tool [that] assembles data on the most authoritative sources of information and the most significant top false narratives spreading online [so] AI providers can then use the data to better train their algorithms to elevate quality news sources and avoid false narratives.” In short, more reliable inputs could help generate more reliable outputs.
Along the same lines, Taiwan has announced that it is building its own Chinese-language AI chatbot, which its government says is to prevent systems trained to provide “biased” information from dominating discourse. This is a response to Chinese companies building their own chatbots, which Taiwan’s government says could have their algorithms deliberately adjusted to spread pro-Chinese government ideas. Chiang explains that the advantage of this plan is that it could “counter China’s dominance in this field,” avoiding a ‘cleaned AI’ “in which users can only get contents that pass the [Chinese government] censorship mechanism.”
However, there remain serious issues here. First, the Taiwanese government (like any other) is not a neutral actor, and there will always be a problem in deciding which “bias” is acceptable and which isn’t. Second, attempts so far to control chatbots’ outputs have had limited success. And third, Taiwan’s plan has been initiated by the National Science Council, which has limited experience in delivering advanced software engineering projects to millions of end users, according to T.H. Schee (徐子涵), an expert on Internet and public policy in Taiwan, responding by email. “To my best knowledge there has been [no example of this kind of project being delivered] in the past 20 years,” Schee says. “Even just offering an [Application Programming Interface] you need a solid team which could shell out millions of USD per season [with] no apparent path to financial return unless you are an infrastructure service provider (like MSFT [Microsoft]).”
And all of that leaves the main issue untouched. Readers will note that neither of the two approaches listed above prevents bad actors from creating and refining their own AI chatbots, which can generate all the content-farm disinformation they want.
On those terms, the direction looks clear. Taiwan is already the world’s biggest target for foreign disinformation, and it is very hard not to see the existing bombardment escalating.
Image: Illustration by Carlos PX