Capping off a busy 2025 for Chinese cognitive warfare operations, China’s Eastern Theater Command created a homemade music video compiling shots from its recent large-scale military exercises around Taiwan. The video was set to Mandopop and ended with black-and-white drone shots of Taipei 101 overlaid with paparazzi camera clicks. It was captioned: “So close, so beautiful, ready to reach Taipei at any time.”
This bubblegum pop production of fighter-jet footage reflects a concerning trend in Chinese cognitive warfare. Filtered through social media, videos like this one distort and amplify the threat posed by Chinese forces with layers of disinformation and misunderstanding. Taiwan’s Ministry of National Defense highlighted 46 such pieces of disinformation spread during China’s most recent series of military exercises around Taiwan in December. These included one claim circulated by the state-run Global Times that China’s coast guard had quarantined ports around Taiwan, and another that the People’s Liberation Army had advanced within 9 kilometers of Taiwan’s shores.
By making Taiwan feel helpless in the face of Chinese aggression, cognitive warfare aims to “shape the future environment of the battlefield,” said Yi-Suo Tzeng (曾怡碩), a research fellow at the Institute for National Defense and Security Research. “The intensifying level of cognitive operations indicate that they [China] are speeding up their readiness for the potential fight.”
Part of what makes tools of cognitive warfare, like disinformation, so hard to tackle is that they can often seem benign. A report published by Taiwan’s National Security Bureau last week discussed how China employs third-party actors like the Wubianjie Group, a marketing company, to establish fake accounts on platforms like Threads and X. These accounts initially post lifestyle and entertainment content to gain Taiwanese followers and then pivot to political content. Eve Chiu (邱家宜), the chief executive of Taiwan FactCheck Center, said there has been a boom in suspicious accounts featuring AI doctors in recent months. These videos are attractive to Taiwanese social media users because they ostensibly provide useful information about diet, exercise and living a longer life. “I think it’s a very good way to access people,” said Chiu.
Another type of fake account that has surged in recent months, according to Chiu, features AI-generated videos that play into Taiwan’s national pride by praising its food, culture and healthcare system. These videos sometimes use clips that aren’t actually of Taiwan, and other times they feature attractive AI-generated women as anchors. Some of these videos tell fabricated stories, including of European royalty traveling to Taiwan to find superior medical treatment.
The Taiwan FactCheck Center has so far identified 56 such accounts aimed at Taiwanese audiences. It’s unclear who is behind these accounts. More than half of the IP addresses are based in Taiwan, and the others are based in the U.S., Hong Kong and elsewhere, but their true location could be obscured with a virtual private network. Chiu believes these accounts, which post videos like clockwork, are coordinated by teams working behind the scenes to establish footholds in Taiwanese society.
While their content might seem harmless, these accounts are drawing in thousands of Taiwanese subscribers. Chiu’s theory is that “if something happened,” as in, if China invaded, the people behind these accounts could replace the harmless content with messaging like, “the Taiwan government is going to surrender.” Subscribers to these accounts would see the disinformation first.
These dormant networks of political influence are aided by artificial intelligence. China’s lack of understanding of Taiwan was long a significant obstacle to making fake accounts seem genuine. Despite shared history and language, Chinese disinformation often felt “alien” to Taiwanese, making mistakes as simple as using simplified rather than traditional Chinese characters. But AI has done much to smooth this over.
“I don’t think we can rule out that China has a better understanding of our demographics than we would like it to have,” said Chihhao Yu (游知澔), the co-director of the Taiwan Information Environment Research Center, also known as IORG. Yu recalled a paper published by researchers at Xiamen University last April about the creation of millions of AI virtual avatars based on the activity data from 10 million users on X and Xiaohongshu. The worry, articulated last year by Taiwanese media, is that this technology could be used to create AI-powered simulations of Taiwanese society to predict how Taiwan would react to various inputs. Xiamen University is also home to the Taiwan Research Institute, which performs computer simulations toward the “strategic objective of complete national unification.” To facilitate more targeted propaganda and disinformation attacks, Chinese IT companies like Golaxy have been commissioned by the PLA to build databases profiling prominent figures in Taiwanese politics.
Billion Lee (李比鄰), director of the fact-checking platform Cofacts, noted China’s use of AI to make content that sounds more genuinely Taiwanese. Taiwan’s National Security Bureau report identified Magic Data and iFlytek as two IT companies that Beijing has commissioned to covertly gather voice recordings in Mandarin, Taiwanese Hokkien and Hakka. “We do not rule out the possibility that this system can be used to clone voices mimicking Taiwanese accents, thereby enhancing the authenticity of AI-generated video content,” the report said.
China has also improved its ability to create sophisticated deepfakes. One that circulated online recently showed Democratic Progressive Party (DPP) politician Wang Shih-chien (王世堅) traveling to China. Tzeng of the Institute for National Defense and Security Research was impressed with how closely the deepfake mirrored Wang’s personality and style. He interpreted the video as a warning from China of its evolving cognitive warfare capabilities.
Deepfakes could be leveraged during the upcoming election cycles or even during an invasion. For example, China could spread a targeted deepfake of Taiwan’s president saying something along the lines of, “we are ready to negotiate with Beijing” or “no one is coming to help us,” perhaps through the fabricated social media channels mentioned earlier. Tzeng worries that if China then cut Taiwan’s electricity and disconnected it from the internet, it would be “almost impossible” to mitigate the harm of the disinformation.
Even for Taiwanese who are not normally impacted by Chinese disinformation or propaganda messaging, wartime conditions might change this. “In the aftermath of any kinetic attack, if you see smoke, see fire, and then you see this video clip, the impact will be a bit different,” said Tzeng.