Adjusted for GDP, Taiwan’s nascent AI industry had the fifth most AI-related patents accepted by the U.S. Patent and Trademark Office between 2008 and 2018, behind only Israel, the U.S., South Korea and Ireland. The raw numbers (1,097 patents accepted against the U.S.’s 87,244) sound less impressive than the ranking, but they are one indication that the industry is well placed to become an important part of Taiwan’s economy over the next decade.
There is cause for concern amid the hype, though. That growth has come about in an environment with almost no AI-specific regulation, and only very recently has the prospect of regulation begun to look likely. Taiwan’s National Science and Technology Council chief, Wu Tsung-tsong (吳政忠), told Taiwan’s Central News Agency earlier this month that a law regulating AI practices is being drafted and should be ready for submission to Taiwan’s Legislative Yuan in September. That leaves a large regulatory gap, and there are good reasons to believe it will not be easy to fill.
The broad problem is that many of the risks created by AI do not sit within existing regulatory agencies’ jurisdictions. “Technological unemployment, human-machine relationships, biased algorithms and existential risks from future super-intelligence” are all new territory, according to Gary Marchant, law professor at Arizona State University’s Center for Law, Science and Innovation. That leaves Taiwan’s previous approach of relying on existing laws and regulatory frameworks looking inadequate. In the 2022 Taiwan AI Readiness Report, the section on “enhanc[ing AI] security, safety and risk management” suggests that existing laws such as “the Personal Data Protection Act [and] the Cyber Security Management Act” have “helped lay a good foundation and legal environment for creating trustworthy AI.” The only explicitly new institution for regulating AI it lists is the Smart Medical Device Office established by the Food and Drug Administration of the Ministry of Health and Welfare. Much of the rest of the document focuses on accelerating growth.
The issue, now that this approach is changing, is that attempts at regulation are not guaranteed to be effective. According to Minister Without Portfolio and Cabinet Spokesperson Lo Ping-cheng (羅秉成) in March, current discussions within Taiwan’s government are focused on making sure that AI’s development matches society’s demands and protects people’s rights and security. “Legal issues about personal data and AI are the two focuses of our current discussions. We must think about and resolve these issues to ensure that digital policy laws are up to date and AI development in the country is better protected and applied,” Lo said in a speech translated by Taiwan News. However, there are at least three reasons to doubt that those goals will be achieved.
The first reason is the same for Taiwan as anywhere else. Regulating rapidly emerging AI is inherently difficult. As John Villasenor, Professor of Electrical Engineering, Law, Public Policy and Management at the University of California, Los Angeles, has explained, regulation can quickly become out of date. It can hand advantages to competitors. And it can have unintended consequences.
The second reason is that Taiwan sits outside most international institutions and systems. That means that when, for example, all the member states of the U.N. Educational, Scientific and Cultural Organization (UNESCO) adopted “an historic agreement that defines the common values and principles needed to ensure the healthy development of AI,” Taiwan was not on the list. In other areas, such as environmental policy, Taiwan usually follows regulatory shifts from outside, but this exclusion has been used to explain the slow pace at which change happens.
Finally, there are the issues that have arisen with Big Data, perhaps the closest analogue to regulating AI and one with many areas of overlap. In recent years Taiwan has been beset by data leaks. The most high-profile of these have come from government, but some major businesses have also had incidents of their own. The government’s response to these leaks is widely held to have been lax: denials, insignificant fines, voluntary regulation. That does not bode well for AI, if it is any indication of how things will play out there.
Of course, there are more optimistic signs. When we visited the Taiwan 2023 AI event in Taipei last week, a number of industry representatives told us that they hoped for specific legislation around AI because it would lend legitimacy when pitching to other countries. That atmosphere, at least not obstructive in principle, would presumably help in creating useful regulation, if it is replicated across the top end of the industry. Anecdotally, too, Monique Yang (楊明慧) of AetherAI (雲象科技), a medical-imaging AI company, was among many to praise the government’s approach to working with the industry so far, saying that in AetherAI’s interactions the government had struck a good balance between caution and helpfulness. That, again, sounds promising.
Obviously, then, it’s a case of “time will tell.” We’ll start to get answers in September.
Photo by Google DeepMind on Unsplash