In 2016, accounting firm PricewaterhouseCoopers (PwC) predicted that artificial intelligence “could contribute up to $15.7 trillion to the global economy” between 2017 and 2030. But the way those gains are projected to divide might be the most compelling thing about them. “The greatest economic gains from AI will be in China (26% boost to GDP in 2030) and North America (14.5% boost),” PwC’s report outlined. That’s equivalent to $10.7 trillion, meaning nearly 70% of the projected global economic impact of AI would be concentrated in North America and China.
Although the North American category in those predictions also includes Canada, itself a major AI player, much of the anticipated gain will be concentrated in the hands of two world superpowers engaged in a major competition: China and the U.S. This crystallizes a problem the world is already starting to come up against: How can AI be regulated so that it does not become destructive to humanity when the two most powerful players in the game are competing directly against each other everywhere else?
The most dangerous element within this paradigm is what happens if or when the most powerful kind of AI, known as “artificial general intelligence” or AGI, is developed. AGI is, according to prominent AI investor Ian Hogarth, a “god-like … superintelligent computer that learns and develops autonomously [emphasis added], that understands its environment without the need for supervision and that can transform the world around it.” In other words, it is AI operating beyond the abilities of systems like ChatGPT, which rely on large language models trained on existing human-generated text to generate responses.
Working on the assumption that AGI is within reach, and that its power could be extremely destructive if its values are not aligned with humanity’s, Hogarth set out in an interview this week the danger of having geopolitical competition mapped onto AGI’s development.
“The U.S. and China clearly are locked in a battle to compete economically. They’re certainly locked in a battle to compete militarily — look at hypersonic missiles or something like that. But the final level, which is like ‘are they locked in a battle to kind of ensure that the human species thrives?’ I think they should be on the same page about that. So you’ve got this really challenging problem where you’ve got two levels of competition and one level where you desperately need cooperation. And that’s why I think the really missing piece here is international leadership [and] coordination around how we approach these most powerful AI systems that could become superintelligence,” he said.
Hogarth’s point is that humanity can’t afford a great-power battle over AGI. But on that, the mood music doesn’t sound great.
The recent G7 meeting — comprising Canada, France, Germany, Italy, Japan, the United Kingdom and the United States, as well as the European Union — placed headline plans for “guardrails” on AI directly alongside agreements on how to “de-risk” from China. At the same time, publicly at least, there appeared to be nothing on cooperation with China over AI. The same approach appears in a report from the Washington-based think tank Center for Strategic and International Studies (CSIS), “Advancing Cooperative AI Governance at the 2023 G7 Summit,” which likewise made no mention of China.
Thus, some of the highest-profile, state-level discussions on AI cooperation are doing the opposite of what they sound like they’re doing. They’re talking about cooperation minus what is perhaps the world’s second-largest AI player. That’s the equivalent of negotiating a truce and forgetting to talk to the other side.
And, in fact, this is not just talk. The U.S.’s dramatic export sanctions on advanced semiconductors heading for China were to a large degree targeted at China’s AI industries, and they “mean China is not likely in a position to race ahead of [high-profile U.K.-based AI project] DeepMind or [U.S.-based] OpenAI.” That does not exactly feel like the opening move in a wave of cooperation. Meanwhile, China’s presence is still being used as a justification for U.S. and European researchers to push ahead, even when they harbor doubts about the risks. According to Hogarth, “they often worry that if they don’t stay ahead, China might build the first AGI and that it could be misaligned with Western values.”
The worry, then, is that certain governments may take steps to regulate their AI industries internally, or alongside their strategic partners, but that their unwillingness to coordinate such measures with China will create an external competitive pressure powerful enough to override whatever mechanisms they put in place.
Image: Photo by Google DeepMind on Unsplash