How China’s New AI Rules Could Affect U.S. Companies


Soon after China’s artificial intelligence rules came into effect last month, a series of new AI chatbots began trickling onto the market, with government approval. The rules have already been watered down from what was initially proposed, and so far, China hasn’t enforced them as strictly as it could, experts say. China’s regulatory approach will likely have huge implications for the technological competition between the country and its AI superpower rival, the U.S.

The Cyberspace Administration of China’s (CAC) Generative AI Measures, which came into effect on Aug. 15, are some of the strictest in the world. They state that generative AI services should not generate content “inciting subversion of national sovereignty or the overturn of the socialist system,” or “advocating terrorism or extremism, promoting ethnic hatred and ethnic discrimination, violence and obscenity, as well as fake and harmful information.” Preventing AI chatbots from spewing out unwanted or even toxic content has been a challenge for AI developers around the world. If China’s new regulations were maximally enforced, Chinese AI developers could find it difficult to comply, some analysts say.

Chinese regulators are aware of this issue, and have responded by defanging some of the regulations and taking a lax enforcement approach in an effort to strike a balance between controlling the flow of politically sensitive information and promoting Chinese AI development, experts say. How this balance is struck will not only impact Chinese citizens’ political freedoms and the Chinese AI industry’s success, but will also likely influence U.S. lawmakers’ thinking about AI policy in the face of a brewing race for AI dominance.

Regulatory relaxation

At the end of August, the CAC approved the release of eight AI chatbots, including Baidu’s Ernie Bot and ByteDance’s Doubao. 

The final version of the regulations, published in July, was less strict than the draft regulations published for comment in April. The CAC made three key changes, says Matt Sheehan, a fellow at The Carnegie Endowment for International Peace.

First, the scope was narrowed from all uses of generative AI to just public-facing uses, meaning internal uses are less strictly regulated. Second, the language was softened in multiple places. For example, “Be able to ensure the truth, accuracy, objectivity, and diversity of the data,” was changed to “Employ effective measures to increase the quality of training data, and increase the truth, accuracy, objectivity, and diversity of training data.” Third, the new regulations inserted language encouraging the development of generative AI, whereas before the regulations were solely punitive.

The CAC made the regulations more permissive, partly in reaction to the poor health of the Chinese economy, according to Sheehan, whose research focuses on China’s AI ecosystem. Additionally, a public debate—including think tank and academic researchers, government advisors, and industry—concluded that the rules were too harsh and could stifle innovation.

Flexible enforcement

Once regulations are finalized, their enforcement is at the discretion of authorities, and is often more arbitrary and less consistent than it is in the West, according to Sihao Huang, a researcher at the University of Oxford who spent the past year studying AI governance in Beijing.

“When we look at rules for recommendation algorithms that were published before, or deep synthesis, or the CAC cybersecurity laws—they are enforced when the CAC wants to enforce them,” says Huang. “Companies are on a pretty long leash, they can develop these systems very ambitiously, but they just need to be conscious that if the hammer were to come down upon them, there are rules that the government can draw on.”

Huang says that whether the CAC enforces the regulation often depends on “whether the company is in good graces with the governments or if they have the right connections.” Tech companies will also often try to expose vulnerabilities in each other’s products and services in order to incite government action against their competitors, and public pressure can force the CAC to enforce the regulations, he says.

“China is much more willing to put something out there, and then kind of figure it out as they go along,” says Sheehan. “In China, the companies do not believe that if they challenge the CAC on the constitutionality of this provision they’re gonna win in court. They have to figure out how to work with it or work around it. They don’t have that same safety net of the courts and independent judges.”

China hawks warn that the U.S. risks falling behind China in the competition to develop increasingly powerful AI systems, and that U.S. regulation might allow China to catch up. 

Huang disagrees, arguing that Chinese AI systems are already behind their U.S. equivalents, and that the strict Chinese regulation compounds this disadvantage. “When you actually use Chinese AI systems… their capabilities are significantly watered down, because they’re just leaning on the safer side,” he says. The poor performance is a result of content filters that block the system from answering any prompts remotely related to politics, and “very aggressive fine tuning,” he says.

“Chinese companies are going to have way higher compliance burdens than American companies,” agrees Sheehan.

Jordan Schneider, an adjunct fellow at the Center for a New American Security, a military affairs think tank, says that the current crop of Chinese chatbots is behind its U.S. competitors in terms of sophistication and capabilities. “These apps are maybe GPT-3 level,” says Schneider. But Schneider points out that GPT-3, a language model developed by OpenAI, is only around two years old. “That’s not like a huge gap,” he says. (OpenAI’s most advanced publicly available AI system is GPT-4.)

Schneider also emphasizes that it has proved easier to control the outputs of chatbots than developers and policymakers—including those in China—originally feared. Aside from the concerning flaws revealed when Microsoft launched its Bing AI chatbot, there haven’t been many other problems with U.S. companies’ AI chatbots going rogue, he says. “American models are, by and large, not racist. Jailbreaking is very difficult and it’s patched very quickly. These companies have broadly been able to figure out how to make their models conform to what is appropriate discourse in a Western context. That strikes me as broadly a similar challenge [to what] these Chinese firms have [faced].” (Language models still exhibit issues such as hallucinations—a term for inventing false information).

Because of this, Schneider argues that the tradeoff between political stability and promoting development is overstated. In the future, he says, Chinese tech firms will continue to argue successfully for regulatory leniency if they can make the case that they are falling behind. Still, Schneider says that, even for the hawks, some regulation will be required to prevent a public backlash against AI if the technology rapidly starts to negatively affect people’s day-to-day lives, such as through the automation of jobs.

Sheehan agrees. “We should not count on these regulations absolutely smothering China’s AI ecosystem. We should look at them and realize that Chinese companies have high regulatory burdens, and they’re still probably going to be competitive,” says Sheehan. “To me, that’s a signal that we could also impose some regulatory burdens on our companies, and we could still be competitive.”

(Source: Time)