Transcript

My name is Kwan Yee Ng. I work at Concordia AI, a Beijing-based social enterprise focused on AI safety and governance.


My talk today is based on Concordia AI's State of AI Safety in China report, which we published earlier this year. The report covers China's AI policy, technical research, positions on the international stage, expert and public views on AI safety, and industry practices, all as they relate to AI safety.


Because I don't have much time in this talk, I'll focus on China's AI policy. For analytical purposes, we can understand China's AI policy in terms of different layers, ranging from harder measures like binding regulations to softer measures like voluntary standards, and I'll go through these in the presentation.


In response to top-level directives, the national administrative state has introduced binding regulations on specific types of AI: recommendation algorithms, deep synthesis (or deepfakes), and generative AI services. These are mostly led by the Cyberspace Administration of China. I don't have time to go into detail about each of these regulations, but my key takeaway is that while the regulatory focus so far has been on social stability and data security, some of the tools introduced by these regulations, such as the Algorithm Registry and safety self-assessments, could be oriented towards more frontier types of risks if regulators become concerned about them in the future.


For example, developers are currently required to register their models with the Algorithm Registry, and as part of that process they have to complete a safety self-assessment. If regulators become concerned about, for example, the ability of AI to assist in the development of biological weapons, they could plausibly ask AI developers to incorporate such assessments into the self-assessments they submit to the Algorithm Registry.


Beyond these three regulations, there are also indications of more national-level developments on the horizon, and I'll flag two here. In June last year, the Chinese government announced plans to develop a national AI law, and Chinese experts have already started to draft their own suggested versions of the law.


One is led by the Chinese Academy of Social Sciences, or CASS, and the other is led by Zhang Linghan, a member of the UN High-Level Advisory Body on AI and a professor at the China University of Political Science and Law, or CUPL. Both expert drafts contain several provisions relevant to frontier safety; both flag special requirements for AI systems above a certain size and list exemptions for open-source AI.


But it's unclear how quickly the national law will progress. Recently, a senior official from the National People's Congress, China's equivalent of a national legislature, suggested in a speech a preference for flexible application of existing regulations and local policy trials over immediate national legislation, which could indicate that policymakers will take a more incremental approach here.


Separately, this July China convened the Third Plenum, one of the most important meetings on China's political calendar. The meeting resolution proposes to institute oversight systems to ensure the safety of artificial intelligence, and the official study materials explaining the resolution, co-edited by President Xi and others on the Politburo Standing Committee, emphasize forward-looking prevention and minimizing risks as much as possible.


AI safety here is also classified as a public safety and national security issue alongside other issues including cybersecurity, biological security, and natural disasters. The section is also supportive of global AI governance efforts and positively references the EU AI legislation, the Safety Summits, and some American AI safety standards.


For context, plenary meetings like the Third Plenum are held once or twice a year and set the overarching strategic direction for the Chinese government, so the resulting documents are among the most authoritative indications of leadership thinking. It remains to be seen how this directive will eventually be implemented, but it's one of the strongest public signals we've seen that the top echelons of the Chinese system are concerned about AI safety.


Moving on: apart from binding regulations, China has also begun formulating a system of voluntary standards to guide AI development. There are standards existing or in development for things like watermarks and other aspects of AI development, but for this talk I'll focus on a forthcoming standard for safety self-assessments, which could be relevant for frontier safety discussions.


For context, as I mentioned earlier, the binding regulations require developers to register their generative AI services with the Algorithm Registry and undergo self-assessment. Currently, these self-assessment requirements largely follow a technical document issued this February by TC260, one of the official technology standardization committees. The document describes different types of safety risks, split into five main categories, as shown on the slide.


The February technical document also suggests that providers should pay close attention to long-term risks, including deceptiveness, self-replication, self-modification, and misuse in the cyber, biological weapon, or chemical weapon domains. There's currently a lack of concrete testing requirements for these risks, but the document is in the process of being developed into a national voluntary standard, which is more authoritative. While the standard's first draft in May didn't specify long-term risks, the same committee that published the February technical document, TC260, recently published the AI Safety Governance Framework, which discusses frontier AI risks extensively.


You can see in the table, marked with red squares, some examples of the risks they classify that are related to frontier safety, and here's a quote from the framework about how they discuss risks of AI becoming uncontrollable in the future. While it remains to be seen how these frontier safety considerations will be incorporated into final standards, this could be some indication of how standards bodies in China are thinking about frontier safety risks.


Beyond the central level and standards bodies, local governments have also been rolling out policies promoting, and in some cases regulating, frontier AI such as AGI and large models. China has a long tradition of trialing policies at the local level before instituting them nationally if they prove successful, so these local-level policies could be some indication of forthcoming national-level policies.


We've identified six provincial jurisdictions so far that have released policies relating to AGI or large models, including China's three major AI hubs: Beijing, Shanghai, and Guangdong. The policies are primarily aimed at promoting development, but they also include provisions on topics like international cooperation, ethics, and safety testing and evaluation.


I won't go into detail about each of them, but this comparison table indicates some of the features of these policies. Just wrapping up, this was my brief overview of China's AI policy and analysis of recent developments with the national law, the Third Plenum, and the AI safety assessment requirements.


As I mentioned at the beginning of the presentation, this is all based on the State of AI Safety in China report we published earlier this year, so please check it out online if you're interested. I'd also be happy to talk about this in office hours, or come find me anytime. Thank you.