The document aims to mitigate risks and support innovation in the field of AI.
China has issued a trial guideline on ethics reviews and services for artificial intelligence (AI) technology, the Ministry of Industry and Information Technology said last Friday.
The guideline, jointly issued by 10 government departments, including the ministry, calls for efforts to support technological innovation in AI ethics review and to strengthen the use of technical measures to prevent AI-related ethical risks.
The rules set out key review criteria such as whether a technology promotes social well-being and prioritizes the protection of life and health. Reviews also examine risks including algorithmic discrimination and assess whether systems are controllable, reliable, transparent and explainable, with accountability traceable and privacy adequately protected.
The document also details issues that should be addressed in the review, such as the selection criteria for training data, the soundness of algorithm, model and system design, and measures to prevent bias, discrimination and algorithmic exploitation.
The rules require universities, research institutes, healthcare providers and companies to set up ethics-compliance systems for AI research and development that could pose risks to human dignity, public order, health, the environment and sustainable development.
The guideline also calls for promoting the orderly open-sourcing of high-quality datasets for AI ethics review, strengthening the development of general risk management, assessment and auditing tools, and exploring risk assessment based on application scenarios.
At the core is a mechanism that places primary responsibility on institutions to conduct internal reviews, supported by external service providers and a government-led expert reassessment process.
Universities, research institutes, healthcare providers and companies engaged in AI development must establish ethics committees of at least five members, including experts in AI technology, applications, ethics and law.
A quasi-administrative approval process requires project leads to obtain ethics clearance from internal committees or designated service centers before proceeding. For projects on the high-risk list, organizations must seek an additional expert reassessment after passing the initial review.
Applicants are required to submit detailed materials including project plans outlining algorithms, data sources and application scenarios, along with ethical risk assessments, contingency plans and a letter of assurance.
The criteria aim to address concerns about job displacement and potential infringements of personal rights through the “algorithmic exploitation” of workers, as AI adoption accelerates across sectors in China.