
Key Points of the Measures for the Administration of AIGC

JunHe Legal Commentary


On July 13, 2023, the Cyberspace Administration of China (“CAC”) and six other departments jointly promulgated the Interim Measures for the Administration of Generative Artificial Intelligence Services (“Measures”).

The Measures are landmark legislation in China regarding Artificial Intelligence Generated Content (“AIGC”), and have been revised based on the draft Measures (“Draft”) issued on April 11, 2023 for public consultation. The Measures, together with the Measures on the Administration of Algorithmic Recommendation of Internet Information Services (“Algorithm Recommendation Measures”), and the Measures on the Administration of Deep Synthesis of Internet Information Services (“Deep Synthesis Measures”), constitute important regulations in the field of artificial intelligence and algorithms.

This article outlines the key regulatory requirements for AIGC as set forth in the Measures, and provides a comparison with some provisions outlined in the Draft. 


I

The Establishment of Regulatory Frameworks and Principles


1. The Establishment of Regulatory Principles of Inclusive Encouragement and Prudential Supervision


At its meeting on April 28, 2023, the Political Bureau of the Central Committee of the Communist Party of China (“CPC”) emphasized that “great importance should be attached to the development of general artificial intelligence and the enhancement of the innovation capacity, as well as the prevention of potential risks”1. This concept is also reflected in the Measures. Chapter 1 (General Provisions) of the Measures states that the State places equal emphasis on development and security, combines the promotion of innovation with governance in accordance with the law, encourages the innovative development of AIGC, and applies inclusive, prudent, and classified and graded regulation to AIGC services. This approach reflects the government’s open and inclusive attitude towards the development of AIGC. Article 16 of the Measures requires the relevant competent authorities to improve scientific regulatory approaches compatible with innovation and development, and to formulate corresponding classified and graded regulatory rules or guidelines according to the characteristics of AIGC technology and its service applications in the relevant industries and fields.

The Measures introduce new incentives and encourage support across various fields. For instance, they promote collaboration between industrial organizations, enterprises, and educational and scientific research institutions to apply and develop AIGC technologies. The Measures also encourage independent innovation in fundamental AIGC technologies, such as algorithms, frameworks, chips, and supporting software platforms. Additionally, they advocate for equal and mutually beneficial international communication and cooperation, including participation in the formulation of international rules related to AIGC.

Moreover, the Measures establish principles to advance the construction of AIGC infrastructure and public training data resource platforms, and to facilitate the collaborative sharing of computing resources so as to improve their utilization efficiency. The Measures further promote the orderly opening of public data on a classified and graded basis and the expansion of high-quality public training data resources. They also encourage the use of secure and trustworthy chips, software, tools, computing power, and data resources. (Articles 5 and 6 of the Measures)

The regulatory principle of classification and grading established in the Measures draws, to some extent, on the “risk-based approach” of the European Union’s Artificial Intelligence Act (“AI Act”). For technologies with different risk levels, corresponding regulatory requirements are likely to be set forth in future guidelines and in the Artificial Intelligence Law to be adopted. Enterprises will need to set up internal compliance systems according to the risk levels of their products.


2. Preliminary Ideas and Frameworks for Legal Regulation Have Been Established


China’s current regulatory requirements for artificial intelligence and algorithms remain scattered across the Measures, the Algorithm Recommendation Measures, and the Deep Synthesis Measures. Consistent with this approach, the Measures apply only to AIGC services that generate content such as text, images, audio, and video for the public within the territory of the PRC, and do not cover other forms of artificial intelligence.

The State Council’s 2023 Legislative Work Plan lists the Artificial Intelligence Law as draft legislation to be submitted to the Standing Committee of the National People’s Congress (“NPC”) for deliberation2. Upon the promulgation of the Artificial Intelligence Law, the above-mentioned regulations may be amended or replaced by its provisions. The implementation of the Measures will also provide practical experience for the future drafting of the Artificial Intelligence Law.

Where AIGC involves the processing of personal information or important data, or raises data security issues, the requirements of the existing Personal Information Protection Law (“PIPL”), the Data Security Law (“DSL”), and other regulations will also need to be complied with.


3. A Regulatory Regime for Artificial Intelligence Is Gradually Taking Shape


The Measures were jointly issued by the CAC, the National Development and Reform Commission (“NDRC”), the Ministry of Education (“ME”), the Ministry of Science and Technology (“MST”), the Ministry of Industry and Information Technology (“MIIT”), the Ministry of Public Security (“MPS”), and the National Radio and Television Administration (“NRTA”). Chapter 4 stipulates that the CAC, NDRC, ME, MST, MIIT, MPS, and NRTA shall strengthen the management of AIGC services based on their respective responsibilities. (Article 16 of the Measures)

It is foreseeable that the application of AIGC may relate to the regulatory requirements of one or more of the above-mentioned governmental authorities due to the differences in the specific fields, services, groups and risks involved in the application process.


II

Clarifying the Scope of Application


According to Article 2 of the Measures, the Measures apply to services that use AIGC technology to provide the public within the territory of the PRC with generated content such as text, images, audio, and video. In other words, the Measures do not apply where such services are not provided to the public.

Compared to the Draft, the Measures also expressly exclude from their scope the research into and development of AIGC technologies where no services are provided to the public within the territory of the PRC.

The Measures further point out that if there are other regulations on the use of AIGC in activities such as news publishing, film and television production, and literary and artistic creation, such regulations shall apply. (Article 2 of the Measures)

The Measures do not specify how they apply to offshore entities providing AIGC services into the PRC, but Article 20 of the Measures clarifies that the CAC has the power to notify the relevant institutions to take technical and other necessary measures against AIGC services provided from outside the PRC to the domestic market in violation of laws and regulations.

It is important to note that Article 4 of the Measures expands upon the Draft by setting out obligations for the “use” of AIGC services, in addition to the previously covered “provision” of such services. These obligations include refraining from disseminating illegal information; respecting intellectual property rights and business ethics and safeguarding trade secrets; refraining from exploiting advantages in algorithms, data, platforms, and the like to engage in monopolization or unfair competition; and respecting the legitimate rights and interests of others and refraining from infringing upon them.


III

Setting out Specific Regulatory Requirements


The Measures impose a series of regulatory obligations on service providers:

1. Algorithmic security assessment and filing


Compared with the requirement in the Draft that all AIGC service providers fulfill security assessment and filing obligations, the Measures explicitly state that these obligations apply to providers of AIGC services with public opinion attributes or social mobilization capabilities, which is consistent with the obligations under the Algorithm Recommendation Measures and the Deep Synthesis Measures. (Article 17 of the Measures)


2.  Training data


Article 7 of the Draft required AIGC researchers and developers to be responsible for the legitimacy of the sources of data used for pre-training and optimization training, but did not specify the extent to which they were obligated to audit the sources of the data used for training algorithms. The Measures stipulate that AIGC researchers and developers shall “take effective measures to improve the quality of the training data, and to enhance the authenticity, accuracy, objectivity, and diversity of the training data”, instead of “guaranteeing the authenticity, accuracy, objectivity, and diversity of the training data” as stipulated in the Draft.

Article 8 of the Measures, building on the Draft, sets out requirements for the use of manual data labeling in the development of AIGC products, including: (a) formulating clear, specific, and operable labeling rules; (b) providing necessary training for personnel engaged in labeling activities; and (c) verifying the accuracy of labeling through sampling.
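
The Measures do not prescribe how sampling-based verification of labeling accuracy should be carried out. Purely as an illustration, the short Python sketch below shows one way a provider might re-review a random sample of labeled records against a human reviewer's judgment; the function and parameter names are hypothetical and are not drawn from the Measures.

import random

def spot_check_label_accuracy(labeled_records, reviewer, sample_size=100, seed=42):
    """Estimate labeling accuracy by re-reviewing a random sample.

    labeled_records: list of (item, label) pairs produced by annotators.
    reviewer: callable returning the label a human reviewer would assign.
    Both names are hypothetical; the Measures only require that accuracy
    be verified through sampling, not any particular implementation.
    """
    if not labeled_records:
        return 0.0
    random.seed(seed)  # fixed seed so the spot check is reproducible for audit purposes
    sample = random.sample(labeled_records, min(sample_size, len(labeled_records)))
    agreed = sum(1 for item, label in sample if reviewer(item) == label)
    return agreed / len(sample)

A provider could, for example, flag a batch for re-labeling whenever the spot-checked accuracy falls below an internally defined threshold; the threshold itself is a policy choice, not something the Measures specify.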


3. Content management


In response to the false information issues that may be caused by AIGC, the Measures include a series of requirements, including the requirement not to generate false or harmful information that violates laws, regulations, or social morality and ethics, and to take effective measures to improve the accuracy and reliability of generated content. (Article 4 of the Measures)

Article 14 of the Measures stipulates the obligations of service providers upon discovering illegal content: they must take measures such as ceasing generation, ceasing transmission, and removing the content; make rectifications through measures such as model optimization training; and report to the relevant competent authorities. If a service provider discovers that a user has used AIGC services to engage in unlawful activities, it shall, in accordance with the law, take measures such as issuing a warning, limiting functions, and suspending or terminating the provision of services to that user, keep the relevant records, and report to the relevant competent authorities.
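
Article 14 does not dictate any particular technical design for this workflow. The following minimal Python sketch is only one hypothetical way a provider might structure the response when generated content is flagged; is_illegal, audit_log, and notify_authority are placeholders for a provider's own moderation check, record-keeping store, and internal reporting process.

from enum import Enum, auto

class Action(Enum):
    DELIVER = auto()            # content passes moderation and is returned to the user
    BLOCK_AND_REPORT = auto()   # generation/transmission is stopped and the case is escalated

def handle_generated_content(content, is_illegal, audit_log, notify_authority):
    """Hypothetical response flow loosely mirroring Article 14: stop delivery of
    flagged content, keep a record, and escalate for reporting. The callables
    is_illegal and notify_authority stand in for a provider's own moderation
    model and internal reporting procedure."""
    if is_illegal(content):
        audit_log.append({"content": content, "action": "blocked"})  # record-keeping
        notify_authority(content)  # escalation for reporting to the competent authority
        return Action.BLOCK_AND_REPORT
    return Action.DELIVER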

Unlike the Draft, the Measures do not explicitly require service providers to implement real-name verification of users, but whether the general obligation of users to provide real identity information stipulated in the Cybersecurity Law (“CSL”) is still applicable may need to be discussed on a case-by-case basis. The possibility cannot be ruled out that service providers will still need to implement real-name verification of a user’s identity in specific circumstances.


4. Prevention of discrimination


Article 4 of the Measures requires that effective measures be taken to prevent discrimination on the basis of ethnicity, belief, country, region, gender, age, occupation, and health in the processes of algorithm design, training data selection, model generation and optimization, and service provision.


5.  Labeling and reviewing generated content


The Measures retain the obligation of service providers to label AIGC-generated images, videos, and other content in accordance with the Deep Synthesis Measures. (Article 12 of the Measures)
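
Neither the Measures nor the Deep Synthesis Measures prescribe a specific labeling format. As a purely illustrative example, the Python sketch below attaches a visible notice and machine-readable provenance metadata to generated text; the LabeledOutput structure and its field names are hypothetical, not mandated by either regulation.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabeledOutput:
    content: str                                   # generated text plus a visible AI-generation notice
    metadata: dict = field(default_factory=dict)   # machine-readable provenance record

def label_generated_text(text, model_name):
    """Wrap model output with an explicit AI-generation notice and provenance
    metadata. This is one possible implementation, not a format required by
    the Measures or the Deep Synthesis Measures."""
    notice = "[This content was generated by an artificial intelligence model.]"
    metadata = {
        "ai_generated": True,
        "generated_by": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return LabeledOutput(content=f"{text}\n\n{notice}", metadata=metadata)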

The Measures delete the requirement in the Draft that service providers prevent the re-generation of inappropriate content within three months through, for example, model optimization training, and only require service providers to optimize their models in a timely manner and report to the competent authorities. These provisions take into account, to some extent, the uncertainty over the pace of development and improvement of AIGC technology. (Article 14 of the Measures)


6. Protection of users’ personal information (including input information)


Article 11 of the Measures requires service providers to fulfill their obligations to protect users’ input information and usage records in accordance with the law, not to collect unnecessary personal information, and not to unlawfully retain input information and usage records that can identify users or unlawfully provide them to others. As to users’ rights in their personal information, in addition to the rights of “correction, deletion, and blocking” listed in the Draft, the Measures add the rights of access, copying, and supplementation, consistent with the PIPL.
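
The Measures do not specify technical means for this kind of data minimization. Purely as an illustration, the Python sketch below stores usage records keyed by a keyed hash of the user identifier rather than the identifier itself, so that retained logs cannot directly identify users; the function names and record fields are hypothetical, and this is only one of many possible approaches.

import hashlib
import hmac

def pseudonymize_user_id(user_id, secret_key):
    """Return a keyed hash (HMAC-SHA256) of the user identifier so that raw
    identifiers are not retained in usage records. secret_key must be bytes
    and kept separately from the log store."""
    return hmac.new(secret_key, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

def record_usage(log, user_id, prompt_length, secret_key):
    """Append a minimized usage record: no raw identifier, no input text."""
    log.append({
        "user": pseudonymize_user_id(user_id, secret_key),
        "prompt_length": prompt_length,   # aggregate metric instead of the input itself
    })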


7.   Protection of User Rights


Article 13 of the Measures requires service providers to provide safe, stable, and continuous services to ensure users’ normal use. Like the Draft, Article 10 of the Measures stipulates the obligation to prevent over-reliance on or addiction to AIGC: service providers must clarify and disclose the intended users, scenarios, and purposes of their services, guide users to understand and use AIGC technology scientifically, rationally, and in accordance with the law, and take effective measures to prevent minors from becoming over-reliant on or addicted to AIGC services. Article 15 of the Measures also requires service providers to establish a complaint and reporting mechanism for the public.


IV

Stipulating Liabilities


According to Article 9 of the Measures, providers are required to assume the responsibilities of “producers of network information content” and fulfill network information security obligations. Where personal information is involved, providers must also assume the responsibilities of “personal information processors” and fulfill personal information protection obligations.

With regard to the specific grounds for and forms of punishment, Article 21 of the Measures follows the approach of the Draft and stipulates that penalties shall be imposed in accordance with the CSL, DSL, PIPL, and Science and Technology Progress Law (the last of which is newly added in the Measures). Where these laws and administrative regulations do not provide otherwise, the relevant competent authorities may, in accordance with their duties, give a warning, circulate a notice of criticism, and order rectification within a specified period; if a service provider refuses to rectify or the circumstances are serious, they may order the suspension of the provision of the relevant services. The penalties of “a fine ranging from RMB 10,000 to RMB 100,000” and the “termination of the provision of services using AIGC” in the Draft have been deleted. Violations of public security administration will be punished in accordance with the law, and criminal liability will be pursued in accordance with the law where a crime is constituted.


V

Evaluation and Prospect


The Measures demonstrate a more comprehensive approach towards the principle of “equal emphasis on development and safety” and the combination of promoting innovation and governance in accordance with the law, compared to the Draft. The Measures address issues such as morality and ethics, security and privacy, intellectual property rights, discrimination, and illegal content that may arise in the application of AIGC. Moreover, they create an inclusive and open space for the development of new technologies.

Besides the encouragement measures, the Measures also take into account the practical difficulties of implementation, and the service providers’ obligations have been adjusted and refined accordingly. The obligations to guarantee the accuracy of training data and to avoid discriminatory content, which were extensively discussed in the market after the release of the Draft, are not included in the final version. The Measures have also deleted the Draft’s requirements to verify users’ identities and to prevent the re-generation of inappropriate content within three months.

To anticipate and address potential challenges and problems arising from the rapid development of AIGC, the Measures leave room for future rule-making. The principles outlined in the Measures will guide the safe and sustainable development of AIGC, and corresponding management standards and regulatory measures will be adopted based on the characteristics of, and differences between, application scenarios and technological levels. This approach will help ensure that AIGC continues to develop in a responsible and secure manner.

Given that the Artificial Intelligence Law has been listed as draft legislation to be submitted to the Standing Committee of the NPC for deliberation, we anticipate that a more comprehensive regulatory and governance system will soon be established for the research and development, provision, and use of artificial intelligence. As the technology advances, the identification and assessment of risks, as well as the focus of supervision, will also become clearer through the application of the Measures.

Enterprises that are actively developing and using AIGC need to gradually build compliance awareness in this area. They can do so by developing compliance systems and processes with reference to the Measures, covering data protection, transparency and interpretability, content compliance, security risks, and the protection of users’ rights and interests. This will help establish good practices in this new and constantly evolving area of technology.

1.https://www.gov.cn/yaowen/2023-04/28/content_5753652.htm

2.https://www.gov.cn/zhengce/content/202306/content_6884925.htm


Marissa Dong

Partner

dongx@junhe.com


Practice Area

Corporate and M&A

Telecom and Internet

Data Privacy, Cybersecurity and Information Law



Jinghe Guo

Associate

guojh@junhe.com



Xiaoyu Shi

shixiaoyu@junhe.com


* Intern Yiru Zhan also contributed to this article










Disclaimer

Articles published on JunHe Legal Updates represent only the opinions of the authors and should not in any way be considered as formal legal opinions or advice given by JunHe or its lawyers. If any part of these articles is reproduced or quoted, please indicate the source. Any picture or image contained in these articles must not be reproduced or used unless otherwise consented to by us in writing. You are welcome to contact us for any further discussion or exchange of views on the relevant topics.
