
International Standards for Regulating Artificial Intelligence

Analysis of Acts Developed as a Result of the Hiroshima AI Process

February 15, 2024

As Ursula von der Leyen, President of the European Commission, has aptly noted, the potential benefits of Artificial Intelligence (AI) for citizens and the economy are immense. However, AI's rapidly advancing capabilities also bring new challenges.

In an effort to develop specific methodologies and means to address, or at least respond effectively to, these challenges, states around the world are establishing standards and defining the directions and reasonable limits of legal regulation for the development and deployment of AI across all spheres of life. General strategies and concepts agreed at the international and/or pan-European level are expected to serve as the substantive foundation for regulatory acts on AI in national legislation.
A notable example of the global regulation of specific issues related to AI usage is the adoption, during the G7 Hiroshima Artificial Intelligence Process, of the International Guiding Principles for Organizations Developing Advanced AI Systems (hereinafter, the Guiding Principles) and the International Code of Conduct for Organizations Developing Advanced AI Systems (hereinafter, the Code of Conduct), both published on October 30, 2023.

While the provisions of these acts are not legally binding and are voluntary, they express the general intentions and visions of the G7 countries and the European Union regarding the further comprehensive development, including legal development, of AI regulation. Moreover, EU statements suggest that the content of the Guiding Principles and the Code of Conduct is harmonized with the legally binding rules expected to be included in the Artificial Intelligence Act currently being developed within the EU.
The published Guiding Principles and Code of Conduct build on the principles contained in the Recommendation on Artificial Intelligence adopted by the OECD on May 22, 2019, which effectively became the first intergovernmental standard on AI. The additional standards adopted as a result of the Hiroshima AI Process respond to the need to harness the benefits, and overcome the risks and challenges, of recent developments in advanced AI systems.

Eleven Developed AI Regulation Standards

Overall, the texts of the Guiding Principles and the Code of Conduct promote the application of eleven basic principles and measures in the development, deployment, and use of AI, particularly advanced AI systems, namely:

  1. employing various measures of internal and independent external testing during the development process of advanced AI systems to identify, assess, and mitigate risks throughout the AI lifecycle;
  2. identifying and mitigating vulnerabilities, incidents, and misuse patterns post-deployment, including in the marketplace;
  3. publicly disclosing the capabilities, limitations, and areas of proper and improper use of advanced AI systems to ensure sufficient transparency;
  4. exchanging information and reporting incidents among organizations developing advanced AI systems, including industry representatives, governments, civil society, and academia;
  5. developing, implementing, and disclosing AI governance policies and organizational mechanisms for implementing these policies based on risk assessment;
  6. investing in and deploying reliable security control measures, including physical security, cybersecurity, and protections against internal threats throughout the AI lifecycle;
  7. developing and implementing reliable content authentication and provenance confirmation mechanisms that allow users to identify AI-generated content;
  8. prioritizing research to reduce risks to society, safety, and security, as well as prioritizing investments in effective mitigation measures;
  9. prioritizing the development of advanced AI systems to address global human challenges;
  10. developing international technical standards;
  11. implementing proper data input procedures and protections for personal data and intellectual property.

The above list of standards and measures is non-exhaustive: as stated in their preambles, the Code of Conduct and the Guiding Principles are "living documents" that continue to be discussed and developed.

Application of AI Regulation Standards

Analysis of the principles and measures outlined in the Guiding Principles and Code of Conduct indicates that they are primarily based on a risk-oriented approach. The overarching idea is that the efforts of the key actors involved in developing, deploying, and bringing AI systems to market should be accompanied by ongoing research into, prevention and identification of, awareness of, and, consequently, response to the risks associated with such systems.

Testing measures aimed at identifying risks and mitigating negative consequences should be directed, in particular, towards ensuring the reliability, safety, and security of AI systems throughout their lifecycle. To achieve this goal, developers must ensure traceability of datasets, processes, and decisions made during system development, including through documentation and regular updating of technical documentation.
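
As a purely hypothetical illustration of how such a traceability requirement might be met in practice (the Guiding Principles and Code of Conduct prescribe no particular format, and every name and field below is an assumption), an organization could log each development decision as a structured, timestamped record:

```python
# Hypothetical illustration only: neither the Guiding Principles nor the
# Code of Conduct prescribes a format for traceability records. This sketch
# shows one way datasets, processes, and decisions might be documented.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class TraceabilityRecord:
    """A single auditable entry in a model's development history."""
    system_name: str         # the advanced AI system being developed
    stage: str               # lifecycle stage: "data", "training", "evaluation", ...
    dataset_version: str     # version identifier of the data used
    decision: str            # the development decision taken and its rationale
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Example: documenting a data-filtering decision during development.
record = TraceabilityRecord(
    system_name="example-advanced-ai-system",  # hypothetical name
    stage="data",
    dataset_version="v2.1",
    decision="Removed records lacking provenance metadata before training.",
)
print(record.to_json())
```

Serializing each record to a stable format such as JSON keeps the development history auditable and easy to update alongside the technical documentation.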

Testing should take place in a secure environment and be conducted at multiple checkpoints throughout the AI lifecycle, including before deployment and market release, to identify both accidental and intentional risks and vulnerabilities.

When developing and implementing testing measures, organizations developing AI systems should pay special attention to the following risks:

  • Chemical, biological, radiological, and nuclear risks, specifically how advanced AI systems may lower barriers to entry, including for non-state actors, for the development, acquisition, or use of weapons;
  • Offensive cyber capabilities, such as how AI systems may aid in the discovery or exploitation of vulnerabilities;
  • Risks to health and/or safety, such as the consequences of system interaction and tool use, including the ability to control physical systems and interfere with critical infrastructure;
  • Risks associated with the "self-replication" of AI models, i.e., the creation of copies of themselves or the training of other models;
  • Social risks, including the risks of harmful bias and discrimination against individuals or communities, and violations of existing legal norms, including those on confidentiality and data protection;
  • Threats to democratic values and human rights, including facilitating disinformation or violating the privacy of individuals;
  • The risk that a particular event may lead to a chain reaction with significant negative consequences that could affect entire cities, industries, or communities.

The obligation to track the above risks, particularly for organizations developing advanced AI systems, stems from the need to establish clear frameworks for creating and using advanced AI systems in everyday human life. The capacity of AI systems to generate results for a predetermined set of human-defined goals inevitably affects the environment in which users interact with those systems or apply their outputs. Given the areas in which advanced AI systems are planned or likely to be used, such an impact may entail irreversible consequences for society, fundamental values, rights and freedoms, or other irreversible transformations of the modern world order. The identification of, prevention of, and adequate response to all challenges and risks associated with the application of AI systems is therefore the highest priority on the agenda.

Since monitoring vulnerabilities, incidents, abuses, and other risks associated with AI systems, and implementing effective measures to mitigate negative impacts, should occur throughout the AI lifecycle, developers are encouraged to enable and incentivize third parties and users to identify and report problems and vulnerabilities in AI systems even after deployment. Such incentives may take the form of, for example, bounty systems, contests, or prizes for responsible vulnerability disclosure.

Effective tracking of vulnerabilities in AI systems requires an appropriate environment, including the development and maintenance of documentation on identified risks and vulnerabilities, and the implementation of responsible vulnerability disclosure mechanisms accessible to a wide range of stakeholders.

Sufficient transparency about the capabilities, limitations, and areas of proper and improper use of advanced AI systems should be ensured through the publication of transparency reports by developer organizations, containing meaningful information about each significant new release of an advanced AI system.

Furthermore, comprehensive cooperation among organizations throughout the AI lifecycle is necessary, involving the exchange of relevant information on the capabilities, limitations, and risks of advanced AI systems, the dissemination of such information to the public to enhance the safety, security, and reliability of advanced AI systems, and, where appropriate, reporting to relevant government authorities.

Given the specifics of interacting with AI, it is proposed that, where technically feasible, mechanisms for authenticating and identifying the origin of content created with AI systems be introduced and used, including watermarks and labeling that help users distinguish AI-generated content from other types of content, as well as warnings informing users when they are interacting with an AI system.
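
The acts describe such mechanisms only in general terms, so the following is a minimal sketch of one possible labeling approach, not a prescribed standard: a signed provenance label attached to AI-generated content, with every field and key name below an illustrative assumption.

```python
# Hypothetical sketch of a content-provenance label. The Code of Conduct
# mentions watermarking and labeling only in general terms; the field and
# key names below are illustrative assumptions, not a prescribed standard.
import hashlib
import hmac
import json

SECRET_KEY = b"example-signing-key"  # in practice, a securely managed key

def label_ai_content(content: str, model_id: str) -> dict:
    """Attach a signed provenance label declaring the content AI-generated."""
    payload = {
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "generated_by": model_id,
        "ai_generated": True,
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return payload

def verify_label(content: str, label: dict) -> bool:
    """Check that the label matches the content and was signed with our key."""
    claimed = {k: v for k, v in label.items() if k != "signature"}
    if claimed["content_sha256"] != hashlib.sha256(content.encode()).hexdigest():
        return False
    serialized = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["signature"])

text = "An AI-generated summary."
label = label_ai_content(text, model_id="example-model")  # hypothetical model id
assert verify_label(text, label)          # label matches the content
assert not verify_label("edited", label)  # any edit invalidates the label
```

Because the signature covers a hash of the content, any edit to the content invalidates the label; production schemes such as cryptographic watermarking embed the signal in the content itself, which this sketch does not attempt.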

By way of conclusion, it is anticipated that adherence to the aforementioned set of standards in the development and subsequent use of AI systems will safeguard humanity from irreversible consequences and facilitate the effective use of computer technologies to address pressing issues and derive global benefits from neural networks. At the same time, analysis of the developed standards clearly shows their general nature. This calls for further steps: the creation of special regulations, including ones of a mandatory character, at both the international level and the national level of individual states, and the development of effective mechanisms to implement the concepts proposed during the Hiroshima process for safe interaction with neural networks in the practical development and subsequent use of AI systems.

Ksenia Rakityanska, lawyer at AC Crowe Ukraine.