BRUSSELS (Reuters) – The Group of Seven industrial countries will on Monday agree a code of conduct for companies developing advanced artificial intelligence systems, a G7 document showed.
The voluntary code of conduct is a significant step in the governance of AI, addressing concerns over privacy and security risks, according to the document seen by Reuters.
The code, which will be put in place by the seven member countries (the United States, Canada, Japan, Germany, France, Italy and the United Kingdom), aims to mitigate potential misuse of AI technology.
Artificial intelligence has become a crucial part of sectors from healthcare to finance, raising ethical concerns and prompting calls for regulation.
The code will provide guidelines for companies in designing, developing, and implementing AI systems, ensuring they operate in a manner that is ethical, transparent, and accountable.
Privacy concerns surrounding AI have grown in recent years as the technology makes it possible to collect and analyze personal data at scale. The voluntary code aims to address these concerns and protect individuals' privacy rights.
In addition to privacy concerns, security risks associated with AI systems have also come under scrutiny. The code will emphasize the need for robust security measures to safeguard against potential breaches.
The G7 countries, as well as other major economies, recognize the importance of establishing global standards for AI governance to ensure the responsible and safe deployment of the technology.
Companies that adhere to the code will be expected to regularly assess the impact of their AI systems on society and the environment, taking appropriate steps to mitigate any negative consequences.
The code is seen as a breakthrough in international efforts to regulate AI, solidifying the commitment of G7 countries to proactively address the challenges posed by advanced technology.
(Reporting by Foo Yun Chee; Editing by Alexander Smith)