OpenAI, the company behind ChatGPT, has announced the creation of a new team tasked with anticipating and combating the potentially catastrophic risks that could arise from the misuse of artificial intelligence. So what kinds of risks could be involved?
The notion of AI surpassing human intelligence may soon no longer be the stuff of science fiction. Faced with the rapid development of AI tools such as ChatGPT, OpenAI has decided to create its own team dedicated to anticipating and preparing for the risks this technology could pose in the wrong hands. OpenAI cites cyberattacks, but also nuclear threats.
In fact, OpenAI states that it is committed to promoting safety and trust in AI, particularly with regard to the risks of radical misuse or manipulation. While the most advanced AI models of the future could benefit ordinary people in their personal or professional lives, the question remains of their malicious use and the risks this could engender on a large scale.
The idea is therefore to put in place a solid framework for monitoring, evaluating, forecasting, and protecting against the harmful use of the capabilities of future artificial intelligence systems. This is why OpenAI has announced the creation of a dedicated Preparedness team.
This team will be tasked with anticipating potential risks, from cybersecurity threats to chemical, biological, radiological, and nuclear dangers. One of its missions will also be to draw up a kind of charter: a risk-informed development policy intended to mitigate those risks as far as possible. – AFP Relaxnews