«AI will lead to human extinction»: making sense of the alarm raised by 13 experts

An open letter written by a group of 13 artificial intelligence experts – including 7 former OpenAI employees (2 of them anonymous), 4 current OpenAI employees (all of whom preferred to remain anonymous) and 2 Google DeepMind scientists (one of whom still works there) – is stirring considerable discussion on the web. The group addressed the message to companies working on the development of AI, arguing that this technology, however useful, could in the future lead to human extinction. In particular, the letter emphasizes the scant attention paid to safety and the culture of secrecy for which the big tech companies of AI are responsible.

Among the signatories of the letter are experts of the caliber of William Saunders, Carroll Wainwright and Daniel Ziegler (all former OpenAI employees). The message was also endorsed by Stuart Russell, a leading expert on artificial intelligence safety, and by Geoffrey Hinton and Yoshua Bengio, both winners of the 2018 Turing Award (a prestigious recognition in computer science, considered a sort of “Nobel Prize of computing”).

Could AI lead to human extinction? What former OpenAI and Google DeepMind employees say

Why on earth could AI lead to human extinction? According to the experts who wrote the letter, there are several reasons. Among other things, the document reads:

These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world and other AI experts.

As the letter continues, the accusations gradually become more serious. At a certain point, in fact, we read the following:

AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily. So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. (…) Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues.

Are there possible solutions to the problem? The AI experts' answers

After these serious accusations, the experts also listed some possible solutions, summarized in the four points below.

  1. AI companies should not encourage agreements that prohibit criticism of them, nor should they retaliate against employees and former employees who criticize them (e.g. by hindering their vested economic benefits).
  2. AI companies should establish an anonymous process to allow current and former employees to communicate their concerns to the Board of Directors, regulators and independent organizations.
  3. Companies should foster a culture of openness to criticism.
  4. Companies should not retaliate against employees and former employees who publicly share confidential information due to safety concerns, even if their reports do not lead to the desired results.

It is difficult to say whether companies of the caliber of OpenAI, Google DeepMind and Anthropic will welcome the principles set out in the open letter, given the possible negative repercussions for their business. Certainly, regulating technologies with such an impact on human life is necessary to avoid the proliferation of ethical and practical problems.

The letter from former OpenAI and Google DeepMind employees

If you want to learn more, you can find here the full text of the open letter published by the group of 13 artificial intelligence experts.