How relevant Asimov’s three laws of robotics are to modern robots: what they are and what they say

The Three Laws of Robotics, formulated by the science fiction writer Isaac Asimov in 1942, represent an ethical code designed to regulate the interaction between human beings and intelligent machines. Introduced in the short story Runaround and later a point of reference in genre fiction, these rules require robots not to harm human beings, to obey orders and to protect their own existence, but always according to a precise hierarchy (see the sketch after the list). The laws are formulated as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
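
To make the hierarchy concrete, here is a minimal, purely illustrative Python sketch that encodes the three laws as a strict priority filter over candidate actions. All the names and Boolean flags are hypothetical simplifications; as discussed later in the article, it is precisely predicates like “does this action harm a human?” that real systems cannot reliably compute.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    harms_human: bool              # would this action injure a human?
    lets_human_come_to_harm: bool  # would choosing it amount to harmful inaction?
    obeys_human_order: bool        # was this action ordered by a human?
    endangers_robot: bool          # would this action destroy the robot?

def choose_action(candidates: list[Action]) -> Optional[Action]:
    # First Law: discard anything that harms a human, by action or inaction.
    safe = [a for a in candidates
            if not a.harms_human and not a.lets_human_come_to_harm]
    # Second Law: among safe actions, prefer those obeying a human order.
    pool = [a for a in safe if a.obeys_human_order] or safe
    # Third Law: among what remains, prefer self-preservation.
    pool = [a for a in pool if not a.endangers_robot] or pool
    return pool[0] if pool else None

# Example: the robot sacrifices itself rather than let a person be harmed.
options = [
    Action("shield the person", False, False, False, True),
    Action("stand by", False, True, False, False),
]
print(choose_action(options).name)  # -> "shield the person"
```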

Subsequently, Asimov introduced an even more general law, the “Zeroth Law”, which puts the well-being of humanity as a whole before that of individuals. Despite the appeal and logical solidity of these rules, their implementation in modern robots clashes with technical limits and with the real priorities of technological development. Current artificial intelligence is unable to grasp concepts such as “injuring a human being” in all their complexity, and the robotics industry often follows priorities far from those Asimov imagined, especially in certain fields, such as the military one.

The idea behind the three laws of robotics

When Asimov formulated the Three Laws, robotics was still a field in its infancy, and his vision reflected a literary utopia more than a concrete prospect of technological development. The laws establish that a robot must never harm a human being, nor allow, through inaction, that this happens. A robot must also carry out the orders it receives, unless these conflict with the first rule, and protect its own existence without violating the previous two. The so-called “Zeroth Law”, introduced by Asimov in 1985 with the novel Robots and Empire, states that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm”, requiring robots to safeguard humanity as a whole, even at the cost of sacrificing individuals. The idea behind these rules was to create machines that, by their very nature, could not rebel against their creators, avoiding the classic machine-rebellion scenario widespread in the science fiction of the time.

Commenting on them on the occasion of the 75th anniversary of the three laws, Filippo Cavallo, professor at the Sant’Anna School of Advanced Studies in Pisa, said:

Asimov’s laws are very stringent and rigid, but they were conceived in a historical, cultural and scientific context far less developed than today’s. They are still valid and sensible, but they need to be placed within a more complex framework of rules.

Why the three laws of robotics are difficult to apply today

Despite their theoretical elegance, the Three Laws prove difficult to apply in the real world, for several reasons. A first obstacle is certainly technical: current robots do not have an advanced understanding of natural language or of complex situations. Consider two examples. Should a nurse robot prevent a patient from refusing a potentially life-saving therapy? In an unavoidable traffic accident, how should an autonomous vehicle behave in order to minimize harm? These questions show how the binary logic of the three laws is not sufficient to handle ethically complex situations.
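
Put in programming terms, a naive implementation of the First Law would hinge on a single predicate, and for dilemmas like those above that predicate has no well-defined value. The following sketch is hypothetical (the function name and signature are assumptions made for illustration):

```python
def harms_human(action: str, context: dict) -> bool:
    """The predicate the First Law silently depends on.

    For the scenarios above it is not even well defined: is overriding a
    patient's refusal of therapy "harm" or "protection"? In an unavoidable
    accident, which outcome counts as the lesser harm?
    """
    # No current AI system can evaluate "harm to a human being"
    # in full generality; this is the gap the article describes.
    raise NotImplementedError("'harm' is not a computable predicate today")
```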

Another limit is linked to economic and political interests. Much robotics research is funded by military bodies, which develop machines for war purposes, making it impossible to apply the First Law, which forbids harming human beings. Moreover, robots designed for the industrial or domestic sector are built to maximize efficiency and productivity, not to follow an abstract ethical code. This raises questions of safety and ethical responsibility that are difficult to answer. To put it plainly: if a robot makes a “decision” that harms a human being, who is accountable? The robot’s manufacturer? The team that developed its software? The robot itself?

Not surprisingly, the scientific community has been debating for years the need to develop rules better suited to current reality. According to some experts, the three laws could be integrated into a more complex system that takes into account specific scenarios and the actual capabilities of machines. Others argue that it would be more realistic to rely on legal regulations and technical standards rather than on philosophical principles that are difficult to translate into computer code. Given the progress in robotics and the strides being made in AI, the debate is destined to remain open, raising ever more questions to which answers will have to be found.