1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem is that the first law doesn't work. Carried to its logical conclusion, it allows and commands robots to do what they tried to do in the movie: herd us all into safe areas and not allow any harm to come to us (we'd basically become pets).
So, what's the solution? Anyone?
3 comments:
Zeroth law....
Jack Williamson identified that problem in his 1947 novelette, "With Folded Hands". He avoided directly citing Asimov's three laws (which were just five years old), but his summary of the "humanoid" robots' programming was functionally equivalent: "to serve and obey and guard men from harm", and the humanoids guarded men from harm right into padded cells and lobotomies...
But robots capable of reasoning to such conclusions are still just a theoretical possibility. Governments formed of humans are a far more immediate threat to try the same thing.
paratrooperjj beat me to it, but I will expand.
The zeroth law, as posited by R. Daneel Olivaw in the later Foundation books, holds that: "A robot may not injure humanity, or through inaction allow humanity to come to harm." The 1st, 2nd and 3rd laws are amended appropriately to give precedence to the 0th law.
The implications of this law are explored in depth in "Robots and Empire" and "Foundation and Earth."
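For what it's worth, the precedence the amended laws describe is just a strict ordering: Zeroth over First over Second over Third. Here's a rough sketch of that ranking in code (the Action fields and the choose() helper are my own illustrative inventions, not anything from the books):

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # would violate the Zeroth Law
    harms_a_human: bool = False    # would violate the First Law
    disobeys_order: bool = False   # would violate the Second Law
    endangers_self: bool = False   # would violate the Third Law

def violation_rank(a: Action) -> tuple:
    # Lexicographic key: a violation of a higher law always outweighs
    # any number of violations of the lower laws.
    return (a.harms_humanity, a.harms_a_human, a.disobeys_order, a.endangers_self)

def choose(candidates: list) -> Action:
    """Pick the candidate whose highest-law violation is least severe."""
    return min(candidates, key=violation_rank)

# A robot ordered to hurt someone: refusing is only a Second Law violation,
# while obeying would violate the First Law, so refusal wins.
obey = Action("carry out the harmful order", harms_a_human=True)
refuse = Action("refuse the order", disobeys_order=True)
print(choose([obey, refuse]).name)  # -> refuse the order
```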