
admin  2021-01-08

Question    In the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water—Del Spooner or a child. Even though Spooner screams "Save her! Save her!" the robot rescues him because it calculates that he has a 45 percent chance of survival compared to Sarah's 11 percent. The robot's decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?
   Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that 1. Robots cannot harm humans or allow humans to come to harm; 2. Robots must obey humans, except where the order would conflict with law 1; and 3. Robots must act in self-preservation, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov's robots—they don't have to think, judge, or value. They don't have to like humans or believe that hurting them is wrong or bad. They simply don't do it.
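The strict priority among the three laws (Law 1 overrides Law 2, which overrides Law 3) can be sketched as a lexicographic comparison over candidate actions. This is a minimal, hypothetical illustration, not code from Asimov's fiction or any real robotics system; the `Action` fields and the `choose` function are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One candidate action, flagged by which law it would violate."""
    name: str
    harms_human: bool = False     # violates Law 1 (harming a human)
    allows_harm: bool = False     # violates Law 1 (allowing harm by inaction)
    disobeys_order: bool = False  # violates Law 2
    endangers_self: bool = False  # violates Law 3

def choose(actions):
    """Pick the action that violates the highest-priority law least.

    The sort key is a tuple of booleans, so Law-1 violations dominate
    Law-2 violations, which dominate Law-3 violations; a lower law is
    broken only when a higher law demands it.
    """
    return min(actions, key=lambda a: (a.harms_human or a.allows_harm,
                                       a.disobeys_order,
                                       a.endangers_self))

# Law 2 yields to Law 1: refusing a harmful order beats obeying it.
print(choose([Action("obey harmful order", harms_human=True),
              Action("refuse the order", disobeys_order=True)]).name)
# prints: refuse the order
```

The same ordering makes a robot sacrifice itself to save a human: an action flagged only `endangers_self` outranks one flagged `allows_harm`.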
   The robot who rescues Spooner's life in I, Robot follows Asimov's zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm—an expansion of the first law that allows robots to determine what's in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could kill the gunman to save others.
   Whether it's possible to program a robot with safeguards such as Asimov's laws is debatable. A word such as "harm" is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov's fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.
   Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It's doubtful that a computer program can do that—at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies (stand-ins) called "H-bots" from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both "die." The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what's best for humanity, especially if it can't calculate survival odds?
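The survival-odds rule the passage describes, along with the two-victim stall from the Bristol experiment, can be sketched in a few lines. This is a hypothetical illustration: the 0.45 and 0.11 figures come from the passage, but the tie-handling that returns `None` is an assumption meant to mirror the robot's indecision, not the lab's actual code.

```python
def whom_to_save(odds):
    """Return the victim with the best calculated survival odds.

    With a clear winner the choice is mechanical; on a dead heat there
    is no basis for a decision, so the function returns None, mirroring
    the robot that froze and let both H-bots "die."
    """
    ranked = sorted(odds.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None  # no way to break the tie without some notion of value
    return ranked[0][0]

print(whom_to_save({"Spooner": 0.45, "Sarah": 0.11}))  # prints: Spooner
```

The sketch makes the passage's point concrete: once the odds stop distinguishing the victims, a purely calculated rule has nothing left to decide with.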
What does the author want to say by mentioning the word "harm" in Asimov’s laws?

Options A. Abstract concepts are hard to program.
B. It is hard for robots to make decisions.
C. Robots may do harm in certain situations.
D. Asimov's laws use too many vague terms.

Answer: A

Explanation: This is an inference question. As the relevant sentences show, the author mentions the word "harm" as an example of a problem with Asimov's laws: the second sentence of the paragraph notes that a word such as "harm" is too vague, and then states that abstract concepts present coding problems. Taken together with the paragraph's final sentence, such vague, abstract concepts are hard to apply and express when actually programming a robot, so the answer is A.