
admin · 2019-06-06

Question    Henry Kissinger published an article in the June 2018 Atlantic Monthly detailing his belief that artificial intelligence (AI) threatens to be a problem for humanity—probably an existential problem.
   He joins Elon Musk, Bill Gates, Stephen Hawking and others who have come out to declare the dangers of AI. The difference is, unlike those scientists and technologists, the former secretary of State speaks with great authority to a wider audience that includes policy makers and political leaders, and so could have a much greater influence.
   And that’s not a good thing. There’s a widespread lack of precision in how we describe AI that is giving rise to significant apprehension about its use in self-driving cars, automated farms, drone airplanes and many other areas where it could be extremely useful. In particular, Kissinger commits the same error many people do when talking about AI: the so-called conflation error. In this case the error comes about when the success of AI programs in defeating humans in games such as chess and go is conflated with similar successes that might be achieved with AI programs used in supply chain management or claims adjustments or other, more futuristic areas.
   But the two situations are very different. The rules of games like chess and go are prescriptive, somewhat complicated and never change. They are, in the context of AI, "well bounded." A book teaching chess or go written 100 years ago is still relevant today. Training an AI to play one of these games takes advantage of this "boundedness" in a variety of interesting ways, including letting the AI decide how it will play.
   Now, however, imagine the rules of chess could change randomly at any time in any location: Chess on Tuesdays in Chicago has one set of rules, but in Moscow there is a different set of rules on Thursdays. Chess players in Mexico use a completely different board, one for each month of the year. In Sweden the role of each piece can be decided by a player even after the game starts. In a situation like this it’s obviously impossible to write down a single set of rules that everyone can follow at all times in all locations.
   AI is today being applied to business systems like claims and supply chains that, by their very nature, are unbounded. It is impossible to write down all the rules an AI has to follow when adjudicating an insurance claim or managing the supply chain, even for something as simple as bubblegum. The only way to train an AI to manage one of these is to feed it massive amounts of data on all the myriad processes and companies that make up an insurance claim or a simple supply chain. We then hope the AI can do the job—not just efficiently, but also ethically.
What can we learn from Paragraph 5?

Options A. It is an example of an unbounded problem.
B. It is difficult to form unified playing principles.
C. Different countries have their own rules of playing.
D. There are too many different rules to develop a unified one.

Answer: A

Analysis: This is an inference question. The key words point to Paragraph 5, which argues the point of Paragraph 4 from the opposite direction: it is impossible to write down a single set of rules that everyone can follow at all times and in all locations. This illustrates "unboundedness," so option A is correct.
Please credit the original source when reposting: https://kaotiyun.com/show/Q3nZ777K
