
admin  2022-08-22

Question     AlphaZero seems to express insight. It plays like no computer ever has, intuitively and beautifully, with a romantic, attacking style. It plays gambits and takes risks. In some games it paralyzed Stockfish and toyed with it. While conducting its attack in Game 10, AlphaZero retreated its queen back into the corner of the board on its own side, far from Stockfish's king, not normally where an attacking queen should be placed.
    Yet this peculiar retreat was venomous: No matter how Stockfish replied, it was doomed. It was almost as if AlphaZero was waiting for Stockfish to realize, after billions of brutish calculations, how hopeless its position truly was, so that the beast could relax and expire peacefully, like a vanquished bull before a matador. Grandmasters had never seen anything like it. AlphaZero had the finesse of a virtuoso and the power of a machine. It was humankind’s first glimpse of an awesome new kind of intelligence.
    When AlphaZero was first unveiled, some observers complained that Stockfish had been lobotomized by not giving it access to its book of memorized openings. This time around, even with its book, it got crushed again. And when AlphaZero handicapped itself by giving Stockfish ten times more time to think, it still destroyed the brute. Tellingly, AlphaZero won by thinking smarter, not faster; it examined only 60 thousand positions a second, compared to 60 million for Stockfish. It was wiser, knowing what to think about and what to ignore. By discovering the principles of chess on its own, AlphaZero developed a style of play that "reflects the truth" about the game rather than "the priorities and prejudices of programmers," Mr. Kasparov wrote in a commentary accompanying the Science article.
    The question now is whether machine learning can help humans discover similar truths about the things we really care about: the great unsolved problems of science and medicine, such as cancer and consciousness; the riddles of the immune system, the mysteries of the genome.
    The early signs are encouraging. Last August, two articles in Nature Medicine explored how machine learning could be applied to medical diagnosis. In one, researchers at DeepMind teamed up with clinicians at Moorfields Eye Hospital in London to develop a deep-learning algorithm that could classify a wide range of retinal pathologies as accurately as human experts can. (Ophthalmology suffers from a severe shortage of experts who can interpret the millions of diagnostic eye scans performed each year; artificially intelligent assistants could help enormously.)
    The other article concerned a machine-learning algorithm that decides whether a CT scan of an emergency-room patient shows signs of a stroke, an intracranial hemorrhage or other critical neurological event. For stroke victims, every minute matters; the longer treatment is delayed, the worse the outcome tends to be. (Neurologists have a grim saying: "Time is brain.") The new algorithm flagged these and other critical events with an accuracy comparable to human experts—but it did so 150 times faster. A faster diagnostician could allow the most urgent cases to be triaged sooner, with review by a human radiologist.
    What is frustrating about machine learning, however, is that the algorithms can’t articulate what they’re thinking. We don’t know why they work, so we don’t know if they can be trusted. AlphaZero gives every appearance of having discovered some important
principles about chess, but it can't share that understanding with us. Not yet, at least. As human beings, we want more than answers. We want insight. This is going to be a source of tension in our interactions with computers from now on.
    In fact, in mathematics, it’s been happening for years already. Consider the longstanding math problem called the four-color map theorem. It proposes that, under certain reasonable constraints, any map of contiguous countries can always be colored with just four colors such that no two neighboring countries are colored the same.
    Although the four-color theorem was proved in 1977 with the help of a computer, no human could check all the steps in the argument. Since then, the proof has been validated and simplified, but there are still parts of it that entail brute-force computation, of the kind employed by AlphaZero’s chess-playing computer ancestors. This development annoyed many mathematicians. They didn’t need to be reassured that the four-color theorem was true; they already believed it. They wanted to understand why it was true, and this proof didn’t help.
                                                                                                                                (Adapted from The New York Times, January 2, 2019)
In Paragraph 8, the four-color map theorem is mentioned to show that________.

Options A. the theorem was already proved
B. humans cannot solve such problems
C. the algorithms can't tell us what they are thinking
D. the computer is of great help in solving such problems

Answer: C

Explanation: This is an inference question. By the logic of the passage, the four-color map theorem is cited as an example supporting the point made just before it, namely that "the algorithms can't articulate what they're thinking" — computer-assisted proofs, like machine-learning systems, deliver answers without explaining their reasoning. The correct answer is therefore C.