Question:   Neurotechnology has long been a favorite of science-fiction writers. In Neuromancer, a wildly inventive book by William Gibson written in 1984, people can use neural implants to jack into the sensory experiences of others. The idea of a neural lace, a mesh that grows into the brain, was conceived by Iain M. Banks in his "Culture" series of novels. The Terminal Man by Michael Crichton, published in 1972, imagines the effects of a brain implant on someone who is convinced that machines are taking over from humans. (Spoiler: not good.)
   Where the sci-fi genre led, philosophers are now starting to follow. In Howard Chizeck’s lab at the University of Washington, researchers are working on an implanted device to administer deep-brain stimulation (DBS) in order to treat a common movement disorder called essential tremor. Conventionally, DBS stimulation is always on, wasting energy and depriving the patient of a sense of control. The lab’s ethicist, Tim Brown, a doctoral student of philosophy, says that some DBS patients suffer a sense of alienation and complain of feeling like a robot.
   To change that, the team at the University of Washington is using neuronal activity associated with intentional movements as a trigger for turning the device on. But the researchers also want to enable patients to use a conscious thought process to override these settings. That is more useful than it might sound: stimulation currents for essential tremor can cause side-effects like distorted speech, so someone about to give a presentation, say, might wish to shake rather than slur his words.
   Giving humans more options of this sort will be essential if some of the bolder visions for brain-computer interfaces are to be realised. Hannah Maslen from the University of Oxford is another ethicist who works on a BCI project, in this case a neural speech prosthesis being developed by a consortium of European researchers. One of her jobs is to think through the distinctions between inner speech and public speech: people need a dependable mechanism for separating out what they want to say from what they think.
   That is only one of many ethical questions that the sci-fi versions of brain-computer interfaces bring up. What protection will BCIs offer against neural hacking? Who owns neural data, including information that is gathered for research purposes now but may be decipherable in detail at some point in the future? Where does accountability lie if a user does something wrong? And if brain implants are performed not for therapeutic purposes but to augment people’s abilities, will that make the world an even more unequal place?
   For some, these sorts of questions cannot be asked too early: more than any other new technology, BCIs may redefine what it means to be human. For others, they are premature. "The societal-justice problem of who gets access to enhanced memory or vision is a question for the next decades, not years," says Thomas Cochrane, a neurologist and director of neuroethics at the Centre for Bioethics at Harvard Medical School.
   In truth, both arguments are right. It is hard to find anyone who argues that visions of whole-brain implants and AI-human symbiosis are impossible to realise; it is harder still to find anyone who thinks something so revolutionary will happen in the near future.
Why do some people think these sorts of questions cannot be asked too early?

Options:
A. These questions have been solved.
B. BCIs may redefine what it means to be human more than any other new technology.
C. More than any other new technology, BCIs will not take humans into consideration.
D. These questions don't exist at all.

Answer: B

Explanation: The question stem points to paragraph 6, which asks why, for some people, these questions cannot be asked too early. Option A ("these questions have been solved") and Option D ("these questions don't exist at all") are not mentioned in that paragraph. Option C ("more than any other new technology, BCIs will not take humans into consideration") contradicts the passage. Option B ("BCIs may redefine what it means to be human more than any other new technology") matches the passage. The correct answer is therefore B.