Question     Social platforms large and small are struggling to keep their communities safe from hate speech, extremist content, harassment and misinformation. Most recently, far-right agitators posted openly about plans to storm the U.S. Capitol before doing just that on January 6. One solution might be AI: developing algorithms to detect and alert us to toxic and inflammatory comments and flag them for removal. But such systems face big challenges.
    The prevalence of hateful or offensive language online has been growing rapidly in recent years, and the problem is now rampant. In some cases, toxic comments online have even resulted in real-life violence, from religious nationalism in Myanmar to neo-Nazi propaganda in the U.S. Social media platforms, relying on thousands of human reviewers, are struggling to moderate the ever-increasing volume of harmful content. In 2019, it was reported that Facebook moderators are at risk of suffering from PTSD as a result of repeated exposure to such distressing content. Outsourcing this work to machine learning can help manage the rising volumes of harmful content, while limiting human exposure to it. Indeed, many tech giants have been incorporating algorithms into their content moderation for years.
    One such example is Google’s Jigsaw, a company focusing on making the internet safer. In 2017, it helped create Conversation AI, a collaborative research project aiming to detect toxic comments online. However, a tool produced by that project, called Perspective, faced substantial criticism. One common complaint was that it created a general "toxicity score" that wasn’t flexible enough to serve the varying needs of different platforms. Some Web sites, for instance, might require detection of threats but not profanity, while others might have the opposite requirements. Another issue was that the algorithm learned to conflate toxic comments with nontoxic comments that contained words related to gender, sexual orientation, religion or disability.
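The flexibility complaint is easier to see with a concrete request. Below is a minimal sketch, in Python, of how a platform might query a Perspective-style scoring API for only the attributes it cares about (for example, threats but not profanity). The endpoint, attribute names, and response shape follow Perspective's public documentation as best recalled here, and the API key is a placeholder; verify the details against the current docs before relying on them.

```python
# Minimal sketch: query a Perspective-style API for selected attributes only.
# The endpoint and attribute names reflect the publicly documented Perspective
# API; API_KEY is a placeholder and details should be checked against the
# current documentation.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; issued through Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")


def score_comment(text, attributes=("THREAT",)):
    """Request scores only for the attributes this platform cares about."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        # e.g. THREAT but not PROFANITY, for a site that tolerates swearing
        # but wants threats flagged
        "requestedAttributes": {attr: {} for attr in attributes},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    return {attr: scores[attr]["summaryScore"]["value"] for attr in attributes}


if __name__ == "__main__":
    print(score_comment("You people should all disappear.", ("THREAT", "TOXICITY")))
```

Requesting attributes individually is one way a platform could tailor the tool to its own policy rather than relying on a single general toxicity score.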
    Following these concerns, the Conversation AI team invited developers to train their own toxicity-detection algorithms and enter them into three competitions hosted on Kaggle, a Google subsidiary known for its community of machine learning practitioners, public data sets and challenges. To help train the AI models, Conversation AI released two public data sets containing over one million toxic and non-toxic comments from Wikipedia and a service called Civil Comments. The comments were rated on toxicity by annotators, with a "Very Toxic" label indicating "a very hateful, aggressive, or disrespectful comment that is very likely to make you leave a discussion or give up on sharing your perspective," and a "Toxic" label meaning "a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective."
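For readers curious what "training a toxicity-detection algorithm" on these released comments might involve, here is a minimal baseline sketch in Python. It assumes a CSV with a text column named comment_text and a binary toxic label; the actual Kaggle competition files may be laid out differently, and this is an illustrative baseline, not any competitor's actual approach.

```python
# Minimal sketch: a baseline toxicity classifier trained on labeled comments,
# in the spirit of the Kaggle competitions described above. Assumes a CSV with
# columns "comment_text" and "toxic" (0/1); the real competition files may
# differ.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("train.csv")  # hypothetical path to the released data
X_train, X_test, y_train, y_test = train_test_split(
    df["comment_text"], df["toxic"], test_size=0.2, random_state=0
)

# TF-IDF features over word unigrams/bigrams fed to a logistic regression:
# a simple, strong baseline for text classification.
model = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# ROC AUC is a common evaluation metric for these toxicity challenges.
probs = model.predict_proba(X_test)[:, 1]
print("validation ROC AUC:", roc_auc_score(y_test, probs))
```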
Perspective was criticized owing to its __________.

Options  A. creation of a toxicity score based on a random sample
B. struggle to deal with the rising amount of harmful content
C. neutral position in distinguishing online content
D. failure to meet the different needs of some social platforms

Answer  D

Explanation  The key words in the question stem, "Perspective was criticized," point to the third sentence of the third paragraph: "However, a tool produced by that project, called Perspective, faced substantial criticism." The sentences that follow give two reasons for the criticism: first, its general "toxicity score" was not flexible enough to serve the varying needs of different platforms; second, it conflated toxic comments with some nontoxic ones. Among the four options, [D], "failure to meet the different needs of some social platforms," corresponds to the first reason, so it is the correct answer.