Social platforms large and small are struggling to keep their communities safe from hate speech, extremist content, harassment and misinformation.
admin
2022-11-01
Question
Social platforms large and small are struggling to keep their communities safe from hate speech, extremist content, harassment and misinformation. Most recently, far-right agitators posted openly about plans to storm the U.S. Capitol before doing just that on January 6. One solution might be AI: developing algorithms to detect and alert us to toxic and inflammatory comments and flag them for removal. But such systems face big challenges.
The prevalence of hateful or offensive language online has been growing rapidly in recent years, and the problem is now rampant. In some cases, toxic comments online have even resulted in real life violence, from religious nationalism in Myanmar to neo-Nazi propaganda in the U.S. Social media platforms, relying on thousands of human reviewers, are struggling to moderate the ever-increasing volume of harmful content. In 2019, it was reported that Facebook moderators are at risk of suffering from PTSD as a result of repeated exposure to such distressing content. Outsourcing this work to machine learning can help manage the rising volumes of harmful content, while limiting human exposure to it. Indeed, many tech giants have been incorporating algorithms into their content moderation for years.
One such example is Google’s Jigsaw, a company focusing on making the internet safer. In 2017, it helped create Conversation AI, a collaborative research project aiming to detect toxic comments online. However, a tool produced by that project, called Perspective, faced substantial criticism. One common complaint was that it created a general "toxicity score" that wasn’t flexible enough to serve the varying needs of different platforms. Some Web sites, for instance, might require detection of threats but not profanity, while others might have the opposite requirements. Another issue was that the algorithm learned to conflate toxic comments with nontoxic comments that contained words related to gender, sexual orientation, religion or disability.
Following these concerns, the Conversation AI team invited developers to train their own toxicity-detection algorithms and enter them into three competitions hosted on Kaggle, a Google subsidiary known for its community of machine learning practitioners, public data sets and challenges. To help train the AI models, Conversation AI released two public data sets containing over one million toxic and non-toxic comments from Wikipedia and a service called Civil Comments. The comments were rated on toxicity by annotators, with a "Very Toxic" label indicating "a very hateful, aggressive, or disrespectful comment that is very likely to make you leave a discussion or give up on sharing your perspective," and a "Toxic" label meaning "a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective."
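The paragraph above describes training toxicity detectors on comments labeled "Toxic" or "Very Toxic" by annotators. As a rough illustration only, the sketch below trains a tiny text classifier on hypothetical labeled comments using scikit-learn; the actual Conversation AI / Perspective models are large neural networks, and the comments, labels, and the bag-of-words approach here are all assumptions for demonstration.

```python
# A minimal sketch of a toxicity classifier of the kind described above.
# Assumption: scikit-learn stands in for the neural models used in practice,
# and the four labeled comments are invented examples, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotator-labeled comments: 1 = toxic, 0 = non-toxic.
comments = [
    "You are an idiot and should leave",     # toxic
    "I disagree, but that is a fair point",  # non-toxic
    "Nobody wants to hear your garbage",     # toxic
    "Thanks for sharing your perspective",   # non-toxic
]
labels = [1, 0, 1, 0]

# TF-IDF word features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# Score a new comment: the class-1 probability plays the role of a
# "toxicity score" like the one Perspective produces.
score = model.predict_proba(["you are garbage"])[0][1]
print(f"toxicity score: {score:.2f}")
```

A real system would train on the million-comment Wikipedia and Civil Comments datasets mentioned above, and would need to address the bias problem the passage raises, where identity-related words get conflated with toxicity.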
AI is used in detecting toxic online content to__________.
Options
A、eliminate religious nationalism
B、snatch jobs from humans
C、reduce human suffering from negative effects
D、crack down on neo-Nazi propaganda
Answer
C
Analysis
The key phrase in the question stem, "AI is used in detecting toxic online content", points to the last two sentences of the second paragraph: "Outsourcing this work to machine learning can help manage the rising volumes of harmful content, while limiting human exposure to it. Indeed, many tech giants have been incorporating algorithms into their content moderation for years." From this we can see two reasons AI is used to detect toxic online content: harmful content keeps growing, and human review exposes reviewers to harm. Option [C], "reduce human suffering from negative effects", matches what the passage says, so it is the answer.
Postgraduate Entrance Exam: English (I)