
Foreign Press | The Economist: How to Worry Wisely about AI

2023-04-29 01:50 | By 狂奔的外刊

Summary: The article discusses the excitement and anxiety aroused by rapid progress in artificial intelligence (AI), and asks how concerned people should be. It introduces the capabilities of the new "large language models" (LLMs) and the direction in which those capabilities are developing. It considers how AI might threaten jobs, factual accuracy, reputations and even the existence of humanity itself, and examines how governments should regulate the technology. It concludes that the promise of AI must be balanced against its risks, and that we should be ready to adapt.

 


Leaders

How to worry wisely about AI

Rapid progress in AI is arousing fear as well as excitement. How concerned should you be?

 

“SHOULD WE AUTOMATE away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart...and replace us? Should we risk loss of control of our civilisation?” These questions were asked last month in an open letter from the Future of Life Institute, an NGO. It called for a six-month “pause” in the creation of the most advanced forms of artificial intelligence (AI), and was signed by tech luminaries including Elon Musk. It is the most prominent example yet of how rapid progress in AI has sparked anxiety about the potential dangers of the technology.

 

In particular, new “large language models” (LLMs)—the sort that powers ChatGPT, a chatbot made by OpenAI, a startup— have surprised even their creators with their unexpected talents as they have been scaled up. Such “emergent” abilities include everything from solving logic puzzles and writing computer code to identifying films from plot summaries written in emoji.
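To make the emoji example concrete, here is a minimal, hypothetical sketch of how one might probe that ability using the 2023-era openai Python package; the model name, prompt and placeholder key are illustrative assumptions, not details from the article.

# Hypothetical sketch: asking an LLM to identify a film from an emoji
# plot summary, one of the "emergent" abilities described above.
# Assumes the pre-1.0 `openai` package and a valid API key.
import openai

openai.api_key = "sk-..."  # placeholder; supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative choice of model
    messages=[
        {"role": "user", "content": "Which film is this? 🚢🧊💑🎻🌊"},
    ],
)
print(response.choices[0].message.content)  # a capable model replies "Titanic"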

 

These models stand to transform humans’ relationship with computers, knowledge and even with themselves. Proponents of AI argue for its potential to solve big problems by developing new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power. To others, the fact that AIs’ capabilities are already outrunning their creators’ understanding risks bringing to life the science-fiction disaster scenario of the machine that outsmarts its inventor, often with fatal consequences.

 

This bubbling mixture of excitement and fear makes it hard to weigh the opportunities and risks. But lessons can be learned from other industries, and from past technological shifts. So what has changed to make AI so much more capable? How scared should you be? And what should governments do?

 

In a special Science section, we explore the workings of LLMs and their future direction. The first wave of modern AI systems, which emerged a decade ago, relied on carefully labelled training data. Once exposed to a sufficient number of labelled examples, they could learn to do things like recognise images or transcribe speech. Today’s systems do not require pre-labelling, and as a result can be trained using much larger data sets taken from online sources. LLMs can, in effect, be trained on the entire internet—which explains their capabilities, good and bad.
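The shift this paragraph describes can be illustrated with a toy sketch in Python; the data and variable names below are invented for illustration and do not come from the article or any particular lab's pipeline.

# Toy illustration of the two training regimes described above.

# Regime 1: the first wave, c. 2013 -- supervised learning, where every
# example must first be labelled by a human.
labelled_examples = [
    ("photo_0001.jpg", "cat"),  # (input, human-provided label)
    ("photo_0002.jpg", "dog"),
]

# Regime 2: today's LLMs -- self-supervised learning, where raw text
# supplies its own targets: the model learns to predict the next token.
text = "Rapid progress in AI is arousing fear as well as excitement".split()
training_pairs = [(text[:i], text[i]) for i in range(1, len(text))]

for context, target in training_pairs[:3]:
    print(f"context={' '.join(context)!r} -> predict {target!r}")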

 

Those capabilities became apparent to a wider public when ChatGPT was released in November. A million people had used it within a week; 100m within two months. It was soon being used to generate school essays and wedding speeches. ChatGPT’s popularity, and Microsoft’s move to incorporate it into Bing, its search engine, prompted rival firms to release chatbots too.

 

Some of these produced strange results. Bing Chat suggested to a journalist that he should leave his wife. ChatGPT has been accused of defamation by a law professor. LLMs produce answers that have the patina of truth, but often contain factual errors or outright fabrications. Even so, Microsoft, Google and other tech firms have begun to incorporate LLMs into their products, to help users create documents and perform other tasks.

 

The recent acceleration in both the power and visibility of AI systems, and growing awareness of their abilities and defects, have raised fears that the technology is now advancing so quickly that it cannot be safely controlled. Hence the call for a pause, and growing concern that AI could threaten not just jobs, factual accuracy and reputations, but the existence of humanity itself.

 

Extinction? Rebellion?

 

The fear that machines will steal jobs is centuries old. But so far new technology has created new jobs to replace the ones it has destroyed. Machines tend to be able to perform some tasks, not others, increasing demand for people who can do the jobs machines cannot. Could this time be different? A sudden dislocation in job markets cannot be ruled out, even if so far there is no sign of one. Previous technology has tended to replace unskilled tasks, but LLMs can perform some white-collar tasks, such as summarising documents and writing code.

 

The degree of existential risk posed by AI has been hotly debated. Experts are divided. In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI’s impact would be “extremely bad (eg, human extinction)”. But 25% said the risk was 0%; the median researcher put the risk at 5%. The nightmare is that an advanced AI causes harm on a massive scale, by making poisons or viruses, or persuading humans to commit terrorist acts. It need not have evil intent: researchers worry that future AIs may have goals that do not align with those of their human creators.

 

Such scenarios should not be dismissed. But all involve a huge amount of guesswork, and a leap from today’s technology. And many imagine that future AIs will have unfettered access to energy, money and computing power, which are real constraints today, and could be denied to a rogue AI in future. Moreover, experts tend to overstate the risks in their area, compared with other forecasters. (And Mr Musk, who is launching his own AI startup, has an interest in his rivals downing tools.) Imposing heavy regulation, or indeed a pause, today seems an over-reaction. A pause would also be unenforceable.

 

Regulation is needed, but for more mundane reasons than saving humanity. Existing AI systems raise real concerns about bias, privacy and intellectual-property rights. As the technology advances, other problems could become apparent. The key is to balance the promise of AI with an assessment of the risks, and to be ready to adapt.

 

So far governments are taking three different approaches. At one end of the spectrum is Britain, which has proposed a “light-touch” approach with no new rules or regulatory bodies, but applies existing regulations to AI systems. The aim is to boost investment and turn Britain into an “AI superpower”. America has taken a similar approach, though the Biden administration is now seeking public views on what a rulebook might look like.

 

The EU is taking a tougher line. Its proposed law categorises different uses of AI by the degree of risk, and requires increasingly stringent monitoring and disclosure as the degree of risk rises from, say, music-recommendation to self-driving cars. Some uses of AI are banned altogether, such as subliminal advertising and remote biometrics. Firms that break the rules will be fined. For some critics, these regulations are too stifling.

 

But others say an even sterner approach is needed. Governments should treat AI like medicines, with a dedicated regulator, strict testing and pre-approval before public release. China is doing some of this, requiring firms to register AI products and undergo a security review before release. But safety may be less of a motive than politics: a key requirement is that AIs’ output reflects the “core value of socialism”.

 

What to do? The light-touch approach is unlikely to be enough. If AI is as important a technology as cars, planes and medicines—and there is good reason to believe that it is —then, like them, it will need new rules. Accordingly, the EU’s model is closest to the mark, though its classification system is overwrought and a principles-based approach would be more flexible. Compelling disclosure about how systems are trained, how they operate and how they are monitored, and requiring inspections, would be comparable to similar rules in other industries.

 

This could allow for tighter regulation over time, if needed. A dedicated regulator may then seem appropriate; so too may intergovernmental treaties, similar to those that govern nuclear weapons, should plausible evidence emerge of existential risk. To monitor that risk, governments could form a body modelled on CERN, a particle-physics laboratory, that could also study AI safety and ethics—areas where companies lack incentives to invest as much as society might wish.

 

This powerful technology poses new risks, but also offers extraordinary opportunities. Balancing the two means treading carefully. A measured approach today can provide the foundations on which further rules can be added in future. But the time to start building those foundations is now.

 

Analysis of long and difficult sentences

1.   "These questions were asked last month in an open letter from the Future of Life Institute, an NGO."

The core of the sentence is the passive main clause “These questions were asked last month in an open letter”: the subject is “These questions” and the predicate is “were asked”, i.e. the questions were posed last month. “From the Future of Life Institute, an NGO” is not a clause but a prepositional phrase identifying the letter’s source; within it, “an NGO” is an appositive noting that the Future of Life Institute is a non-governmental organisation.

 

2.   "It called for a six-month “pause” in the creation of the most advanced forms of artificial intelligence (AI), and was signed by tech luminaries including Elon Musk."

The sentence has a single subject, “It” (the open letter), followed by two coordinated predicates joined by “and”. The first, “called for a six-month ‘pause’ in the creation of the most advanced forms of artificial intelligence (AI)”, says the letter urged a six-month halt to work on the most advanced AI. The second, “was signed by tech luminaries including Elon Musk”, is passive and underlines the letter’s weight by naming its signatories, Elon Musk among them.

 

3.   "These models stand to transform humans’ relationship with computers, knowledge and even with themselves."

The subject is “These models” and the predicate is “stand to transform”, in which “stand to” means “are likely to”. The infinitive “to transform humans’ relationship with computers, knowledge and even with themselves” completes the verb and lists the three things the models may change: people’s relationship with computers, with knowledge, and with themselves.

 

4.   "The first wave of modern AI systems, which emerged a decade ago, relied on carefully labelled training data."

The main clause is “The first wave of modern AI systems relied on carefully labelled training data”: the subject is “The first wave of modern AI systems” and the predicate is “relied on”, i.e. those systems depended on carefully labelled training data. “Which emerged a decade ago” is a non-restrictive relative clause modifying the subject: “which” stands for the first wave of systems, “emerged” is its verb, and “a decade ago” is a time adverbial.

 

5.   "ChatGPT’s popularity, and Microsoft’s move to incorporate it into Bing, its search engine, prompted rival firms to release chatbots too."

The sentence has a compound subject made up of two noun phrases: “ChatGPT’s popularity” and “Microsoft’s move to incorporate it into Bing, its search engine”, in which “its search engine” is an appositive explaining what Bing is. The predicate is “prompted”, meaning “spurred”; its object is “rival firms”, followed by the object complement “to release chatbots too”, i.e. competing firms were spurred to release chatbots of their own.

 

6.   “Experts are divided.” … “But all involve a huge amount of guesswork, and a leap from today’s technology. And many imagine that future AIs will have unfettered access to energy, money and computing power, which are real constraints today, and could be denied to a rogue AI in future.”

These are three separate sentences in the article. In the first, the subject is “Experts” and the predicate is “are divided”: experts disagree. In the second, “all” refers back to the frightening scenarios just mentioned, and the predicate “involve” says that every such scenario rests on a great deal of guesswork and a leap beyond today’s technology. In the third, the subject is “many” and the predicate is “imagine”, introducing a that-clause: many people picture future AIs with unfettered access to energy, money and computing power. Within that clause, “which are real constraints today” is a relative clause modifying those three resources, and “could be denied to a rogue AI in future” is a second, coordinated predicate in the same relative clause, meaning those resources could be withheld from a rogue AI.

 

7.   “The key is to balance the promise of AI with an assessment of the risks, and to be ready to adapt.”

The subject is “The key” and the linking verb is “is”, followed by two coordinated infinitive phrases serving as the complement: “to balance the promise of AI with an assessment of the risks” and “to be ready to adapt”. Together they say that the key lies in weighing AI’s promise against an assessment of its risks, and in staying ready to adapt.

 

8. "If AI is as important a technology as cars, planes and medicines—and there is good reason to believe that it is —then, like them, it will need new rules."

The main clause is “it will need new rules”: the subject is “it” (AI) and the predicate is “will need”. “If AI is as important a technology as cars, planes and medicines” is a conditional clause, and “and there is good reason to believe that it is” is a parenthetical asserting that the condition very likely holds. “Like them” compares AI with those technologies: if AI matters as much as cars, planes and medicines do, then, like them, it will need new rules.

 

9.   "Accordingly, the EU’s model is closest to the mark, though its classification system is overwrought and a principles-based approach would be more flexible."

This is a complex sentence in which “though” introduces a concessive clause containing two coordinated statements: “its classification system is overwrought” and “a principles-based approach would be more flexible”. In the main clause, the subject is “the EU’s model” and the predicate is “is closest to the mark”, an idiom meaning closest to what is required. The sentence as a whole says that the EU’s model comes nearest to what is needed, even though its classification system is over-elaborate and a principles-based approach would be more flexible.

 

10. "Compelling disclosure about how systems are trained, how they operate and how they are monitored, and requiring inspections, would be comparable to similar rules in other industries."

The subject consists of two coordinated gerund phrases: “Compelling disclosure about how systems are trained, how they operate and how they are monitored”, which itself contains three parallel how-clauses, and “requiring inspections”. The predicate is “would be comparable to similar rules in other industries”. The sentence says that forcing firms to disclose how their systems are trained, run and monitored, and requiring inspections, would be on a par with the rules other industries already face.

 

11. "To monitor that risk, governments could form a body modelled on CERN, a particle-physics laboratory, that could also study AI safety and ethics—areas where companies lack incentives to invest as much as society might wish."

“To monitor that risk” is an infinitive of purpose. The main clause is “governments could form a body”: the subject is “governments”, the predicate is “could form” and the object is “a body”. “Modelled on CERN” is a past-participle phrase modifying “a body”, with “a particle-physics laboratory” as an appositive explaining what CERN is. “That could also study AI safety and ethics” is a relative clause modifying “a body”, and “areas where companies lack incentives to invest as much as society might wish” is an appositive to “AI safety and ethics”, noting that firms under-invest in those areas relative to what society might want.
