
Daily Translation #11

2023-09-19 21:26 · Author: Glaaaacier

We Can Prevent AI Disaster Like We Prevented Nuclear Catastrophe

On 16 July 1945, the world changed forever. The Manhattan Project's 'Trinity' test, directed by Robert Oppenheimer, gave humanity the ability to destroy itself for the first time: an atomic bomb was successfully detonated 210 miles south of Los Alamos, New Mexico.

On 6 August 1945, the U.S. dropped an atomic bomb on Hiroshima, and three days later another on Nagasaki. The two bombs unleashed unprecedented destructive power, bringing a fragile peace to the Second World War while leaving the world under the shadow of this new danger.

While the use of nuclear technology brought us abundant energy, it also pointed toward a future in which civilization could be destroyed in a nuclear war. The 'blast radius' of our technology had expanded to a global scale. It became increasingly clear that governing nuclear technology through international cooperation was the way to avert a global catastrophe, and time was of the essence if robust institutions were to be built to govern it.

In 1952, 11 European countries founded the European Organization for Nuclear Research (CERN) to collaborate on scientific nuclear research of a purely fundamental nature, making clear that its work was meant to benefit the public. The International Atomic Energy Agency (IAEA) was established in 1957 to monitor global uranium stockpiles and limit nuclear proliferation. These institutions, among others, helped us survive the past 70 years.

We believe humanity is once again facing an explosive expansion of technology's reach: the development of advanced artificial intelligence. Left unrestrained, this powerful technology could bring about our destruction; governed safely and sensibly, it could create a better future.

 

The specter of artificial general intelligence

Experts have long been sounding the alarm about the development of artificial general intelligence (AGI). Many distinguished AI scientists and leaders of major AI companies, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, signed a statement from the Center for AI Safety: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." A few months earlier, another open letter calling for a pause on giant AI experiments gathered more than 27,000 signatures, including those of Turing Award winners Yoshua Bengio and Geoffrey Hinton.

This is because a small group of AI companies (including OpenAI, Google DeepMind, and Anthropic) is working to build AGI: not just chatbots like ChatGPT, but AIs that are "autonomous and outperform humans at most economic activities." Ian Hogarth, an investor and now chair of the UK's Foundation Model Taskforce, calls these "godlike AIs" and has implored governments to slow down the race to build them. Even the technology's own developers foresee danger in it. OpenAI CEO Sam Altman has said that the "development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."

World leaders are calling for an international institution to deal with the threat posed by AGI: a "CERN" or "IAEA" for AI. In June, U.S. President Biden and U.K. Prime Minister Sunak discussed such an organization, and U.N. Secretary-General António Guterres also believes we need one. Given the growing consensus on addressing AI risk through international cooperation, we should think concretely about how such an institution could be built.

 

Could MAGIC become the "CERN for AI"?

MAGIC (the Multilateral AGI Consortium) would be the world's only advanced, secure AI facility devoted to safety-first research and development of advanced AI. Like CERN, MAGIC would take AGI development out of the hands of private firms and place it with an international organization committed to the safe development of AI.

MAGIC would have exclusive rights to high-risk research and development of advanced AI; it would be illegal for any other entity to pursue AGI development on its own. This would not affect the vast majority of AI research and development, only frontier, AGI-relevant work, much as we already handle dangerous R&D in other fields of technology. Research on engineering lethal pathogens is banned outright or confined to the highest biosafety-level laboratories, while most drug research proceeds under the supervision of regulators such as the U.S. Food and Drug Administration (FDA).

MAGIC would concern itself only with preventing the high risks arising from the development of frontier AI systems, the "godlike AIs." Research breakthroughs made at MAGIC would be shared with the world only after they are proven safe.

To ensure that high-risk AI research remains secure and under MAGIC's strict oversight, a global moratorium would be placed on creating AIs that use more than a set amount of computing power (for a good overview of why computing power matters, see https://time.com/6300942/ai-progress-charts/). This is similar to how uranium, the main resource used for nuclear weapons and nuclear energy, is already handled internationally.
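
As a rough, purely illustrative sketch of what such a compute threshold could mean in practice (this is not part of the original article), the snippet below uses the common "FLOPs ≈ 6 × parameters × training tokens" rule of thumb to estimate a training run's compute and compare it against a hypothetical cap; both the applicability of that rule here and the cap value are assumptions for illustration only.

```python
# Illustrative sketch only: estimate training compute with the widely used
# "FLOPs ~ 6 * parameters * training tokens" rule of thumb for dense
# transformer models, then compare it against a hypothetical regulatory cap.
# The cap value and the example model size are assumptions, not policy.

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough estimate of total training compute in FLOPs."""
    return 6.0 * n_parameters * n_tokens

HYPOTHETICAL_CAP_FLOPS = 1e26  # assumed threshold, for illustration only

if __name__ == "__main__":
    # Example: a 70-billion-parameter model trained on 2 trillion tokens.
    flops = estimated_training_flops(70e9, 2e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    if flops > HYPOTHETICAL_CAP_FLOPS:
        print("This run would exceed the hypothetical cap.")
    else:
        print("This run would fall below the hypothetical cap.")
```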

Freed from competitive pressures, MAGIC could ensure the safety and security this transformative technology requires and distribute its benefits to all signatory states. CERN's success shows that MAGIC could succeed as well.

The U.S. and the U.K. are pushing hard for this multilateral effort and could bring it into being after the Global Summit on Artificial Intelligence this November.

Averting the existential risk posed by AGI is a daunting task, and leaving it to private companies is an even more dangerous gamble. We do not let individuals or corporations develop nuclear weapons for private use, and we should not allow the same to happen with dangerously powerful AI. We managed to avoid nuclear catastrophe, and we can secure our future once more, but not if we remain idle. We must place advanced AI development in the hands of a global, trusted institution in order to create a safe future for everyone.

The institutions built after the Second World War spared us from nuclear war by governing the development of nuclear technology. Now, as humanity faces a new global threat, uncontrolled artificial general intelligence, we must once again act to secure our future.

 

Translator's note: To be honest, this article is highly persuasive and emotionally charged, but its purpose is plain: it plays up the threat of AI on the surface while in effect arguing for a technology monopoly. It stresses the dangers of AI from many angles, yet never offers a single concrete example. It leans on the analogy between AI and nuclear bombs, but the destructive power of nuclear bombs is self-evident (granted, I was not there when the bombs fell on Japan), whereas the dangers of AI come entirely from the mouths of the tech giants and are only ever described indirectly (for instance, the "godlike AIs" in the original); I would even have bought it if they had simply told a Skynet story. Taken on its own, then, this article's argument is not persuasive enough.

But even setting all that aside, I would rather let AI destroy humanity than hand it over to the capitalists.


Original Article:

We Can Prevent AI Disaster Like We Prevented Nuclear Catastrophe

On 16th July 1945 the world changed forever. The Manhattan Project’s ‘Trinity’ test, directed by Robert Oppenheimer, endowed humanity for the first time with the ability to wipe itself out: an atomic bomb had been successfully detonated 210 miles south of Los Alamos, New Mexico.

On 6th August 1945 the bomb was dropped on Hiroshima and three days later, Nagasaki— unleashing unprecedented destructive power. The end of World War II brought a fragile peace, overshadowed by this new, existential threat.

While nuclear technology promised an era of abundant energy, it also launched us into a future where nuclear war could lead to the end of our civilization. The ‘blast radius’ of our technology had increased to a global scale. It was becoming increasingly clear that governing nuclear technology to avoid a global catastrophe required international cooperation. Time was of the essence to set up robust institutions to deal with this.

In 1952, 11 countries set up CERN and tasked it with “collaboration in scientific [nuclear] research of a purely fundamental nature”—making clear that CERN’s research would be used for the public good. The International Atomic Energy Agency (IAEA) was also set up in 1957 to monitor global stockpiles of uranium and limit proliferation. Among others, these institutions helped us to survive over the last 70 years.

We believe that humanity is facing once more an increase in the ‘blast radius’ of technology: the development of advanced artificial intelligence. A powerful technology that could annihilate humanity if left unrestrained, but, if harnessed safely, could change the world for the better.

 

The specter of artificial general intelligence

Experts have been sounding the alarm on artificial general intelligence (AGI) development. Distinguished AI scientists and leaders of the major AI companies, including Sam Altman of OpenAI and Demis Hassabis of Google DeepMind, signed a statement from the Center for AI Safety that reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” A few months earlier, another letter calling for a pause in giant AI experiments was signed over 27,000 times, including by Turing Prize winners Yoshua Bengio and Geoffrey Hinton.

This is because a small group of AI companies (OpenAI, Google Deepmind, Anthropic) are aiming to create AGI: not just chatbots like ChatGPT, but AIs that are “autonomous and outperform humans at most economic activities”. Ian Hogarth, investor and now Chair of the UK’s Foundation Model Taskforce, calls these ‘godlike AIs’ and implored governments to slow down the race to build them. Even the developers of the technology themselves expect great danger from it. Altman, CEO of the company behind ChatGPT, said that the “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.”

World leaders are calling for the establishment of an international institution to deal with the threat of AGI: a ‘CERN’ or ‘IAEA for AI’. In June, President Biden and U.K. Prime Minister Sunak discussed such an organization. The U.N. Secretary-General, Antonio Guterres thinks we need one, too. Given this growing consensus for international cooperation to respond to the risks from AI, we need to lay out concretely how such an institution might be built.

 

Would a ‘CERN for AI’ look like MAGIC?

MAGIC (the Multilateral AGI Consortium) would be the world’s only advanced and secure AI facility focused on safety-first research and development of advanced AI. Like CERN, MAGIC will allow humanity to take AGI development out of the hands of private firms and lay it into the hands of an international organization mandated towards safe AI development.

MAGIC would have exclusivity when it comes to the high-risk research and development of advanced AI. It would be illegal for other entities to independently pursue AGI development. This would not affect the vast majority of AI research and development, and only focus on frontier, AGI-relevant research, similar to how we already deal with dangerous R&D with other technologies. Research on engineering lethal pathogens is outright banned or confined to very high biosafety level labs. At the same time, the vast majority of drug research is instead supervised by regulatory agencies like the FDA.

MAGIC will only be concerned with preventing the high-risk development of frontier AI systems - godlike AIs. Research breakthroughs done at MAGIC will only be shared with the outside world once proven demonstrably safe.

To make sure high risk AI research remains secure and under strict oversight at MAGIC, a global moratorium on creation of AIs using more than a set amount of computing power would be put in place (here's a great overview of why computing power matters: https://time.com/6300942/ai-progress-charts/). This is similar to how we already deal with uranium internationally, the main resource used for nuclear weapons and energy.

Without competitive pressures, MAGIC can ensure the adequate safety and security needed for this transformative technology, and distribute the benefits to all signatories. CERN exists as a precedent for how we can succeed with MAGIC.

The U.S. and the U.K. are in a perfect position to facilitate this multilateral effort, and springboard its inception after the upcoming Global Summit on Artificial Intelligence in November this year.

Averting existential risk from AGI is daunting, and leaving this challenge to private companies is a very dangerous gamble. We don’t let individuals or corporations develop nuclear weapons for private use, and we shouldn’t allow this to happen with dangerous, powerful AI. We managed to not destroy ourselves with nuclear weapons, and we can secure our future again - but not if we remain idle. We must place advanced AI development into the hands of a new global, trusted institution and create a safer future for everyone.

Post-WWII institutions helped us avoid nuclear war by controlling nuclear development. As humanity faces a new global threat—uncontrolled artificial general intelligence (AGI)—we once again need to take action to secure our future.

 

Original URL:

https://time.com/6314045/prevent-ai-disaster-nuclear-catastrophe/




