Pause Giant AI Experiments (A Global Open Letter)
A thousand experts, including world-renowned scientists, the entrepreneur Elon Musk, and Turing Award laureates, have published a joint open letter calling for a pause on the training of any AI system more powerful than GPT-4.
The full text of the letter follows.
Pause Giant AI Experiments: An Open Letter
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
Notes and References
[1]
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
Bostrom, N. (2016). Superintelligence. Oxford University Press.
Bucknall, B. S., & Dori-Hacohen, S. (2022, July). Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).
Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk?. arXiv preprint arXiv:2206.13353.
Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. Norton & Company.
Cohen, M. et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3) (pp. 282-293).
Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.
Hendrycks, D., & Mazeika, M. (2022). X-risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.
Ngo, R. (2022). The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
Weidinger, L. et al. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.
[2]
Ordonez, V. et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'. ABC News.
Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.
[3]
Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.
OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.
[4]
Ample legal precedent exists – for example, the widely adopted OECD AI Principles require that AI systems "function appropriately and do not pose unreasonable safety risk".
[5]
Examples include human cloning, human germline modification, gain-of-function research, and eugenics.