[Translation] Pause Giant AI Experiments: An Open Letter
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here: https://futureoflife.org/ai/faqs-about-flis-open-letter-calling-for-a-pause-on-giant-ai-experiments/
References:
[1]
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
Bostrom, N. (2016). Superintelligence. Oxford University Press.
Bucknall, B. S., & Dori-Hacohen, S. (2022, July). Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).
Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk?. arXiv preprint arXiv:2206.13353.
Christian, B. (2020). The Alignment Problem: Machine Learning and human values. Norton & Company.
Cohen, M. et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3) (pp. 282-293).
Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.
Hendrycks, D., & Mazeika, M. (2022). X-risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.
Ngo, R. (2022). The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
Weidinger, L. et al (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.
[2]
Ordonez, V. et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'. ABC News.
Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.
[3]
Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.
OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.
[4]
Ample legal precedent exists – for example, the widely adopted OECD AI Principles require that AI systems "function appropriately and do not pose unreasonable safety risk".
[5]
Examples include human cloning, human germline modification, gain-of-function research, and eugenics.
Original URL: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
(You will probably need a VPN to access it.)