
Daily Translation #7

2023-09-07 12:23 作者:Glaaaacier

“杀手AI”的确存在,以下是我们如何在这个美丽新世界中保持安全、理智和强壮的建议。

人工智能(AI)的快速发展令人惊叹。从医疗保健到金融,AI正改变着各行各业,并且有着将人类生产力提升到前所未有水平的巨大潜力。然而这种振奋人心的发展前景也伴随着民众和一些专家日益迫近的担忧:“杀手AI”的出现。在这样一个创新已经以意想不到的方式改变了社会的世界里,我们该如何区分哪些恐惧是合理的,哪些仍应只存在于科幻作品之中?

为了解答类似的疑问,我们最近为乔治梅森大学的莫卡特斯中心发表了一篇政策简报,题名为“关于定义‘杀手AI’”。在简报中,我们为AI系统制定了一个新的框架来评估其对人类潜在的威胁,这是为应对AI带来的挑战所迈出的重要一步,同时也能够使AI更加安全、负责任地融入社会。

AI已经展现出了它所具有的变革性力量,其为社会上一些最棘手的问题提供了解决方案。它能够协助医疗诊断,促进科学研究,简化商业流程。通过自动化处理重复性工作,AI将人类从中解放出来,使后者能够专注于更高层次的职责和创造力。

AI的前途无量。从乐观的角度来看,想象一个由AI驱动的经济社会并非天马行空。到那时,只需经过一段时间的调整,人们会变得更加健康与富足,远不用像今天这样做如此多的工作。

然而,确保这一潜力能够安全实现是极为重要的。据我们所知,我们对AI现实世界安全风险的评估,也是首次全面定义“杀手AI”这一现象的尝试。

我们将其定义为:无论是出于设计还是由于不可预见的后果,直接对人造成人身伤害或死亡的AI系统。重要的是,这一定义同时涵盖并区分了物理AI系统和虚拟AI系统,并认识到不同形式的AI都可能带来伤害。

虽然这类例子并不容易理解,但科幻作品至少能够帮助人们理解物理AI和虚拟AI系统是如何造成实际人身伤害的。电影《终结者》中的角色长期以来都被用作物理AI系统风险的典型案例。但潜在危险更大的是虚拟AI系统,最新的《碟中谍》电影中就有一个极端的例子。可以说,我们的世界正变得日益紧密相连,就连关键基础设施也未能幸免。

我们提出的框架以系统化的方式评估AI系统,其重点是优先考虑多数人的福利,而非少数人的利益。通过不仅考虑伤害发生的可能性,还考虑其严重性,我们得以对AI系统的安全性和风险因素进行严格评估。这一框架有望揭示先前未被注意到的威胁,并增强我们缓解AI相关风险的能力。
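“同时考虑伤害发生的可能性与严重性”是一种常见的风险矩阵思路。下面是一个示意性的草图(其中的类名、打分方式与示例数据均为本文假设,并非简报中的实际方法),用于帮助理解这种“可能性 × 严重性”的评估逻辑:

```python
from dataclasses import dataclass


@dataclass
class AIRiskProfile:
    """假设的AI系统风险画像(仅作示意)。"""
    name: str
    harm_likelihood: float  # 造成伤害的可能性,取值0~1(假设的量表)
    harm_severity: float    # 伤害的严重性,取值0~1(假设的量表)

    def risk_score(self) -> float:
        # 简单的“可能性 × 严重性”打分;真实框架会远比这复杂
        return self.harm_likelihood * self.harm_severity


systems = [
    AIRiskProfile("医疗诊断助手", harm_likelihood=0.2, harm_severity=0.9),
    AIRiskProfile("内容推荐系统", harm_likelihood=0.5, harm_severity=0.1),
]

# 按风险分数从高到低排序,优先关注高风险系统
for s in sorted(systems, key=AIRiskProfile.risk_score, reverse=True):
    print(f"{s.name}: {s.risk_score():.2f}")
```

这种打分方式的意义在于:一个伤害严重性极高但触发概率较低的系统,仍可能排在一个伤害轻微但频繁出错的系统之前,这正是“不仅看可能性,还看严重性”的含义。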

为实现这一点,我们的框架要求更深入地考虑和理解AI系统被改变用途或被滥用的可能性,以及其使用所带来的最终影响。此外,在处理这些问题时,我们强调跨学科相关方参与评估的重要性。这将使人们能以更加均衡的视角审视这些系统的开发与部署。

这一评估可以为关于“杀手AI”的全面立法、恰当监管和伦理讨论奠定基础。我们对保障人类生命和确保多数人福利的重点关注,能够帮助立法工作解决并优先处理任何潜在“杀手AI”所引发的最紧迫的问题。

对多方跨学科相关方参与重要性的强调,能够鼓励不同背景的人更积极地参与到当前的讨论中。我们希望借此使未来的立法更加全面,相关讨论也更有依据。

虽然该框架是决策者、行业领导者、研究人员和其他相关方严格评估AI系统的潜在关键工具,但它同样强调了在AI安全领域进一步研究、审查并积极主动行事的紧迫性。在这样一个发展迅速的领域,这将是一项挑战。幸运的是,从这项技术中学习的大量机会将激励研究者不断前行。

AI应当成为一股向善的力量,用于改善人类生活,而不是置人类于危险之境。通过制定有效的政策和方法来应对AI安全问题,社会能够充分发挥这种新兴技术的潜力,同时防范潜在的危害。我们的框架是实现这一使命的有力工具。无论人们对AI的恐惧最终被证实还是落空,只要我们能够在驾驭这一振奋人心的前沿技术的同时避免其意外后果,我们都将过上更好的生活。

 

重点词汇:

nothing short of remarkable:非常了不起,令人印象深刻

looming:逼近的,迫近的

policy brief:政策简报

address the challenges:应对挑战

streamlines:使成流线型;简化……的过程

rigorous:严密的,严苛的

mitigate:减轻,缓和


Original Article:

'Killer AI' is real. Here's how we stay safe, sane and strong in a brave new world

The rapid advancement of artificial intelligence (AI) has been nothing short of remarkable. From health care to finance, AI is transforming industries and has the potential to elevate human productivity to unprecedented levels. However, this exciting promise is accompanied by a looming concern among the public and some experts: the emergence of "Killer AI." In a world where innovation has already changed society in unexpected ways, how do we separate legitimate fears from those that should still be reserved for fiction?

To help answer questions like these, we recently released a policy brief for the Mercatus Center at George Mason University titled "On Defining ‘Killer AI.'" In it, we offer a novel framework to assess AI systems for their potential to cause harm, an important step towards addressing the challenges posed by AI and ensuring its responsible integration into society.

AI has already shown its transformative power, offering solutions to some of society's most pressing problems. It enhances medical diagnoses, accelerates scientific research, and streamlines processes across the business world. By automating repetitive tasks, AI frees up human talent to focus on higher-level responsibilities and creativity.

The potential for good is boundless. While optimistic, it’s not particularly unreasonable to imagine an AI-fueled economy where, after a period of adjustment, people are significantly healthier and more prosperous while working far less than we do today.

It is important, however, to ensure this potential is achieved safely. To our knowledge, our attempt to assess AI’s real-world safety risks also marks the first attempt to comprehensively define the phenomenon of "Killer AI."

We define it as AI systems that directly cause physical harm or death, whether by design or due to unforeseen consequences. Importantly, the definition both encompasses and distinguishes between physical and virtual AI systems, recognizing that harm could potentially arise from various forms of AI.

Although their examples are complex to understand, science fiction can at least help illustrate the concept of physical and virtual AI systems leading to tangible physical harm. The Terminator character has long been used as an example of the risks of physical AI systems. However, potentially more dangerous are virtual AI systems, an extreme example of which can be found in the newest "Mission Impossible" movie. It is realistic to say that our world is becoming increasingly interconnected, and our critical infrastructure is not exempt.

Our proposed framework offers a systematic approach to assess AI systems, with a key focus on prioritizing the welfare of many over the interests of the few. By considering not just the possibility of harm but also its severity, we allow for a rigorous evaluation of AI systems’ safety and risk factors. It has the potential to uncover previously unnoticed threats and enhance our ability to mitigate risks associated with AI.

Our framework enables this by requiring a deeper consideration and understanding of the potential for an AI system to be repurposed or misused and the eventual repercussions of an AI system’s use. Moreover, we stress the importance of interdisciplinary stakeholder assessment in approaching these considerations. This will permit a more balanced perspective on the development and deployment of these systems.

This evaluation can serve as a foundation for comprehensive legislation, appropriate regulation, and ethical discussions on Killer AI. Our focus on preserving human life and ensuring the welfare of many can help legislative efforts address and prioritize the most pressing concerns elicited by any potential Killer AIs.

The emphasis on the importance of multiple, interdisciplinary stakeholder involvement might encourage those of different backgrounds to become more involved in the ongoing discussion. Through this, it is our hope that future legislation can be more comprehensive and the surrounding discussion can be better informed.

While a potentially critical tool for policymakers, industry leaders, researchers, and other stakeholders to evaluate AI systems rigorously, the framework also underscores the urgency for further research, scrutiny, and proactivity in the field of AI safety. This will be challenging in such a fast-moving field. Fortunately, researchers will be motivated by the ample opportunities to learn from the technology.

AI should be a force for good—one that enhances human lives, not one that puts them in jeopardy. By developing effective policies and approaches to address the challenges of AI safety, society can harness the full potential of this emerging technology while safeguarding against potential harm. The framework presented here is a valuable tool in this mission. Whether or not fears about AI prove true or unfounded, we’ll be left better off if we can navigate this exciting frontier while avoiding its unintended consequences.

 

原网址:

https://www.foxnews.com/opinion/killer-ai-real-safe-sane-strong-brave-new-world

