外刊阅读:人工智能造成的人身伤害,谁来负责?
Source: March 2023, Scientific American
Who is liable when AI kills?
人工智能造成的人身伤害,谁来负责?
“We need to protect people from faulty AI without curbing innovation.”
“我们需要在不抑制创新的情况下保护人们免受有缺陷的人工智能的影响”。
Authors: George Maliha is a third-year internal medicine resident at the University of Pennsylvania Health System. Ravi B. Parikh is an oncologist and policy researcher at the University of Pennsylvania who develops ways to integrate AI into clinical care.
George Maliha是宾夕法尼亚大学卫生系统(University of Pennsylvania Health System)的一名三年级内科住院医生。Ravi B. Parikh是宾夕法尼亚大学(University of Pennsylvania)的肿瘤学家和政策研究员,致力于开发将AI整合到临床治疗中的方法。
正文:
Who is responsible when artificial intelligence harms someone? A California jury may soon have to decide. In December 2019 a person driving a Tesla with an AI navigation system killed two people in an accident. The driver faces up to 12 years in prison. Several federal agencies are investigating Tesla crashes, and the U.S. Department of Justice has opened a criminal probe into how Tesla markets its self-driving system. And California’s Motor Vehicles Department is examining its use of AI-guided driving features.
当人工智能伤害某人时,谁该负责?加州陪审团可能很快就要做出决定。2019年12月,一名驾驶装有人工智能导航系统的特斯拉的人在一起事故中导致两人丧生。这名司机面临最高12年的监禁。几个联邦机构正在调查特斯拉的车祸,美国司法部已经对特斯拉如何营销其自动驾驶系统展开了刑事调查。加州机动车辆管理部门也在审查其对人工智能引导驾驶功能的使用。
AI navigation system 人工智能导航系统
U.S. Department of Justice美国司法部
opened a criminal probe into展开了刑事调查
注意这里probe是一个名词,意思是调查。
Our current liability system—used to determine responsibility and payment for injuries—is unprepared for AI. Liability rules were designed for a time when humans caused most injuries. But with AI, errors may occur without any direct human input. The liability system needs to adjust accordingly. Bad liability policy won’t just stifle AI innovation. It will also harm patients and consumers.
我们目前的责任体系——用来确定伤害的责任和赔偿——还没有为人工智能做好准备。责任规则是为人类造成大多数伤害的时代设计的。但对于人工智能,在没有任何直接人工输入的情况下,错误可能会发生。责任制度需要相应调整。糟糕的责任政策不仅会扼杀人工智能创新。它还会伤害病人和消费者。
The time to think about liability is now—as AI becomes ubiquitous but remains underregulated. AI-based systems have already contributed to injuries. In 2019 an AI algorithm misidentified a suspect in an aggravated assault, leading to a mistaken arrest. In 2020, during the height of the COVID pandemic, an AI-based mental health chatbot encouraged a simulated suicidal patient to take her own life.
现在正是考虑责任问题的时候——人工智能正变得无处不在,但仍然监管不足。基于人工智能的系统已经造成了伤害。2019年,一个人工智能算法在一起恶意伤害案中错误识别了一名嫌疑人,导致错误逮捕。2020年,在新冠疫情最严重的时期,一个基于人工智能的心理健康聊天机器人鼓励一名模拟的有自杀倾向的患者结束自己的生命。
ubiquitous: being everywhere, very common 无处不在
algorithm(尤指计算机)算法
aggravated assault恶意伤害
chatbot聊天机器人
Getting the liability landscape right is essential to unlocking AI’s potential. Uncertain rules and the prospect of costly litigation will discourage the investment, development and adoption of AI in industries ranging from health care to autonomous vehicles.
正确把握责任格局对于释放人工智能的潜力至关重要。不确定的规则和代价高昂的诉讼前景,将阻碍从医疗保健到自动驾驶汽车等行业对人工智能的投资、开发和采用。
litigation诉讼,起诉
Currently liability inquiries usually start—and stop—with the person who uses the algorithm. Granted, if someone misuses an AI system or ignores its warnings, that person should be liable. But AI errors are often not the fault of the user. Who can fault an emergency room physician for an AI algorithm that misses papilledema—swelling of a part of the retina? An AI’s failure to detect the condition could delay care and possibly cause a patient to lose their sight. Yet papilledema is challenging to diagnose without an ophthalmologist’s examination.
目前,责任调查通常始于——也止于——使用算法的人。当然,如果有人误用了人工智能系统或忽视了它的警告,这个人应该承担责任。但人工智能的错误往往不是用户的错。如果人工智能算法漏诊了视神经乳头水肿(视网膜某一部分的肿胀),谁能因此指责急诊室医生?人工智能未能检测到这种病症可能会延误治疗,并可能导致患者失明。然而,如果没有眼科医师的检查,视神经乳头水肿是很难诊断的。
AI is constantly self-learning, meaning it takes information and looks for patterns in it. It is a “black box,” which makes it challenging to know what variables contribute to its output. This further complicates the liability question. How much can you blame a physician for an error caused by an unexplainable AI? Shifting the blame solely to AI engineers does not solve the issue. Of course, the engineers created the algorithm in question. But could every Tesla Autopilot accident be prevented by more testing before product launch?
人工智能在不断地自我学习,这意味着它获取信息并在其中寻找模式。它是一个“黑箱”,因此很难知道哪些变量影响了它的输出。这使责任问题进一步复杂化。对于无法解释的人工智能所造成的错误,你能在多大程度上责怪医生?把责任完全推给人工智能工程师也不能解决问题。当然,是工程师们创造了相关算法。但是,每一次特斯拉Autopilot事故都能通过在产品发布前进行更多测试来预防吗?
The key is to ensure that all stakeholders—users, developers and everyone else along the chain—bear enough liability to ensure AI safety and effectiveness, though not so much they give up on AI. To protect people from faulty AI while still promoting innovation, we propose three ways to revamp traditional liability frameworks.
关键是要确保所有利益相关者——用户、开发者以及链条上的其他所有人——承担足够的责任,以确保人工智能的安全性和有效性,但又不至于多到让他们放弃人工智能。为了在促进创新的同时保护人们免受有缺陷的人工智能的伤害,我们提出了三种改革传统责任框架的方法。
First, insurers must protect policyholders from the costs of being sued over an AI injury by testing and validating new AI algorithms prior to use. Car insurers have similarly been comparing and testing automobiles for years. An independent safety system can provide AI stakeholders with a predictable liability system that adjusts to new technologies and methods.
首先,保险公司必须在使用前对新的人工智能算法进行测试和验证,以保护投保人免于承担因人工智能伤害而被起诉的成本。多年来,汽车保险公司也一直在以类似方式对汽车进行比较和测试。一个独立的安全体系可以为人工智能利益相关者提供一个可预测的、能随新技术和新方法调整的责任制度。
Second, some AI errors should be litigated in courts with expertise in these cases. These tribunals could specialize in particular technologies or issues, such as dealing with the interaction of two AI systems (say, two autonomous vehicles that crash into each other). Such courts are not new: in the U.S., these courts have adjudicated vaccine injury claims for decades.
第二,一些人工智能错误应该在具备相关专业知识的法庭上进行审理。这些法庭可以专门处理特定的技术或问题,例如两个人工智能系统之间的交互(比如两辆相撞的自动驾驶汽车)。这样的法院并不新鲜:在美国,这类法院裁决疫苗伤害索赔已有几十年的历史。
Third, regulatory standards from federal authorities such as the U.S. Food and Drug Administration or the National Highway Traffic Safety Administration could offset excess liability for developers and users. For example, federal regulations and legislation have replaced certain forms of liability for medical devices. Regulators ought to proactively focus on standard processes for AI development. In doing so, they could deem some AIs too risky to introduce to the market without testing, retesting or validation. This would allow agencies to remain nimble and prevent AI-related injuries, without AI developers incurring excess liability.
第三,美国食品和药物管理局或国家公路交通安全管理局等联邦机构的监管标准,可以抵消开发者和用户的过度责任。例如,联邦法规和立法已经取代了医疗器械的某些责任形式。监管机构应该主动关注人工智能开发的标准流程。这样一来,它们可以认定某些人工智能风险过高,在未经测试、重新测试或验证之前不得推向市场。这将使监管机构保持灵活,防止与人工智能相关的伤害,同时又不会让人工智能开发者承担过多的责任。