
【中英双语】部署AI的正确方式

2022-02-21 13:59 作者:哈佛商业评论


2018年劳动力研究院(Workforce Institute)针对8个工业国家的3000名管理者进行了一项调研,参与者中大多数认为人工智能是有价值的生产力工具。

这点不难理解:AI在处理速度、准确性和持续性(机器不会因疲倦犯错)方面带来了显而易见的好处,很多职业人士都在使用AI。比如一些医务人员利用AI辅助诊断,给出治疗方案。

In a 2018 Workforce Institute survey of 3,000 managers across eight industrialized nations, the majority of respondents described artificial intelligence as a valuable productivity tool.

It’s easy to see why: AI brings tangible benefits in processing speed, accuracy, and consistency (machines don’t make mistakes because they’re tired), which is why many professionals now rely on it. Some medical specialists, for example, use AI tools to help make diagnoses and decisions about treatment.


但参与者也表示担心自己会被AI取代。担心这件事的还不只是参与这项研究的管理者。《卫报》最近报道称,英国600多万员工担心自己被机器取代。我们在各种会议和研讨会上遇到的学者和高管也有同样的担心。AI的优势在一些人眼中更具负面色彩:如果机器能更好地完成工作,还要人类干吗?

But respondents to that survey also expressed fears that AI would take their jobs. They are not alone. The Guardian recently reported that more than 6 million workers in the UK fear being replaced by machines. These fears are echoed by academics and executives we meet at conferences and seminars. AI’s advantages can be cast in a much darker light: Why would humans be needed when machines can do a better job?


这种恐惧感的蔓延表明,公司在为员工提供AI辅助工具时需要注意方式。2020年1月离职的埃森哲前首席信息官安德鲁·威尔逊( Andrew Wilson)说,“企业如果更多地关注AI和人类如何互相帮助,可以实现的价值会更大。”埃森哲发现,如果企业明确表示使用AI的目的是辅助而非取代员工,情况会比那些没有设立这一目标或对使用AI的目的语焉不详的公司好得多,这种差别体现在多个管理生产率维度,特别是速度、延展性和决策有效性。

The prevalence of such fears suggests that organizations looking to reap the benefits of AI need to be careful when introducing it to the people expected to work with it. Andrew Wilson, until January 2020 Accenture’s CIO, says, “The greater the degree of organizational focus on people helping AI, and AI helping people, the greater the value achieved.” Accenture has found that when companies make it clear that they are using AI to help people rather than to replace them, they significantly outperform companies that don’t set that objective (or are unclear about their AI goals) along most dimensions of managerial productivity—notably speed, scalability, and effectiveness of decision-making.


换言之,AI就像加入团队的新人才,企业必须为其成功创造条件,而不是任其失败。明智的雇主会先给新员工一些简单的、风险不高的任务,帮助他们积累实战经验,并安排导师提供帮助和建议。这样一来,新人可以在其他人负责更高价值工作的同时不断学习。随着新人累积经验、证明自身能力,导师会逐步在更关键的决策上信任他们的意见。学徒逐渐成为合作伙伴,为企业贡献技能和见解。

In other words, just as when new talent joins a team, AI must be set up to succeed rather than to fail. A smart employer trains new hires by giving them simple tasks that build hands-on experience in a noncritical context and assigns them mentors to offer help and advice. This allows the newcomers to learn while others focus on higher-value tasks. As they gain experience and demonstrate that they can do the job, their mentors increasingly rely on them as sounding boards and entrust them with more-substantive decisions. Over time an apprentice becomes a partner, contributing skills and insight.


我们认为这一方式也适用于人工智能。下文我们将结合自身及其他学者针对AI和信息系统应用的研究和咨询工作,以及公司创新及工作实践方面的研究,提出应用AI的一种方式,分四个阶段。通过这种方式,企业可以逐步培养员工对AI的信任(这也是接纳AI的关键条件),致力于构建人类和AI同时不断进步的分布式人类-AI认知系统。很多企业都已尝试过第一阶段,部分企业进行到了第二、三阶段;迄今为止第四阶段对多数企业来说还是“未来式”,尚处在早期阶段,但从技术角度来说可以实现,能够为利用人工智能的企业提供更多价值。

We believe this approach can work for artificial intelligence as well. In the following pages we draw on our own and others’ research and consulting on AI and information systems implementation, along with organizational studies of innovation and work practices, to present a four-phase approach to implementing AI. It allows enterprises to cultivate people’s trust—a key condition for adoption—and to work toward a distributed human-AI cognitive system in which people and AI both continually improve. Many organizations have experimented with phase 1, and some have progressed to phases 2 and 3. For now, phase 4 may be mostly a “future-casting” exercise of which we see some early signs, but it is feasible from a technological perspective and would provide more value to companies as they engage with artificial intelligence.

第一阶段  助手

引入人工智能的第一阶段和培训助手的方式十分相似。你教给这位新员工一些基本规则,把自己手头一些基础但耗时的工作(如填写网络表格或汇总文档)交给他,这样你就有时间处理更重要的工作内容。受训者通过观察你、完成任务和提出问题来不断学习。

Phase 1: The Assistant

This first phase of onboarding artificial intelligence is rather like the process of training an assistant. You teach the new employee a few fundamental rules and hand over some basic but time-consuming tasks you normally do (such as filing online forms or summarizing documents), which frees you to focus on more-important aspects of the job. The trainee learns by watching you, performing the tasks, and asking questions.


AI助手的常见任务之一是整理数据。一个典型例子是推荐系统:自20世纪90年代中期以来,企业一直用它帮助用户从数千种产品中筛选出最相关的产品——亚马逊和奈飞在应用这项技术方面处于领先地位。

One common task for AI assistants is sorting data. An example is the recommendation systems companies have used since the mid-1990s to help customers filter thousands of products and find the ones most relevant to them—Amazon and Netflix being among the leaders in this technology.
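To make the filtering idea concrete, here is a minimal, illustrative Python sketch; the catalog, tags, and Jaccard scoring are invented for this example and are not how Amazon or Netflix actually rank items:

```python
# Toy sketch of the kind of filtering a recommender performs: rank catalog
# items by overlap with what a user already liked. Data is invented.
user_liked = {"sci-fi", "space", "thriller"}

catalog = {
    "Film A": {"sci-fi", "space"},
    "Film B": {"romance", "comedy"},
    "Film C": {"thriller", "space", "drama"},
}

def score(tags):
    """Jaccard similarity between an item's tags and the user's taste profile."""
    return len(tags & user_liked) / len(tags | user_liked)

for title, tags in sorted(catalog.items(), key=lambda kv: -score(kv[1])):
    print(f"{title}: {score(tags):.2f}")
```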


现在越来越多的商业决定要用到这种数据分类。例如,资产组合经理在决定投资哪些股票时,要处理的信息量超出了人类的能力,而且还有源源不断的新信息。软件可以根据预先定义的投资标准迅速筛选股票,降低任务难度。自然语言处理技术可以搜集和某公司最相关的新闻,并通过分析师报告评估未来企业活动的舆论情绪。位于伦敦、成立于2002年的马布尔资产管理公司(MBAM)较早将这项技术应用到职场。公司打造了世界一流的RAID(研究分析&信息数据库)平台帮助资产组合经理过滤关于企业活动、新闻走势和股票动向的海量信息。

More and more business decisions now require this type of data sorting. When, for example, portfolio managers are choosing stocks in which to invest, the information available is far more than a human can feasibly process, and new information comes out all the time, adding to the historical record. Software can make the task more manageable by immediately filtering stocks to meet predefined investment criteria. Natural-language processing, meanwhile, can identify the news most relevant to a company and even assess the general sentiment about an upcoming corporate event as reflected in analysts’ reports. Marble Bar Asset Management (MBAM), a London-based investment firm founded in 2002, is an early convert to using such technologies in the workplace. It has developed a state-of-the-art platform, called RAID (Research analysis & Information Database), to help portfolio managers filter through high volumes of information about corporate events, news developments, and stock movements.


AI还可以通过模拟人类行为提供辅助。用过谷歌搜索的人都知道,在搜索框输入一个词,会自动出现提示信息。智能手机的预测性文本也通过类似方式加快打字速度。这种用户模拟技术出现在30多年前,有时叫做判断引导,也可以应用在决策过程中。AI根据员工的决策历史,判定员工在面对多个选择时最有可能做出的选择,并提出建议——帮助人类加快工作速度,而非代替人类完成工作。

Another way AI can lend assistance is to model what a human might do. As anyone who uses Google will have noticed, prompts appear as a search phrase is typed in. Predictive text on a smartphone offers a similar way to speed up the process of typing. This kind of user modeling, related to what is sometimes called judgmental bootstrapping, was developed more than 30 years ago; it can easily be applied to decision-making. AI would use it to identify the choice an employee is most likely to make, given that employee’s past choices, and would suggest that choice as a starting point when the employee is faced with multiple decisions—speeding up, rather than actually doing, the job.
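As a rough illustration of this kind of user modeling, the following sketch suggests a default choice based purely on an employee's recorded history; the class name, contexts, and choices are hypothetical:

```python
# Minimal sketch of "judgmental bootstrapping": learn a user's habitual choice
# for a given context from their own decision history, then surface it as a
# suggested default. All names and data are illustrative.
from collections import Counter, defaultdict

class ChoiceSuggester:
    def __init__(self):
        # context -> counts of the choices the user made in that context
        self.history = defaultdict(Counter)

    def record(self, context, choice):
        """Log one past decision made by the user."""
        self.history[context][choice] += 1

    def suggest(self, context):
        """Return the user's most frequent past choice, or None if unseen."""
        counts = self.history.get(context)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

suggester = ChoiceSuggester()
suggester.record(("expense_report", "travel"), "approve")
suggester.record(("expense_report", "travel"), "approve")
suggester.record(("expense_report", "travel"), "reject")
print(suggester.suggest(("expense_report", "travel")))  # -> "approve"
```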


我们来看一个具体的例子。航空公司员工在决定每架航班的餐饮配备量时,需要填写餐饮订单,其中既有计算,也有基于过往航班经验的假设。决策失误会增加公司成本:订量不足可能惹恼乘客,使其今后不再选择这家航空公司;订量过多则意味着多余的餐食会被扔掉,而且飞机会因此增加不必要的燃油消耗。

Let’s look at this in a specific context. When airline employees are deciding how much food and drink to put on a given flight, they fill out catering orders, which involve a certain amount of calculation together with assumptions based on their experience of previous flights. Making the wrong choices incurs costs: Underordering risks upsetting customers who may avoid future travel on the airline. Overordering means the excess food will go to waste and the plane will have increased its fuel consumption unnecessarily.


这种情况下,人工智能可以派上用场。AI可以通过分析航空公司餐饮经理过往的选择,或者经理设置的规则,预测他会如何下单。通过分析相关历史数据,包括该航线餐饮消耗量及航班乘客的历史购物行为,每趟航线都可以定制这种“自动填写”的“推荐订单”。但是,就像预测性输入一样,人类拥有最后的决定权,可以根据需要随时覆盖。AI仅仅通过模拟或预测他们的决策风格起到辅助作用。

An algorithm can be very helpful in this context. AI can predict what the airline’s catering manager would order by analyzing his or her past choices or using rules set by the manager. This “autocomplete” of “recommended orders” can be customized for every flight using all relevant historical data, including food and drink consumption on the route in question and even past purchasing behavior by passengers on the manifest for that flight. But as with predictive typing, human users can freely overwrite as needed; they are always in the driver’s seat. AI simply assists them by imitating or anticipating their decision style.
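A minimal sketch of such an "autocomplete" order, with invented numbers and a deliberately simple averaging rule; real systems would draw on much richer route and passenger data:

```python
# Illustrative "recommended order" prefill: start from average historical
# consumption on the route, add a small buffer, and leave the final call
# to the catering manager, who can overwrite the suggestion at any time.
from statistics import mean

def recommend_meal_count(past_consumption, buffer_rate=0.05):
    """Suggest an order size from past flights on the same route."""
    return round(mean(past_consumption) * (1 + buffer_rate))

history = [148, 152, 160, 155]                # meals consumed on prior flights (made up)
suggested = recommend_meal_count(history)     # the AI's prefilled value
manager_override = None                       # the human stays in the driver's seat
final_order = manager_override if manager_override is not None else suggested
print(f"Suggested: {suggested}, submitted: {final_order}")
```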


管理者以这种方式与AI协作,应该不会太困难。我们在生活中已经这样做了:在网上填写表格时,我们允许自动补全功能预填信息。在职场,管理者可以制定AI助手在填表时要遵守的具体规则。事实上,很多企业目前使用的办公软件(例如信用评级程序)正是人类定义的决策规则的集合。AI助手可以通过记录管理者实际遵守这些规则的情境,进一步提炼规则。这种学习无需管理者改变任何行为,更不需要刻意“教导”AI助手。

It should not be a stretch for managers to work with AI in this way. We already do so in our personal lives, when we allow the autocomplete function to prefill forms for us online. In the workplace a manager can, for example, define specific rules for an AI assistant to follow when completing forms. In fact, many software tools currently used in the workplace (such as credit-rating programs) are already just that: collections of human-defined decision rules. The AI assistant can refine the rules by codifying the circumstances under which the manager actually follows them. This learning needn’t involve any change in the manager’s behavior, let alone any effort to “teach” the assistant.


第二阶段  监测者

下一步是设定AI系统,为人类提供实时反馈。借助机器学习程序,AI经过训练后可以准确预测用户在某一情境下会做出的决策(排除因过度自信或疲劳等导致的非理性情况)。假如用户即将做出的选择与其过去的选择记录不一致,系统会标记出矛盾之处。在决策量很大的工作中,人类员工可能因劳累或分心而出错,这种方式尤其有帮助。

Phase 2: The Monitor

The next step is to set up the AI system to provide real-time feedback. Thanks to machine-learning programs, AI can be trained to accurately forecast what a user’s decision would be in a given situation (absent lapses in rationality owing to, for example, overconfidence or fatigue). If a user is about to make a choice that is inconsistent with his or her choice history, the system can flag the discrepancy. This is especially helpful during high-volume decision-making, when human employees may be tired or distracted.
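One way such a monitor might be sketched, assuming scikit-learn is available; the features, toy history, and flagging threshold are illustrative, not a description of any production system:

```python
# Sketch of the phase 2 "monitor": fit a model on a user's past decisions,
# then flag a new decision when it disagrees with what that history predicts.
from sklearn.linear_model import LogisticRegression

# toy history: features of past cases and the decision the user made (1 = approve)
X_history = [[0.2, 1], [0.8, 0], [0.3, 1], [0.9, 0], [0.1, 1], [0.7, 0]]
y_history = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X_history, y_history)

def check_decision(case_features, intended_decision, threshold=0.2):
    """Warn if the intended decision is unlikely given the user's own history."""
    prob = model.predict_proba([case_features])[0][intended_decision]
    if prob < threshold:
        print(f"Flag: this choice matches only {prob:.0%} of your past pattern.")
    return prob

check_decision([0.15, 1], intended_decision=0)  # atypical call -> flagged
```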


心理学、行为经济学和认知科学的研究表明,人类的推理能力有限,而且有缺陷,特别是在商业活动中无处不在的统计学和概率性问题上。一些针对法庭审判决定的研究(本文作者之一陈参与了研究)表明,法官在午餐前更容易通过申请政治避难的案件;如果法官支持的美国职业橄榄球联盟球队在开庭前一天获胜,他们在开庭当天的判罚会更轻;如果被告当天生日,法官会对其手下留情。很明显,如果软件可以告诉决策者他们即将做出的决定与之前有所矛盾,或者不符合纯粹从司法角度分析的预测结果,也许更能体现公平公正。

Research in psychology, behavioral economics, and cognitive science shows that humans have limited and imperfect reasoning capabilities, especially when it comes to statistical and probabilistic problems, which are ubiquitous in business. Several studies (of which one of us, Chen, is a coauthor) concerning legal decisions found that judges grant political asylum more frequently before lunch than after, that they give lighter prison sentences if their NFL team won the previous day than if it lost, and that they will go easier on a defendant on the latter’s birthday. Clearly justice might be better served if human decision makers were assisted by software that told them when a decision they were planning to make was inconsistent with their prior decisions or with the decision that an analysis of purely legal variables would predict.


AI可以提供这类反馈。另一项研究(陈同样是作者之一)表明,基于一组基本法律变量(由研究作者构建)建模的AI程序,在避难申请案件开庭当天即可对判决结果做出准确率约80%的预测。研究作者还为程序加入了学习功能,使其能够根据某位法官过去的判决,模拟这位法官的决策过程。

AI can deliver that kind of input. Another study (also with Chen as a coauthor) showed that AI programs processing a model made up of basic legal variables (constructed by the study’s authors) can predict asylum decisions with roughly 80% accuracy on the date a case opens. The authors have added learning functionality to the program, which enables it to simulate the decision-making of an individual judge by drawing on that judge’s past decisions.


这一方法也适用于其他情境。例如,马布尔资产管理公司的资产组合经理(PM)在考虑可能提升整体资产组合风险的买卖决定时——比如加大对某个行业或地区的敞口——系统会在电子化交易流程中弹出提示,让他们可以适当调整。只要没有突破公司的风险限制,PM可以忽略这类反馈;但无论如何,这种反馈都有助于PM反思自己的决策。

The approach translates well to other contexts. For example, when portfolio managers (PMs) at Marble Bar Asset Management consider buy or sell decisions that may raise the overall portfolio risk—for example, by increasing exposure to a particular sector or geography—the system alerts them through a pop-up during a computerized transaction process so that they can adjust appropriately. A PM may ignore such feedback as long as company risk limits are observed. But in any case the feedback helps the PM reflect on his or her decisions.
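A toy illustration of this kind of pre-trade alert; the positions, sector limit, and function names are invented and are not MBAM's actual risk logic:

```python
# Check whether a proposed trade would push sector exposure past a risk limit,
# warn the PM, and still let the PM make the final call. Numbers are made up.
positions = {"tech": 0.18, "energy": 0.10, "healthcare": 0.12}  # fraction of portfolio
SECTOR_LIMIT = 0.25

def propose_trade(sector, added_weight):
    new_exposure = positions.get(sector, 0.0) + added_weight
    if new_exposure > SECTOR_LIMIT:
        print(f"Alert: {sector} exposure would reach {new_exposure:.0%} "
              f"(limit {SECTOR_LIMIT:.0%}). Proceed only if intended.")
    positions[sector] = new_exposure  # the PM remains in control of the decision

propose_trade("tech", 0.10)  # triggers the pop-up style warning
```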


AI当然并不总是“正确的”。AI的建议往往不会考虑到人类决策者才掌握的可靠的私人信息,因此也许并不会纠正潜在的行为偏差,而是起到反作用。所以对AI的使用应该是互动式的,算法根据数据提醒人类,而人类教会AI为什么自己忽略了某个提醒。这样做提高了AI的效用,也保留了人类决策者的自主权。

Of course AI is not always “right.” Often its suggestions don’t take into account some reliable private information to which the human decision maker has access, so the AI might steer an employee off course rather than simply correct for possible behavioral biases. That’s why using it should be like a dialogue, in which the algorithm provides nudges according to the data it has while the human teaches the AI by explaining why he or she overrode a particular nudge. This improves the AI’s usefulness and preserves the autonomy of the human decision maker.
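The dialogue described here could be as simple as logging each override together with the human's explanation, so the model can later be retrained on that data. A minimal sketch with hypothetical field names:

```python
# Record why a user overrode a nudge; the explanation is new training signal.
import json
import datetime

override_log = []

def record_override(case_id, nudge, user_choice, reason):
    override_log.append({
        "case_id": case_id,
        "nudge": nudge,
        "user_choice": user_choice,
        "reason": reason,  # the human teaching the AI
        "timestamp": datetime.datetime.now().isoformat(),
    })

record_override("txn-042", nudge="reject", user_choice="approve",
                reason="Counterparty verified by phone; private information.")
print(json.dumps(override_log, indent=2))
```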


可惜很多AI系统的应用方式侵占了人类的自主权。例如,算法一旦将某银行交易标记为潜在诈骗,职员必须请主管甚至外部审计员确认后,才能批准这一交易。有时,人类几乎不可能撤销机器做出的决定,客户和客服人员一直对此感到挫败。很多情况下AI的决策逻辑很模糊,即便犯错员工也没有资格表示质疑。

Unfortunately, many AI systems are set up to usurp that autonomy. Once an algorithm has flagged a bank transaction as possibly fraudulent, for example, employees are often unable to approve the transaction without clearing it with a supervisor or even an outside auditor. Sometimes undoing a machine’s choice is next to impossible—a persistent source of frustration for both customers and customer service professionals. In many cases the rationale for an AI choice is opaque, and employees are in no position to question that choice even when mistakes have been made.


机器搜集人类决策数据时,隐私是另一大问题。除了在人类与AI的互动中把控制权交给人类,我们还要确保机器搜集的个人数据得到保密。工程团队与管理层之间应当设有一道隔离墙,否则员工可能会担心:如果自己毫无防备地与系统互动并犯了错,日后会因此受到惩罚。

Privacy is another big issue when machines collect data on the decisions people make. In addition to giving humans control in their exchanges with AI, we need to guarantee that any data it collects on them is kept confidential. A wall ought to separate the engineering team from management; otherwise employees may worry that if they freely interact with the system and make mistakes, they might later suffer for them.


此外,企业应该在AI设计和互动方面制定规则,确保公司规范和实践的一致性。这类规则要详细描述在预测准确性达到何种程度的情况下需要做出提醒,何时需要给出提醒原因,确定提醒的标准,以及员工在何时应当听从AI指令、何时该请主管决定如何处理。

Also, companies should set rules about designing and interacting with AI to ensure organizational consistency in norms and practices. These rules might specify the level of predictive accuracy required to show a nudge or to offer a reason for one; criteria for the necessity of a nudge; and the conditions under which an employee should either follow the AI’s instruction or refer it to a superior rather than accept or reject it.
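Such rules could be captured as an explicit, reviewable policy rather than behavior buried in code. The following sketch is purely illustrative; the thresholds and trigger names are assumptions, not a standard:

```python
# An explicit nudge policy: when to nudge, when to explain, when to escalate.
NUDGE_POLICY = {
    "min_model_accuracy": 0.85,   # don't nudge unless validated accuracy reaches this level
    "always_show_reason": True,   # every nudge must carry an explanation
    "escalation_triggers": {"regulatory_flag", "risk_limit_breach"},  # refer to a supervisor
}

def should_nudge(model_accuracy, trigger):
    if model_accuracy < NUDGE_POLICY["min_model_accuracy"]:
        return "no_nudge"
    if trigger in NUDGE_POLICY["escalation_triggers"]:
        return "refer_to_supervisor"
    return "nudge_with_reason" if NUDGE_POLICY["always_show_reason"] else "nudge"

print(should_nudge(0.91, "inconsistent_with_history"))  # -> nudge_with_reason
```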


为了让员工在第二阶段保有控制感,我们建议管理者和系统设计人员在设计时请员工参与:请他们作为专家,定义将要使用的数据,并决定基本的事实;让员工在研发过程中熟悉模型;应用模型后为员工提供培训和互动机会。这一过程中,员工会了解建模过程、数据管理方式和机器推荐的依据。

To help employees retain their sense of control in phase 2, we advise managers and systems designers to involve them in design: Engage them as experts to define the data that will be used and to determine ground truth; familiarize them with models during development; and provide training and interaction as those models are deployed. In the process, employees will see how the models are built, how the data is managed, and why the machines make the recommendations they do.


第三阶段  教练

普华永道最近一项调研表明,近60%的参与者表示希望获得每日或每周一次的工作表现反馈。原因并不复杂。正如彼得·德鲁克(Peter Drucker)2005年在著名的《哈佛商业评论》文章《管理自己》(“Managing Oneself”)中指出的,人们一般都不知道自己擅长什么;当他们自以为知道时,往往是错的。

Phase 3: The Coach

In a recent PwC survey nearly 60% of respondents said that they would like to get performance feedback on a daily or a weekly basis. It’s not hard to see why. As Peter Drucker asserted in his famous 2005 Harvard Business Review article “Managing Oneself,” people generally don’t know what they are good at. And when they think they do know, they are usually wrong.


问题在于,发现自身优势、获得改进机会的唯一方式是通过关键决策和行为的缜密分析。而这需要记录自己对结果的预期,9到12个月后再将现实和预期进行比较。因此,员工获得的反馈往往来自上级主管在工作总结时的评价,无法自己选择时间和形式。这个事实很可惜,因为纽约大学的特莎·韦斯特(Tessa West)在近期神经科学方面的研究中发现,如果员工感到自主权受保护,可以自行掌控对话(例如能选择收到反馈的时间),就能更好地对反馈做出反应。

The trouble is that the only way to discover strengths and opportunities for improvement is through a careful analysis of key decisions and actions. That requires documenting expectations about outcomes and then, nine months to a year later, comparing those expectations with what actually happened. Thus the feedback employees get usually comes from hierarchical superiors during a review—not at a time or in a format of the recipient’s choosing. That is unfortunate, because, as Tessa West of New York University found in a recent neuroscience study, the more people feel that their autonomy is protected and that they are in control of the conversation—able to choose, for example, when feedback is given—the better they respond to it.


AI可以解决这一问题。前文描述的程序可以给员工提供反馈,让他们自查绩效,反省自己的错误。每月一次根据员工历史表现提取的分析数据,也许可以帮助他们更好地理解决策模式和实践。几家金融公司正在采用这一措施。例如MBAM的资产组合经理接受来自数据分析系统的反馈,该系统会统计每个人的投资决定。

AI could address this problem. The capabilities we’ve already mentioned could easily generate feedback for employees, enabling them to look at their own performance and reflect on variations and errors. A monthly summary analyzing data drawn from their past behavior might help them better understand their decision patterns and practices. A few companies, notably in the financial sector, are taking this approach. Portfolio managers at MBAM, for example, receive feedback from a data analytics system that captures investment decisions at the individual level.


数据可以揭示资产组合经理之间有趣且各不相同的行为偏差。一些经理更厌恶损失,对表现不佳的投资迟迟不肯止损;另一些则过度自信,可能对某项投资持仓过重。AI分析会识别这些行为,并像教练一样提供个性化反馈,标记行为随时间的变化,给出改进决策的建议。但如何采纳这些反馈,最终由PM自己决定。MBAM的领导团队认为,这种“交易优化”正逐渐成为公司核心的差异化因素,既帮助资产组合经理成长,也让公司更具吸引力。

The data can reveal interesting and varying biases among PMs. Some may be more loss-averse than others, holding on to underperforming investments longer than they should. Others may be overconfident, possibly taking on too large a position in a given investment. The analysis identifies these behaviors and—like a coach—provides personalized feedback that highlights behavioral changes over time, suggesting how to improve decisions. But it is up to the PMs to decide how to incorporate the feedback. MBAM’s leadership believes this “trading enhancement” is becoming a core differentiator that both helps develop portfolio managers and makes the organization more attractive.
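As a toy example of this coach-style analysis, the sketch below computes one classic loss-aversion signal (holding losers far longer than winners) from invented trade records; it is not MBAM's actual method:

```python
# Surface a simple behavioral signal from a PM's own trade history.
from statistics import mean

trades = [  # each record: realized pnl and days the position was held (made up)
    {"pnl": -120, "days_held": 42}, {"pnl": 300, "days_held": 9},
    {"pnl": -80,  "days_held": 35}, {"pnl": 150, "days_held": 12},
]

losing_hold = mean(t["days_held"] for t in trades if t["pnl"] < 0)
winning_hold = mean(t["days_held"] for t in trades if t["pnl"] > 0)

if losing_hold > 2 * winning_hold:
    print(f"Coach note: losers held {losing_hold:.0f} days vs. "
          f"{winning_hold:.0f} for winners - possible loss aversion.")
```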


更重要的是,好导师可以从被指导者身上学到东西,机器学习的“教练程序”也可以从有自主权的人类员工的决策中学习。上述关系中,人类可以反对“教练程序”,由此产生的新数据会改变AI的隐含模型。例如,如果由于近期公司事件,资产组合经理决定不对某个标记股票进行交易,他可以给系统做出解释。有了这种反馈,系统可以持续搜集分析数据并得出洞见。

What’s more, just as a good mentor learns from the insights of the people who are being mentored, a machine-learning “coachbot” learns from the decisions of an empowered human employee. In the relationship we’ve described, a human can disagree with the coachbot—and that creates new data that will change the AI’s implicit model. For example, if a portfolio manager decides not to trade a highlighted stock because of recent company events, he or she can provide an explanation to the system. With feedback, the system continually captures data that can be analyzed to provide insights.


如果员工能够理解并掌控与AI的互动,就更有可能把它视为获得反馈的安全渠道——其目的是帮助人提升绩效,而不是评估绩效。选择正确的界面有助于实现这一点。例如,MBAM的交易优化工具(比如可视化界面)就是根据PM的偏好定制的。

If employees can relate to and control exchanges with artificial intelligence, they are more likely to see it as a safe channel for feedback that aims to help rather than to assess performance. Choosing the right interface is useful to this end. At MBAM, for example, trading enhancement tools—visuals, for instance—are personalized to reflect a PM’s preferences.


和第二阶段一样,让员工参与系统设计至关重要。当AI担任教练时,人们会更害怕被剥夺权力。AI很容易被视为竞争对手而不仅是合作伙伴——谁愿意显得不如机器聪明呢?对自主权和隐私的担忧也可能更强烈。与教练共事需要坦诚,但如果这位“教练”之后可能把自己表现不佳的数据分享给HR,人们也许就不愿对其敞开心扉。

As in phase 2, involving employees in designing the system is essential. When AI is a coach, people will be even more fearful of disempowerment. It can easily seem like a competitor as well as a partner—and who wants to feel less intelligent than a machine? Concerns about autonomy and privacy may be even stronger. Working with a coach requires honesty, and people may hesitate to be open with one that might share unflattering data with the folks in HR.

前三阶段部署AI的方式当然有不足之处。长远来看,新技术创造出的工作比毁掉的多,但就业市场的颠覆过程可能会很痛苦。马特·比恩(Matt Beane)在《人机共生:组织新生态》(“Learning to Work with Intelligent Machines”,2019年《哈佛商业评论》9月刊)一文中称,部署AI的企业给员工亲身实践以及导师指导的机会更少。

Deploying AI in the ways described in the first three phases does of course have some downsides. Over the long term new technologies create more jobs than they destroy, but meanwhile labor markets may be painfully disrupted. What’s more, as Matt Beane argues in “Learning to Work with Intelligent Machines” (HBR, September–October 2019), companies that deploy AI can leave employees with fewer opportunities for hands-on learning and mentorship.


因此,风险的确存在,人类不仅失去了初级职位(由于数字助手可以有效取代人类),还可能牺牲未来决策者自主决策的能力。但这并非不可避免,比恩在文章中指出,企业可以在利用人工智能为员工创造不同和更好的学习机会的同时提升系统透明度,并给员工更多控制权。未来的职场新人都将成长于人力加机器的工作环境,肯定比“前AI时代”的同事更能快速发现创新、增加价值和创造工作的机会。这把我们带到了最后一个阶段。

There is some risk, therefore, not only of losing entry-level jobs (because digital assistants can effectively replace human ones) but also of compromising the ability of future decision makers to think for themselves. That’s not inevitable, however. As Beane suggests, companies could use their artificial intelligence to create different and better learning opportunities for their employees while improving the system by making it more transparent and giving employees more control. Because future entrants to the workforce will have grown up in a human-plus-machine workplace, they will almost certainly be faster than their pre-AI colleagues at spotting opportunities to innovate and introduce activities that add value and create jobs—which brings us to the final phase.


第四阶段  队友

认知人类学家埃德温·赫钦斯(Edwin Hutchins)提出了著名的分布式认知理论。该理论基于他对舰船导航的研究:他发现,导航工作是由水手、海图、标尺、指南针和绘图工具共同完成的。该理论与“延展心智”的概念大体相关,后者认为认知加工以及信念、意图等相关心理活动并不一定仅限于大脑,甚至不限于身体。在适当条件下,外部工具和仪器可以参与认知加工,形成所谓的耦合系统。

Phase 4: The Teammate

Edwin Hutchins, a cognitive anthropologist, developed what is known as the theory of distributed cognition. It is based on his study of ship navigation, which, he showed, involved a combination of sailors, charts, rulers, compasses, and a plotting tool. The theory broadly relates to the concept of extended mind, which posits that cognitive processing, and associated mental acts such as belief and intention, are not necessarily limited to the brain, or even the body. External tools and instruments can, under the right conditions, play a role in cognitive processing and create what is known as a coupled system.


与这一思路一致,在AI应用的最后一个阶段(据我们所知,尚无企业达到这一水平),企业将打造一个由人类和机器共同贡献专长的耦合网络。我们认为,随着AI在与个体用户的交互中不断改进——借助专家过往决策和行为数据进行分析甚至建模——在完全整合了AI教练程序的企业里,自然会涌现出一个由人类和机器组成的专家社群。举例来说,如果采购经理在决策时只需轻轻一点,就能看到其他专家可能给出的报价,那么这种定制化的专家集体就能为其带来帮助。

In line with this thinking, in the final phase of the AI implementation journey (which to our knowledge no organization has yet adopted) companies would develop a coupled network of humans and machines in which both contribute expertise. We believe that as AI improves through its interactions with individual users, analyzing and even modeling expert users by drawing on data about their past decisions and behaviors, a community of experts (humans and machines) will naturally emerge in organizations that have fully integrated AI coachbots. For example, a purchasing manager who—with one click at the moment of decision—could see what price someone else would give could benefit from a customized collective of experts.
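One can imagine the "collective of experts" surfacing as a simple panel of suggestions pooled from several sources at the moment of decision. The sketch below is entirely hypothetical; every name and number is invented:

```python
# Pool price suggestions from several "experts" (human-derived and machine
# models alike) and show them side by side at decision time.
expert_models = {
    "own_history_model": lambda item: 102.0,   # placeholder models returning fixed prices
    "regional_peers":    lambda item: 98.5,
    "category_expert":   lambda item: 100.0,
}

def price_panel(item):
    """Return each expert's suggested price for the item being purchased."""
    return {name: model(item) for name, model in expert_models.items()}

print(price_panel("industrial-valve-K3"))
```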


尽管实现这种集体智慧的技术已经存在,这一阶段仍然充满挑战。例如,任何此类AI整合都必须避免植入旧有或新的偏见,并且必须尊重人类的隐私顾虑,这样人们才能像信任人类伙伴一样信任AI。这本身就是不小的挑战——大量研究已经表明,在人与人之间建立信任尚且困难重重。

Although the technology to create this kind of collective intelligence now exists, this phase is fraught with challenges. For example, any such integration of AI must avoid building in old or new biases and must respect human privacy concerns so that people can trust the AI as much as they would a human partner. That in itself is a pretty big challenge, given the volume of research demonstrating how hard it is to build trust among humans.


在职场建立信任的最佳方式是增进理解。卡内基梅隆大学戴维·丹克斯(David Danks)和同事就这一主题进行了研究,根据其模型,一个人信任某人的原因是理解对方的价值观、欲望和目的,对方也表明始终关心我的利益。理解一直是人类彼此信任的基础,也很适合人类和AI发展关系,因为人类对人工智能的恐惧通常也是由于对AI运作方式的不理解。

The best approaches to building trust in the workplace rely on the relationship between trust and understanding—a subject of study by David Danks and colleagues at Carnegie Mellon. According to this model, I trust someone because I understand that person’s values, desires, and intentions, and they demonstrate that he or she has my best interests at heart. Although understanding has historically been a basis for building trust in human relationships, it is potentially well suited to cultivating human–AI partnerships as well, because employees’ fear of artificial intelligence is usually grounded in a lack of understanding of how AI works.


在建立理解的过程中,一个特别的难题是如何定义“解释”,更不用说“好的解释”。很多研究都在关注这个难题。例如,本文作者之一伊维纽正尝试通过所谓的“反事实解释”来打开机器学习的“黑匣子”。“反事实解释”通过找出左右决策走向的一小组交易特征,来阐明AI系统做出某个决定(例如批准某笔交易)的原因。如果其中任何一项特征有所不同(即与事实相反),系统就会做出不同的决定(该交易会被拒绝)。

In building understanding, a particular challenge is defining what “explanation” means—let alone “good explanation.” This challenge is the focus of a lot of research. For example, one of us (Evgeniou) is working to open up machine-learning “black boxes” by means of so-called counterfactual explanations. A counterfactual explanation illuminates a particular decision of an AI system (for example, to approve credit for a given transaction) by identifying a short list of transaction characteristics that drove the decision one way or another. Had any of the characteristics been different (or counter to the fact), the system would have made a different decision (credit would have been denied).
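A stripped-down illustration of a counterfactual explanation, using an invented approval rule and features; real methods search over many features under constraints, but the underlying idea is the same:

```python
# For a toy approval rule, report which single feature change would have
# flipped the decision - the "counter to the fact" alternative.
def approve(tx):
    score = 2 * tx["account_age_years"] - 5 * tx["prior_chargebacks"] + 3 * tx["on_time_ratio"]
    return score >= 4

def counterfactuals(tx, candidate_changes):
    """List (feature, old, new) changes that would flip the original decision."""
    original = approve(tx)
    flips = []
    for feature, new_value in candidate_changes:
        altered = {**tx, feature: new_value}
        if approve(altered) != original:
            flips.append((feature, tx[feature], new_value))
    return flips

tx = {"account_age_years": 1, "prior_chargebacks": 1, "on_time_ratio": 0.9}
print("approved" if approve(tx) else "declined")                       # -> declined
print(counterfactuals(tx, [("prior_chargebacks", 0), ("account_age_years", 4)]))
```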


伊维纽还在研究人们认为什么样的解释才算是对AI决策的好解释。例如,是以特征的逻辑组合来呈现(“因为具备X、Y、Z三个特征,所以这笔交易获批”)更好,还是相对于其他决定来呈现(“这笔交易获批是因为它与其他获批交易相似,这些交易供你参考”)更好?随着针对AI可解释性的研究不断深入,AI系统会变得更加透明,从而有助于建立信任。

Evgeniou is also exploring what people perceive as good explanations for AI decisions. For example, do they see an explanation as better when it’s presented in terms of a logical combination of features (“The transaction was approved because it had X,Y,Z characteristics”) or when it’s presented relative to other decisions (“The transaction was approved because it looks like other approved transactions, and here they are for you to see”)? As research into what makes AI explainable continues, AI systems should become more transparent, thus facilitating trust.


采用新技术一直是重大挑战——技术的影响力越大,挑战也越大。正因为潜在影响巨大,人工智能可能被视为格外难以落地。但如果审慎推进,这一过程可以相当顺利。这也正是企业必须负责任地设计和开发AI的原因:尤其要重视透明度、决策自主权和隐私,并让将与AI共事的人参与其中。否则,面对以自己无法理解的方式做出各种决策的机器,人们担心被其限制甚至取代,也是情理之中。

Adopting new technologies has always been a major challenge—and the more impact a technology has, the bigger the challenge is. Because of its potential impact, artificial intelligence may be perceived as particularly difficult to implement. Yet if done mindfully, adoption can be fairly smooth. That is precisely why companies must ensure that AI’s design and development are responsible—especially with regard to transparency, decision autonomy, and privacy—and that it engages the people who will be working with it. Otherwise they will quite reasonably fear being constrained—or even replaced—by machines that are making all sorts of decisions in ways they don’t understand.


关键在于克服恐惧,建立信任。本文描述的四个阶段都是由人类制定基本规则。通过负责任的设计,AI可以成为人类工作中真正的合作伙伴——一以贯之地快速处理大量各式数据,提升人类的直觉和创造力,让人类反过来指导机器。

Getting past these fears to create a trusting relationship with AI is key. In all four phases described in these pages, humans determine the ground rules. With a responsible design, AI may become a true partner in the workplace—rapidly processing large volumes of varied data in a consistent manner to enhance the intuition and creativity of humans, who in turn teach the machine.


鲍里斯·巴比克是欧洲工商管理学院决策科学助理教授。丹尼尔·陈是图卢兹经济学院高级研究所教授,世界银行司法改革计划数据和证据首席研究员。赛奥佐罗斯·伊维纽是欧洲工商管理学院决策科学和技术管理教授,马布尔资产管理公司顾问。安妮-劳伦·法雅德是纽约大学坦登工程学院创新、设计和企业研究副教授。

牛文静 | 译    蒋荟蓉 | 校    时青靖 | 编辑



 
