
外刊听读| 经济学人 人类与AI的关系

2023-02-07 14:29 作者:狂奔的外刊

Bartleby

巴特尔比

Machine learnings

机器学习

How do employees and customers feel about artificial intelligence?

员工和用户对AI的感受如何?

 

IF YOU ASK something of ChatGPT, an artificial-intelligence (AI) tool that is all the rage, the responses you get back are almost instantaneous, utterly certain and often wrong. It is a bit like talking to an economist. The questions raised by technologies like ChatGPT yield much more tentative answers. But they are ones that managers ought to start asking.

 

如果你向ChatGPT这个风靡一时的人工智能(AI)工具询问一些事情,你得到的回答几乎是即时的、语气笃定的,而且往往是错误的。这有点像与经济学家交谈。而像ChatGPT这样的技术本身引发的问题,答案要含糊得多。但这些正是管理者应该开始提出的问题。

 

One issue is how to deal with employees’ concerns about job security. Worries are natural. An AI that makes it easier to process your expenses is one thing; an AI that people would prefer to sit next to at a dinner party quite another. Being clear about how workers would redirect time and energy that is freed up by an AI helps foster acceptance. So does creating a sense of agency: research conducted by MIT Sloan Management Review and the Boston Consulting Group found that an ability to override an AI makes employees more likely to use it.

 

一个问题是如何处理员工对工作保障的担忧。担忧是自然的。一个让你更容易报销开支的AI是一回事;一个人们在晚宴上宁愿坐在它旁边的AI则完全是另一回事。明确说明员工将如何重新分配AI释放出的时间和精力,有助于提高接受度。营造一种自主感也是如此:《麻省理工斯隆管理评论》和波士顿咨询公司进行的研究发现,拥有否决AI决定的权力会让员工更愿意使用它。

 

Whether people really need to understand what is going on inside an AI is less clear. Intuitively, being able to follow an algorithm’s reasoning should trump being unable to. But a piece of research by academics at Harvard University, the Massachusetts Institute of Technology and the Polytechnic University of Milan suggests that too much explanation can be a problem.

 

人们是否真的需要了解AI内部的运作原理,则不那么确定。直觉上,能够理解算法的推理过程应该好过无法理解。但哈佛大学、麻省理工学院和米兰理工大学的学者进行的一项研究表明,过多的解释反而可能成为问题。

 

Employees at Tapestry, a portfolio of luxury brands, were given access to a forecasting model that told them how to allocate stock to stores. Some used a model whose logic could be interpreted; others used a model that was more of a black box. Workers turned out to be likelier to overrule models they could understand because they were, mistakenly, sure of their own intuitions. Workers were willing to accept the decisions of a model they could not fathom, however, because of their confidence in the expertise of people who had built it. The credentials of those behind an AI matter.

 

奢侈品集团泰佩思琦(Tapestry)的员工可以使用一个预测模型,该模型告诉他们如何为门店分配库存。一些人使用的是逻辑可以解读的模型;另一些人使用的模型则更像一个黑箱。结果发现,员工更有可能推翻自己能够理解的模型,因为他们错误地笃信自己的直觉。然而,员工们却愿意接受自己无法参透的模型所做的决定,因为他们信任模型开发者的专业能力。打造AI的人的资历很重要。

 

The different ways that people respond to humans and to algorithms is a burgeoning area of research. In a recent paper Gizem Yalcin of the University of Texas at Austin and her co-authors looked at whether consumers responded differently to decisions—to approve someone for a loan, for example, or a country-club membership—when they were made by a machine or a person. They found that people reacted the same when they were being rejected. But they felt less positively about an organisation when they were approved by an algorithm rather than a human. The reason? People are good at explaining away unfavourable decisions, whoever makes them. It is harder for them to attribute a successful application to their own charming, delightful selves when assessed by a machine. People want to feel special, not reduced to a data point.

 

人们对人类和算法的不同反应是一个新兴的研究领域。在最近的一篇论文中,德克萨斯大学奥斯丁分校的吉泽姆·亚尔琴(Gizem Yalcin)和她的合著者研究了当决定——例如批准某人的贷款申请或乡村俱乐部会员资格——由机器还是由人做出时,消费者的反应是否会有所不同。他们发现,当申请被拒绝时,人们的反应是一样的。但当申请被算法而非人类批准时,他们对该机构的好感会降低。原因何在?无论决定出自谁手,人们都善于为不利的结果开脱。而当评估者是一台机器时,他们就很难把申请成功归功于自己迷人可爱的特质。人们想要感觉自己很特别,而不是被简化成一个数据点。

 

In a forthcoming paper, meanwhile, Arthur Jago of the University of Washington and Glenn Carroll of the Stanford Graduate School of Business investigate how willing people are to give rather than earn credit—specifically for work that someone did not do on their own. They showed volunteers something attributed to a specific person—an artwork, say, or a business plan—and then revealed that it had been created either with the help of an algorithm or with the help of human assistants. Everyone gave less credit to producers when they were told they had been helped, but this effect was more pronounced for work that involved human assistants. Not only did the participants see the job of overseeing the algorithm as more demanding than supervising humans, but they did not feel it was as fair for someone to take credit for the work of other people.

 

与此同时,华盛顿大学的Arthur Jago和斯坦福大学商学院的Glenn Carroll在一篇即将发表的论文中研究了人们有多愿意把功劳让给他人——特别是对于那些并非某人独立完成的工作。他们向志愿者展示署名为某个特定人物的作品——比如一件艺术品,或者一份商业计划书——然后告诉他们,这些作品是在算法或人类助手的帮助下创作出来的。当志愿者被告知作者得到过帮助时,所有人对作者的评价都有所降低,但这种效应在涉及人类助手的作品上更为明显。参与者不仅认为监督算法的工作比监督人类的工作要求更高,而且认为相比借助算法,把他人完成的工作归功于自己显得不那么公平。

 

Another paper, by Anuj Kapoor of the Indian Institute of Management Ahmedabad and his co-authors, examines whether AIs or humans are more effective at helping people lose weight. The authors looked at the weight loss achieved by subscribers to an Indian mobile app, some of whom used only an AI coach and some of whom used a human coach, too. They found that people who also used a human coach lost more weight, set themselves tougher goals and were more fastidious about logging their activities. But people with a higher body mass index did not do as well with a human coach as those who weighed less. The authors speculate that heavier people might be more embarrassed by interacting with another person.

 

印度管理学院艾哈迈达巴德分校的Anuj Kapoor和他的合著者在另一篇论文中研究了在帮助人们减肥方面,AI教练和真人教练谁更有效。作者考察了一款印度移动应用的订阅用户的减肥效果,其中一些人只使用AI教练,另一些人还同时使用真人教练。他们发现,同时使用真人教练的人减掉了更多体重,为自己设定了更严格的目标,并且在记录自己的活动方面更加一丝不苟。但身体质量指数(BMI)较高的人在真人教练指导下的表现不如体重较轻的人。作者推测,体重较重的人在与他人互动时可能会感到更难为情。

 

The picture that emerges from such research is messy. It is also dynamic: just as technologies evolve, so will attitudes. But it is crystal-clear on one thing. The impact of ChatGPT and other AIs will depend not just on what they can do, but also on how they make people feel.

 

这类研究呈现出的图景是混乱的。它也是动态的:随着技术的发展,人们对AI的态度也会发生变化。但有一点是非常清楚的。ChatGPT和其他AI的影响不仅取决于它们能做什么,还取决于它们给人的感觉。















