The Economist | Machine learnings (ChatGPT)

Machine learnings (ChatGPT)

What questions do technologies like ChatGPT raise for employees and customers?

Feb 2nd 2023


If you ask something of ChatGPT, an artificial-intelligence (AI) tool that is all the rage, the responses you get back are almost instantaneous, utterly certain and often wrong. It is a bit like talking to an economist. The questions raised by technologies like ChatGPT yield much more tentative answers. But they are ones that managers ought to start asking.

One issue is how to deal with employees’ concerns about job security. Worries are natural. An AI that makes it easier to process your expenses is one thing; an AI that people would prefer to sit next to at a dinner party quite another. Being clear about how workers would redirect time and energy that is freed up by an AI helps foster acceptance. So does creating a sense of agency: research conducted by MIT Sloan Management Review and the Boston Consulting Group found that an ability to override an AI makes employees more likely to use it.

Whether people really need to understand what is going on inside an AI is less clear. Intuitively, being able to follow an algorithm’s reasoning should trump being unable to. But a piece of research by academics at Harvard University, the Massachusetts Institute of Technology and the Polytechnic University of Milan suggests that too much explanation can be a problem.

Employees at Tapestry, a portfolio of luxury brands, were given access to a forecasting model that told them how to allocate stock to stores. Some used a model whose logic could be interpreted; others used a model that was more of a black box. Workers turned out to be likelier to overrule models they could understand because they were, mistakenly, sure of their own intuitions. Workers were willing to accept the decisions of a model they could not fathom, however, because of their confidence in the expertise of people who had built it. The credentials of those behind an AI matter.

The different ways that people respond to humans and to algorithms are a burgeoning area of research. In a recent paper Gizem Yalcin of the University of Texas at Austin and her co-authors looked at whether consumers responded differently to decisions—to approve someone for a loan, for example, or a country-club membership—when they were made by a machine or a person. They found that people reacted the same way when they were being rejected. But they felt less positively about an organisation when they were approved by an algorithm rather than a human. The reason? People are good at explaining away unfavourable decisions, whoever makes them. It is harder for them to attribute a successful application to their own charming, delightful selves when assessed by a machine. People want to feel special, not reduced to a data point.

In a forthcoming paper, meanwhile, Arthur Jago of the University of Washington and Glenn Carroll of the Stanford Graduate School of Business investigate how willing people are to give rather than earn credit—specifically for work that someone did not do on their own. They showed volunteers something attributed to a specific person—an artwork, say, or a business plan—and then revealed that it had been created either with the help of an algorithm or with the help of human assistants. Everyone gave less credit to producers when they were told they had been helped, but this effect was more pronounced for work that involved human assistants. Not only did the participants see the job of overseeing the algorithm as more demanding than supervising humans, but they did not feel it was as fair for someone to take credit for the work of other people.

Another paper, by Anuj Kapoor of the Indian Institute of Management Ahmedabad and his co-authors, examines whether AIs or humans are more effective at helping people lose weight. The authors looked at the weight loss achieved by subscribers to an Indian mobile app, some of whom used only an AI coach and some of whom used a human coach, too. They found that people who also used a human coach lost more weight, set themselves tougher goals and were more fastidious about logging their activities. But people with a higher body mass index did not do as well with a human coach as those who weighed less. The authors speculate that heavier people might be more embarrassed by interacting with another person.

The picture that emerges from such research is messy. It is also dynamic: just as technologies evolve, so will attitudes. But it is crystal-clear on one thing. The impact of ChatGPT and other AIs will depend not just on what they can do, but also on how they make people feel.
