ChatGPT Teaches English: "Generative AI: what is it good for?"
The following was generated automatically by AI.
source: https://www.youtube.com/watch?v=gCDacaohqaA
Generative AI is the technology behind the wave of new online tools used by millions around the world. Some can answer queries on a huge range of topics in conversational language. Others can generate realistic-looking photographs from short text prompts. As the technology is ever more widely deployed, what are its current strengths and weaknesses? The Economist's science editor Alok Jha is joined by deputy editor Tom Standage, science correspondent Abbey Vertex and global business and economics correspondent Arjun Ramani to discuss this new era of AI.

What happened in 2017 was that some researchers at Google came up with a better attention mechanism called the transformer, which is what the T in GPT stands for. Essentially, that made these systems a lot better: they could come up with longer pieces of coherent output, whether that's text, computer code or anything else. So there was a technical breakthrough, and it took a while to ripple through the community. That's one of the things that changed. The other is that this technology became much more visible. What happened last year is that a much more capable model, GPT-3.5, was launched as ChatGPT, which anyone could sign up for. Once you got to the front of the waiting list, you could go and talk to it. You'll have heard the numbers: 100 million people tried it within the first two months, which is reckoned to be the fastest adoption of a consumer technology in history. So the thing that really changed is that suddenly there was a way for anyone to use this technology. People came up with all sorts of amazing uses for it and asked it to do all sorts of extraordinary things, and that was what really put it on the map.

I think one of the huge strengths of these large language models is that they're able to crunch and churn through such scads of unlabelled data. In the past with AI, you always needed your piece of data and a label.
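The "attention mechanism" credited above as the transformer's breakthrough can be illustrated with a minimal sketch. This is toy scaled dot-product attention in NumPy, with made-up dimensions and random data for illustration only, not the actual GPT architecture:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer idea: each query scores every key,
    and the values are mixed according to those scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Softmax over each row, so attention weights are positive and sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Three tokens with four-dimensional embeddings (toy data).
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))

output, weights = scaled_dot_product_attention(Q, K, V)
print(output.shape)           # one mixed value vector per token
print(weights.sum(axis=-1))   # each row of weights sums to 1
```

Every output token is a weighted blend of all the value vectors, which is what lets these models keep long stretches of text coherent: each position can "look at" every other position.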
So that required humans to go through and label everything. But with large language models, you essentially take a blurry snapshot of hundreds of millions of words of unlabelled text, and it honestly just seems to work well. I think a lot of people are still baffled by that.

It generates convincing text and it's very good at pattern matching. Style transfer is another strength: you can ask it to write a love letter in the style of a fourteenth-century pirate with an Irish accent from the Bahamas. It's also pretty good at passing standardised tests; it has passed the US Medical Licensing Examination and some legal tests. Basically, it's very good at text. At the moment, one of the big opportunities is writing code. The great thing about writing code with these systems (and I do still write some code) is that if the code is slightly wrong, you find out straight away, because either the interpreter or the compiler chokes on it, or the output of the code isn't quite what you were expecting. So you have a very tight feedback loop, and if something is slightly wrong, you find out pretty quickly.

In terms of weaknesses, I think one of them is the lack of transparency: it's something of a black box. You can get access to what's going on inside, the attention weights, but they don't mean a lot to us. There are over 100 billion of these weights, so it's very hard for us to understand what they're doing. I think the main weakness is that it's such a complex system that we don't understand it fully. And if your job is to find out new facts, that's not something these systems are in a good position to do. I was talking to people in the British government's Foreign Office the other day, and I was saying that our job, whether we work in government, the intelligence services or journalism, is to find new facts, and they've got to be right. You don't want to just take any old stuff coming out of one of these systems. If accuracy matters, these systems are maybe not so great.
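The tight feedback loop described above is easy to demonstrate: a slightly wrong program fails loudly the moment it runs, whereas a slightly wrong sentence does not. A hypothetical Python example (the function names are invented for illustration):

```python
# The kind of slip a model (or a human) might make:
# adding the string itself to a counter instead of its length.
def total_length(words):
    total = 0
    for w in words:
        total += w          # bug: should be len(w)
    return total

try:
    total_length(["foo", "bar"])
except TypeError as err:
    # The interpreter chokes immediately, so the mistake surfaces at once.
    print("caught:", err)

# The corrected version passes a quick sanity check straight away,
# which is exactly the tight feedback loop being described.
def total_length_fixed(words):
    return sum(len(w) for w in words)

assert total_length_fixed(["foo", "bar"]) == 6
```

A hallucinated "fact" in prose has no equivalent of that instant TypeError, which is one reason code generation is a comparatively safe use of these systems.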
The reliability of the models needs to be improved before they start automating huge numbers of processes. There's a huge amount of economic activity that will be affected by this. A paper put out by some economists at OpenAI estimated that around 20% of the US workforce could have around 50% of their tasks affected by generative AI in the next few years. A lot of the tasks we do on a day-to-day basis could be helped by these models.

There's some research in the economics of innovation that says that if you want to get an intelligence explosion, or exponentially increasing economic growth in any given domain, you need to automate the entire process. If you've only automated 90% of it, or even 99%, that doesn't get you nearly the same effect: the slowest part of the process, which is the human, acts as a rate-determining step. We'll end up slowing things down, and that's what is likely to happen in my mind. We'll use AI to help us with research, which we are already doing, but it still won't be able to get 100% of the way there, so ultimately the pace of progress continues as it has been. The models would have become superintelligent and turned everything into paperclips if it wasn't for pesky humans getting in the way. Thank you.
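The "rate-determining step" argument above is essentially Amdahl's law: if only a fraction of a process is automated, the leftover human fraction caps the overall gain no matter how fast the automated part becomes. A quick worked illustration:

```python
def overall_speedup(p, s):
    """Amdahl's law: p = fraction of the process automated,
    s = speedup factor on that automated fraction."""
    return 1 / ((1 - p) + p / s)

# Even with an effectively infinite speedup on the automated part,
# the remaining human fraction bounds the whole process.
for p in (0.90, 0.99):
    print(f"automate {p:.0%}: at best {1 / (1 - p):.0f}x faster overall")

print(round(overall_speedup(0.90, 1e9)))  # ≈ 10
print(round(overall_speedup(0.99, 1e9)))  # ≈ 100
```

Automating 90% of a process yields at most a 10x speedup, and 99% at most 100x; only automating the entire process removes the cap, which is the point being made about exponential growth.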
Section 1: Important Words
1. Generative - 生成式的,有生成能力的。Example sentence: Generative AI has become a popular topic in the tech industry.
2. Coherent - 连贯的,一致的。Example sentence: The AI's output was coherent and easy to understand.
3. Churn (through) - 快速处理(大量信息);翻搅。Example sentence: The language model was able to churn through a mountain of unlabelled data.
4. Baffled - 迷惑的。Example sentence: Many people are still baffled by the technology behind large language models.
5. Pass - 通过(考试等)。Example sentence: The AI was able to pass the US Medical Licensing exam.
6. Code - 代码,程序。Example sentence: The AI is currently being used to write code.
7. Transparency - 透明度,清晰度。Example sentence: There is a lack of transparency when it comes to understanding how the AI works.
8. Accuracy - 准确度,精度。Example sentence: Accuracy matters when it comes to using AI in important tasks.
9. Reliability - 可靠性。Example sentence: The reliability of the models needs to be improved before they can be widely used.
10. Automate - 使……自动化。Example sentence: Generative AI has the potential to automate many processes.
Section 2: Important Grammar Points
1. Passive voice - used to describe actions that are done to the subject of the sentence, rather than the subject performing the action. Example sentence: The US Medical Licensing exam was passed by the AI.
2. Comparative adjectives - used to compare the qualities of two or more things. Example sentence: GPT-3.5 was much more capable than its predecessor.
3. Conditional statements - used to describe actions or events that are dependent on a certain condition being met. Example sentence: If the code is slightly wrong, the interpreter or compiler chokes on it.
Section 3: Full Translation
生成式人工智能是新一轮在线工具背后的技术,全球有数百万人在使用这些工具。其中一些可以用对话式的语言回答涉及大量话题的提问,另一些则可以根据简短的文本提示生成逼真的照片。随着这项技术被越来越广泛地部署,它目前的优势和劣势是什么?《经济学人》的科学编辑Alok Jha与副主编Tom Standage、科学记者Abbey Vertex以及全球商业与经济记者Arjun Ramani共同讨论了这个人工智能的新时代。

2017年发生的事情是,谷歌的一些研究人员提出了一种更好的注意力机制,称为Transformer,这也是GPT中T的含义。这实质上使这些系统变得更好:它们可以生成更长的连贯输出,无论是文本、计算机代码还是其他任何东西。所以这是一项技术突破,它花了一段时间才在社区中传播开来。这是发生变化的事情之一。另一个变化是这项技术变得更加可见。去年,一个能力更强的模型GPT-3.5以ChatGPT的形式发布,任何人都可以注册使用。一旦排到等待名单的前面,你就可以和它对话了。你应该听说过这些数字:前两个月内就有1亿人试用了它,这被认为是历史上消费类技术被采用得最快的一次。因此,真正的变化在于,突然之间任何人都有办法使用这项技术。人们想出了各种令人惊叹的用途,让它做各种非凡的事情,这才真正让它声名鹊起。

我认为这些大型语言模型的巨大优势之一,是它们能够处理海量的未标注数据。过去的人工智能总是需要数据和对应的标签,这就需要人工逐条进行标注。而大型语言模型相当于对数亿个单词拍了一张模糊的快照,而且它的效果居然出奇地好,很多人至今仍感到困惑。它能生成令人信服的文本,非常擅长模式匹配。风格迁移是另一项本领:你可以让它以一个14世纪海盗的口吻、带着来自巴哈马的爱尔兰口音写一封情书。它也相当擅长通过标准化考试:它通过了美国医师执照考试,也通过了一些法律考试。基本上,它非常擅长与文本有关的事情。目前,一个重要的机会是编写代码。用这些系统写代码的好处在于,如果代码稍有错误,你马上就会发现,因为解释器或编译器会直接报错,或者代码的输出与你的预期不符。因此你拥有一个非常紧密的反馈循环,稍有错误就能很快发现。

在劣势方面,我认为其中之一是缺乏透明度,它有点像一个黑匣子。你可以查看内部发生的事情,也就是那些注意力权重,但它们对我们来说意义不大。这些权重超过1000亿个,我们很难理解它们在做什么。我认为主要的弱点在于它是一个如此复杂的系统,我们并不完全理解它。如果你的工作是发现新的事实,这并不是这些系统擅长的事情。前几天我和英国政府外交部的人交谈时说,无论我们是在政府、情报机构还是新闻界工作,我们的工作都是发现新的事实,而且必须是正确的。你不能照单全收这些系统输出的任何内容。如果准确性很重要,那么这些系统可能就不那么好用了。

在这些模型开始自动化大量流程之前,它们的可靠性需要得到改善。有大量的经济活动将受到影响。OpenAI的一些经济学家发表的论文估计,未来几年内,美国约20%的劳动力可能有约50%的工作任务受到生成式AI的影响。我们日常做的许多任务都可以借助这些模型完成。创新经济学领域的一些研究指出,如果你想在任何一个领域实现智能爆炸,或呈指数增长的经济增长,你就需要把整个流程都自动化。如果你只自动化了90%,甚至99%,效果也远远达不到同样的程度:流程中最慢的环节,也就是人,会成为速率决定步骤。我们最终会拖慢整个进程,在我看来这正是可能发生的情况。我们会用AI来辅助研究,我们现在已经在这样做了,但它仍然无法包办100%的工作,所以归根结底,进步的速度会一如既往。要不是碍事的人类挡了路,这些模型早就变成超级智能,把一切都变成回形针了。谢谢。