【TED Talk Transcript】The inside story of ChatGPT's astonishing potential
Speaker: Greg Brockman
Title: The inside story of ChatGPT's astonishing potential
Summary: In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT's development and get Brockman's take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.
*************************************************************************
【1】We started OpenAI seven years ago because we felt like something really interesting was happening in AI and we wanted to help steer it in a positive direction.
【2】It's honestly just really amazing to see how far this whole field has come since then.
【3】And it's really gratifying to hear from people like Raymond who are using the technology we are building, and others, for so many wonderful things.
【4】We hear from people who are excited, we hear from people who are concerned, we hear from people who feel both those emotions at once.
【5】And honestly, that's how we feel.
【6】Above all, it feels like we're entering an historic period right now where we as a world are going to define a technology that will be so important for our society going forward.
【7】And I believe that we can manage this for good.
【8】So today, I want to show you the current state of that technology and some of the underlying design principles that we hold dear.
【9】So the first thing I'm going to show you is what it's like to build a tool for an AI rather than building it for a human.
【10】So we have a new DALL-E model, which generates images, and we are exposing it as an app for ChatGPT to use on your behalf.
【11】And you can do things like ask, you know, suggest a nice post-TED meal and draw a picture of it.
【12】(Laughter) Now you get all of the, sort of, ideation and creative back-and-forth and taking care of the details for you that you get out of ChatGPT.
【13】And here we go, it's not just the idea for the meal, but a very, very detailed spread.
【14】So let's see what we're going to get.
【15】But ChatGPT doesn't just generate images in this case -- sorry, it doesn't just generate text, it also generates an image.
【16】And that is something that really expands the power of what it can do on your behalf in terms of carrying out your intent.
【17】And I'll point out, this is all a live demo.
【18】This is all generated by the AI as we speak.
【19】So I actually don't even know what we're going to see.
【20】This looks wonderful.
【21】(Applause) I'm getting hungry just looking at it.
【22】Now we've extended ChatGPT with other tools too, for example, memory.
【23】You can say "save this for later."
【24】And the interesting thing about these tools is they're very inspectable.
【25】So you get this little pop up here that says "use the DALL-E app."
【26】And by the way, this is coming to you, all ChatGPT users, over upcoming months.
【27】And you can look under the hood and see that what it actually did was write a prompt just like a human could.
【28】And so you sort of have this ability to inspect how the machine is using these tools, which allows us to provide feedback to them.
【29】Now it's saved for later, and let me show you what it's like to use that information and to integrate with other applications too.
【30】You can say, “Now make a shopping list for the tasty thing I was suggesting earlier.” And make it a little tricky for the AI.
【31】"And tweet it out for all the TED viewers out there."
【32】(Laughter) So if you do make this wonderful, wonderful meal, I definitely want to know how it tastes.
【33】But you can see that ChatGPT is selecting all these different tools without me having to tell it explicitly which ones to use in any situation.
【34】And this, I think, shows a new way of thinking about the user interface.
【35】Like, we are so used to thinking of, well, we have these apps, we click between them, we copy/paste between them, and usually it's a great experience within an app as long as you kind of know the menus and know all the options.
【36】Yes, I would like you to.
【37】Yes, please.
【38】Always good to be polite.
【39】(Laughter) And by having this unified language interface on top of tools, the AI is able to sort of take away all those details from you.
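The unified language interface described here can be pictured as a small dispatch loop: the model emits a structured tool call, the harness runs it, and the call itself stays inspectable. A minimal sketch, where the tool names and the `model_choose_tool` function are hypothetical stand-ins for the real model and plug-ins:

```python
import json

# Hypothetical stand-in for the model: given the user's request, it returns a
# structured tool call, much as ChatGPT writes a prompt for the DALL-E app.
def model_choose_tool(user_request: str) -> dict:
    if "picture" in user_request or "draw" in user_request:
        return {"tool": "dalle", "args": {"prompt": "a detailed post-TED meal spread"}}
    if "save" in user_request:
        return {"tool": "memory", "args": {"key": "meal", "value": user_request}}
    return {"tool": "reply", "args": {"text": "Here is my suggestion..."}}

# The harness maps tool names to implementations (all mocked here).
TOOLS = {
    "dalle":  lambda args: f"[image generated from prompt: {args['prompt']}]",
    "memory": lambda args: f"[saved '{args['key']}' for later]",
    "reply":  lambda args: args["text"],
}

def dispatch(user_request: str) -> str:
    call = model_choose_tool(user_request)
    # The tool call is plain data, so it can be shown to the user
    # (the "use the DALL-E app" pop-up) before the harness runs it.
    print(json.dumps(call))
    return TOOLS[call["tool"]](call["args"])

result = dispatch("suggest a nice post-TED meal and draw a picture of it")
```

The printed JSON plays the role of the pop-up in the demo: plain data the user can inspect before anything executes.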
【40】So you don't have to be the one who spells out every single sort of little piece of what's supposed to happen.
【41】And as I said, this is a live demo, so sometimes the unexpected will happen to us.
【42】But let's take a look at the Instacart shopping list while we're at it.
【43】And you can see we sent a list of ingredients to Instacart.
【44】Here's everything you need.
【45】And the thing that's really interesting is that the traditional UI is still very valuable, right?
【46】If you look at this, you still can click through it and sort of modify the actual quantities.
【47】And that's something that I think shows that they're not going away, traditional UIs.
【48】It's just we have a new, augmented way to build them.
【49】And now we have a tweet that's been drafted for our review, which is also a very important thing.
【50】We can click “run,” and there we are, we’re the manager, we’re able to inspect, we're able to change the work of the AI if we want to.
【51】And so after this talk, you will be able to access this yourself.
【52】And there we go.
【53】Cool.
【54】Thank you, everyone.
【55】(Applause) So we’ll cut back to the slides.
【56】Now, the important thing about how we build this, it's not just about building these tools.
【57】It's about teaching the AI how to use them.
【58】Like, what do we even want it to do when we ask these very high-level questions?
【59】And to do this, we use an old idea.
【60】If you go back to Alan Turing's 1950 paper on the Turing test, he says, you'll never program an answer to this.
【61】Instead, you can learn it.
【62】You could build a machine, like a human child, and then teach it through feedback.
【63】Have a human teacher who provides rewards and punishments as it tries things out and does things that are either good or bad.
【64】And this is exactly how we train ChatGPT.
【65】It's a two-step process.
【66】First, we produce what Turing would have called a child machine through an unsupervised learning process.
【67】We just show it the whole world, the whole internet and say, “Predict what comes next in text you’ve never seen before.”
【68】And this process imbues it with all sorts of wonderful skills.
【69】For example, if you're shown a math problem, the only way to actually complete that math problem, to say what comes next, that green nine up there, is to actually solve the math problem.
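The "predict what comes next" objective can be seen in miniature with a toy counter of token successions. A real model learns a neural network rather than a lookup table, but the incentive is the same: the only way to continue "equals ..." correctly is to have absorbed the regularity behind it.

```python
from collections import Counter, defaultdict

# Toy illustration of the pretraining objective: tally which token follows
# which in a corpus, then predict the most likely continuation.
corpus = "two plus seven equals nine . three plus six equals nine .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    # Scoring well after "equals" requires having absorbed the pattern in the
    # data -- trivially here; at scale, the arithmetic itself.
    return follows[token].most_common(1)[0][0]

print(predict_next("equals"))  # -> "nine"
```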
【70】But we actually have to do a second step, too, which is to teach the AI what to do with those skills.
【71】And for this, we provide feedback.
【72】We have the AI try out multiple things, give us multiple suggestions, and then a human rates them, says “This one’s better than that one.”
【73】And this reinforces not just the specific thing that the AI said, but very importantly, the whole process that the AI used to produce that answer.
【74】And this allows it to generalize.
【75】It allows it to teach, to sort of infer your intent and apply it in scenarios that it hasn't seen before, that it hasn't received feedback.
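The feedback step described here is, in essence, learning a reward model from pairwise human rankings. A minimal sketch with hypothetical answer features and a Bradley-Terry style update (the production pipeline is far more involved than this):

```python
import numpy as np

# Tiny reward model: a linear score over 3 made-up answer features.
w = np.zeros(3)

def reward(features: np.ndarray) -> float:
    return float(w @ features)

def update(preferred: np.ndarray, rejected: np.ndarray, lr: float = 0.5) -> None:
    """One feedback step: a human said preferred beats rejected."""
    global w
    # Probability the model already agrees with the human ranking (sigmoid of
    # the score gap), as in a Bradley-Terry pairwise model.
    p = 1.0 / (1.0 + np.exp(reward(rejected) - reward(preferred)))
    # Gradient step on -log p: push preferred features up, rejected down.
    w += lr * (1.0 - p) * (preferred - rejected)

better = np.array([1.0, 0.2, 0.0])  # e.g. an answer that pushes back on bad math
worse  = np.array([0.0, 0.9, 1.0])  # e.g. an answer that plays along

for _ in range(20):
    update(better, worse)

print(reward(better), reward(worse))  # preferred now scores higher
```

Because the update rewards the features of the whole answer, not one memorized string, the learned scores generalize to answers the human never rated.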
【76】Now, sometimes the things we have to teach the AI are not what you'd expect.
【77】For example, when we first showed GPT-4 to Khan Academy, they said, "Wow, this is so great. We're going to be able to teach students wonderful things.
【78】Only one problem, it doesn't double-check students' math.
【79】If there's some bad math in there, it will happily pretend that one plus one equals three and run with it."
【80】So we had to collect some feedback data.
【81】Sal Khan himself was very kind and offered 20 hours of his own time to provide feedback to the machine alongside our team.
【82】And over the course of a couple of months we were able to teach the AI that, "Hey, you really should push back on humans in this specific kind of scenario."
【83】And we've actually made lots and lots of improvements to the models this way.
【84】And when you push that thumbs down in ChatGPT, that actually is kind of like sending up a bat signal to our team to say, “Here’s an area of weakness where you should gather feedback.”
【85】And so when you do that, that's one way that we really listen to our users and make sure we're building something that's more useful for everyone.
【86】Now, providing high-quality feedback is a hard thing.
【87】If you think about asking a kid to clean their room, if all you're doing is inspecting the floor, you don't know if you're just teaching them to stuff all the toys in the closet.
【88】This is a nice DALL-E-generated image, by the way.
【89】And the same sort of reasoning applies to AI.
【90】As we move to harder tasks, we will have to scale our ability to provide high-quality feedback.
【91】But for this, the AI itself is happy to help.
【92】It's happy to help us provide even better feedback and to scale our ability to supervise the machine as time goes on.
【93】And let me show you what I mean.
【94】For example, you can ask GPT-4 a question like this, of how much time passed between these two foundational blogs on unsupervised learning and learning from human feedback.
【95】And the model says two months passed.
【96】But is it true?
【97】Like, these models are not 100-percent reliable, although they’re getting better every time we provide some feedback.
【98】But we can actually use the AI to fact-check.
【99】And it can actually check its own work.
【100】You can say, fact-check this for me.
【101】Now, in this case, I've actually given the AI a new tool.
【102】This one is a browsing tool where the model can issue search queries and click into web pages.
【103】And it actually writes out its whole chain of thought as it does it.
【104】It says, I’m just going to search for this and it actually does the search.
【105】It then finds the publication date and the search results.
【106】It then is issuing another search query.
【107】It's going to click into the blog post.
【108】And all of this you could do, but it’s a very tedious task.
【109】It's not a thing that humans really want to do.
【110】It's much more fun to be in the driver's seat, to be in this manager's position where you can, if you want, triple-check the work.
【111】And out come citations so you can actually go and very easily verify any piece of this whole chain of reasoning.
【112】And it actually turns out two months was wrong.
【113】Two months and one week, that was correct.
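Once the browsing tool has surfaced the publication dates, the check itself is simple date arithmetic. With illustrative dates of April 6 and June 13, 2017 for the two posts (assumed here, not stated in the talk), the gap works out to two months and one week rather than two months:

```python
from datetime import date

# Illustrative publication dates for the two foundational blog posts; the
# browsing tool would extract these from the pages it clicked into.
unsupervised_post = date(2017, 4, 6)
human_feedback_post = date(2017, 6, 13)

gap = human_feedback_post - unsupervised_post
print(gap.days)  # 68 days: two months (Apr 6 -> Jun 6 is 61 days) plus one week
```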
【114】(Applause) And we'll cut back to the side.
(掌声) 我们会切到一边。
【115】And so thing that's so interesting to me about this whole process is that it’s this many-step collaboration between a human and an AI.
所以我很感兴趣的事情 关于整个过程 是这样吗??这是多步骤合作吗 在人类和人工智能之间。
【116】Because a human, using this fact-checking tool is doing it in order to produce data for another AI to become more useful to a human.
因为一个人,使用 这个事实核查工具 是为了产生数据 让另一个人工智能成为 对人类更有用。
【117】And I think this really shows the shape of something that we should expect to be much more common in the future, where we have humans and machines kind of very carefully
我认为这确实表明了 某物的形状 我们应该期待的 在未来更加普遍, 我们有人类的地方 和机器
【118】and delicately designed in how they fit into a problem and how we want to solve that problem.
设计精致 在他们如何融入问题中 以及我们想要的方式 来解决这个问题。
【119】We make sure that the humans are providing the management, the oversight, the feedback, and the machines are operating in a way that's inspectable and trustworthy.
我们确保人类提供 管理、监督, 反馈, 机器正在运行 以一种可检查的方式 值得信赖。
【120】And together we're able to actually create even more trustworthy machines.
我们能够一起创造 甚至更值得信赖的机器。
【121】And I think that over time, if we get this process right, we will be able to solve impossible problems.
我认为随着时间的推移, 如果我们把这个过程做好, 我们将能够解决 不可能的问题。
【122】And to give you a sense of just how impossible I'm talking, I think we're going to be able to rethink almost every aspect of how we interact with computers.
给你一种感觉 我说话是多么的不可能, 我想我们会 重新思考几乎所有方面 我们如何与计算机交互。
【123】For example, think about spreadsheets.
例如,想想电子表格。
【124】They've been around in some form since, we'll say, 40 years ago with VisiCalc.
从那以后,他们一直以某种形式存在, 我们会说,40年前的VisiCalc。
【125】I don't think they've really changed that much in that time.
我不认为他们真的 在那段时间里改变了那么多。
【126】And here is a specific spreadsheet of all the AI papers on the arXiv for the past 30 years.
这是一个具体的电子表格 所有关于arXiv的人工智能论文 在过去的30年里。
【127】There's about 167,000 of them.
大约有16.7万人。
【128】And you can see there the data right here.
你可以在这里看到数据。
【129】But let me show you the ChatGPT take on how to analyze a data set like this.
但让我给你看看ChatGPT的拍摄 关于如何分析这样的数据集。
【130】So we can give ChatGPT access to yet another tool, this one a Python interpreter, so it’s able to run code, just like a data scientist would.
所以我们可以给ChatGPT 访问另一个工具, 这是一个Python解释器, 所以它??s能够运行代码, 就像数据科学家一样。
【131】And so you can just literally upload a file and ask questions about it.
所以你可以 上传文件 并就此提出问题。
【132】And very helpfully, you know, it knows the name of the file and it's like, "Oh, this is CSV," comma-separated value file, "I'll parse it for you."
而且非常有用,你知道,它知道 文件的名称,就像, “哦,这是CSV,” 逗号分隔的值文件, “我帮你分析一下。”
【133】The only information here is the name of the file, the column names like you saw and then the actual data.
这里唯一的信息 是文件的名称, 你看到的列名 然后是实际数据。
【134】And from that it's able to infer what these columns actually mean.
由此可以推断 这些列的实际含义。
【135】Like, that semantic information wasn't in there.
比如,语义信息 不在里面。
【136】It has to sort of, put together its world knowledge of knowing that, “Oh yeah, arXiv is a site that people submit papers and therefore that's what these things are and that these are integer values and so therefore it's a number of authors in the paper,"
必须把它放在一起 它的世界知识知道, ??哦,是的,arXiv是一个网站 人们提交论文 因此,这些东西就是这样 并且这些是整数值 因此它是一个数字 论文作者,”
【137】like all of that, that’s work for a human to do, and the AI is happy to help with it.
就像所有这些一样??s工作 对于人类来说, 人工智能很乐意提供帮助。
【138】Now I don't even know what I want to ask.
现在我甚至不知道我想问什么。
【139】So fortunately, you can ask the machine, "Can you make some exploratory graphs?"
所以幸运的是,你可以问机器, “你能做一些探索性的图表吗?”
【140】And once again, this is a super high-level instruction with lots of intent behind it.
再一次,这是一个超高水平的 背后有很多意图的指令。
【141】But I don't even know what I want.
但我甚至不知道自己想要什么。
【142】And the AI kind of has to infer what I might be interested in.
人工智能必须进行推断 我可能感兴趣的东西。
【143】And so it comes up with some good ideas, I think.
于是它出现了 我想有一些好主意。
【144】So a histogram of the number of authors per paper, time series of papers per year, word cloud of the paper titles.
所以这个数字的直方图 每篇论文的作者数量, 每年论文的时间序列, 论文标题的单词云。
【145】All of that, I think, will be pretty interesting to see.
所有这些,我认为, 将会非常有趣。
【146】And the great thing is, it can actually do it.
最棒的是, 它实际上可以做到。
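Under the hood, the Python-interpreter tool is doing ordinary data-science work. A sketch of the same exploratory cuts on a toy stand-in for the arXiv spreadsheet (the real one has about 167,000 rows; the column names here are assumed to mirror the demo):

```python
import io
import pandas as pd

# Tiny stand-in for the uploaded CSV; rows and columns are hypothetical.
csv_text = """title,year,num_authors
Attention Is All You Need,2017,8
Deep Residual Learning,2015,4
Language Models are Few-Shot Learners,2020,31
A Toy Paper,2017,3
Another Toy Paper,2020,3
"""
df = pd.read_csv(io.StringIO(csv_text))

# The same exploratory cuts the model proposed: authors-per-paper histogram
# data and a papers-per-year time series.
authors_hist = df["num_authors"].value_counts().sort_index()
papers_per_year = df.groupby("year").size()

print(authors_hist)
print(papers_per_year)
```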
【147】Here we go, a nice bell curve.
我们开始了,一个漂亮的钟形曲线。
【148】You see that three is kind of the most common.
你看到了吗 是最常见的。
【149】It's going to then make this nice plot of the papers per year.
然后它会制作出一个很好的情节 每年的论文数量。
【150】Something crazy is happening in 2023, though.
有点疯狂 不过,这将发生在2023年。
【151】Looks like we were on an exponential and it dropped off the cliff.
看起来我们正处于指数级 然后它从悬崖上掉了下来。
【152】What could be going on there?
那里可能发生了什么?
【153】By the way, all this is Python code, you can inspect.
对了,所有这些 是Python代码,您可以检查。
【154】And then we'll see word cloud.
然后我们将看到单词cloud。
【155】So you can see all these wonderful things that appear in these titles.
所以你可以看到所有这些美妙的东西 出现在这些标题中。
【156】But I'm pretty unhappy about this 2023 thing.
但我很不高兴 关于2023年的事情。
【157】It makes this year look really bad.
这让今年看起来很糟糕。
【158】Of course, the problem is that the year is not over.
当然,问题是 这一年还没有结束。
【159】So I'm going to push back on the machine.
所以我要重新启动机器。
【160】[Waitttt that's not fair!!!
这不公平!!!
【161】2023 isn't over.
2023年还没有结束。
【162】What percentage of papers in 2022 were even posted by April 13?] So April 13 was the cut-off date I believe.
2022年论文占比 甚至在4月13日之前发布?] 所以4月13日是截止日期 日期我相信。
【163】Can you use that to make a fair projection?
你能用它做吗 一个公平的预测?
【164】So we'll see, this is the kind of ambitious one.
所以我们拭目以待,这是 那种雄心勃勃的人。
【165】(Laughter) So you know, again, I feel like there was more I wanted out of the machine here.
(众笑) 所以你知道, 再说一次,我觉得我想要更多 从机器里出来。
【166】I really wanted it to notice this thing, maybe it's a little bit of an overreach for it to have sort of, inferred magically that this is what I wanted.
我真的很想让它注意到这件事, 也许有点 对它的过度反应 有点,神奇地推断 这就是我想要的。
【167】But I inject my intent, I provide this additional piece of, you know, guidance.
但我注入了我的意图, 我提供这个附加件 你知道,指导。
【168】And under the hood, the AI is just writing code again, so if you want to inspect what it's doing, it's very possible.
在引擎盖下, AI只是再次编写代码, 所以,如果你想检查它在做什么, 这是很有可能的。
【169】And now, it does the correct projection.
现在,它做了正确的投影。
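The "fair projection" the prompt asks for is one line of arithmetic: scale the partial-year count by the fraction of last year's papers that had appeared by the same cut-off date. With hypothetical counts:

```python
# All counts below are made up for illustration; the real ones come from
# the uploaded arXiv spreadsheet.
papers_2022_total = 50000
papers_2022_by_apr13 = 14000   # posted on or before April 13, 2022
papers_2023_by_apr13 = 16000   # all we can observe so far in 2023

# Fraction of a typical year that has elapsed by the cut-off, estimated
# from last year's posting pattern rather than the calendar.
fraction_of_year = papers_2022_by_apr13 / papers_2022_total  # 0.28

projected_2023 = papers_2023_by_apr13 / fraction_of_year
print(round(projected_2023))  # ~57143: the "cliff" was just an unfinished year
```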
【170】(Applause) If you noticed, it even updates the title.
(掌声) 如果你注意到了,它甚至会更新标题。
【171】I didn't ask for that, but it know what I want.
我没有要求, 但它知道我想要什么。
【172】Now we'll cut back to the slide again.
现在我们将再次回到幻灯片。
【173】This slide shows a parable of how I think we ...
这张幻灯片展示了一个寓言 我认为我们。。。
【174】A vision of how we may end up using this technology in the future.
我们最终会如何 在未来使用这项技术。
【175】A person brought his very sick dog to the vet, and the veterinarian made a bad call to say, “Let’s just wait and see.”
一个人带来 他那只病得很重的狗去看兽医, 兽医打了个糟糕的电话 也就是说,??允许??让我们拭目以待。??
【176】And the dog would not be here today had he listened.
狗不会 如果他听了的话,今天就在这里。
【177】In the meanwhile, he provided the blood test, like, the full medical records, to GPT-4, which said, "I am not a vet, you need to talk to a professional, here are some hypotheses."
与此同时, 他提供了血液测试, 比如完整的医疗记录,到GPT-4, 上面写着:“我不是兽医, 你需要和专业人士谈谈, 以下是一些假设。
【178】He brought that information to a second vet who used it to save the dog's life.
他带来了这些信息 给第二个兽医 谁用它救了狗的命。
【179】Now, these systems, they're not perfect.
现在,这些系统并不完美。
【180】You cannot overly rely on them.
你不能过度依赖他们。
【181】But this story, I think, shows that a human with a medical professional and with ChatGPT as a brainstorming partner was able to achieve an outcome that would not have happened otherwise.
但我认为,这个故事表明 一个有医学专业知识的人 和ChatGPT 作为头脑风暴的合作伙伴 能够取得成果 否则就不会发生这种情况。
【182】I think this is something we should all reflect on, think about as we consider how to integrate these systems into our world.
我觉得这很重要 我们都应该反思, 考虑到我们的想法 如何集成这些系统 进入我们的世界。
【183】And one thing I believe really deeply, is that getting AI right is going to require participation from everyone.
有一件事我深信不疑, 让人工智能变得正确吗 需要每个人的参与。
【184】And that's for deciding how we want it to slot in, that's for setting the rules of the road, for what an AI will and won't do.
这是为了决定 我们希望它如何进入, 这是为了制定道路规则, 人工智能会做什么,不会做什么。
【185】And if there's one thing to take away from this talk, it's that this technology just looks different.
如果有一件事 为了摆脱这场谈话, 就是这项技术 只是看起来不一样。
【186】Just different from anything people had anticipated.
与任何东西都不一样 人们早就预料到了。
【187】And so we all have to become literate.
因此,我们都必须识字。
【188】And that's, honestly, one of the reasons we released ChatGPT.
老实说,这是一个 我们发布ChatGPT的原因之一。
【189】Together, I believe that we can achieve the OpenAI mission of ensuring that artificial general intelligence benefits all of humanity.
一起,我相信我们可以 实现OpenAI任务 确保 一般情报 造福全人类。
【190】Thank you.
非常感谢。
【191】(Applause)
(掌声)
【192】(Applause ends) Chris Anderson: Greg.
(掌声结束) 克里斯·安德森:格雷格。
【193】Wow.
哇!
【194】I mean ...
我的意思是。。。
【195】I suspect that within every mind out here there's a feeling of reeling.
我怀疑在这里的每个人心中 有一种摇摇欲坠的感觉。
【196】Like, I suspect that a very large number of people viewing this, you look at that and you think, “Oh my goodness, pretty much every single thing about the way I work, I need to rethink."
比如,我怀疑 观看该节目的人数, 你看着它就会想, ??天哪, 几乎每一件事 关于我的工作方式,我需要重新思考。
【197】Like, there's just new possibilities there.
就像,只是 那里有新的可能性。
【198】Am I right?
我说得对吗?
【199】Who thinks that they're having to rethink the way that we do things?
谁认为他们必须重新思考 我们做事的方式?
【200】Yeah, I mean, it's amazing, but it's also really scary.
是的,我的意思是,这太神奇了, 但这也很可怕。
【201】So let's talk, Greg, let's talk.
让我们谈谈,格雷格,让我们谈谈。
【202】I mean, I guess my first question actually is just how the hell have you done this?
我的意思是,我想 我的第一个问题实际上只是 你到底是怎么做到的?
【203】(Laughter) OpenAI has a few hundred employees.
(众笑) OpenAI有几百名员工。
【204】Google has thousands of employees working on artificial intelligence.
谷歌有数千名员工 致力于人工智能。
【205】Why is it you who's come up with this technology that shocked the world?
为什么上来的是你 利用这项技术 震惊了世界?
【206】Greg Brockman: I mean, the truth is, we're all building on shoulders of giants, right, there's no question.
格雷格·布罗克曼:我的意思是,事实是, 我们都在肩上建设 巨人,对吧,这是毫无疑问的。
【207】If you look at the compute progress, the algorithmic progress, the data progress, all of those are really industry-wide.
如果你看看计算进度, 算法的进展, 数据进度, 所有这些都是全行业的。
【208】But I think within OpenAI, we made a lot of very deliberate choices from the early days.
但我认为在OpenAI中, 我们做了很多非常深思熟虑的事情 早期的选择。
【209】And the first one was just to confront reality as it lays.
第一个只是 面对现实。
【210】And that we just thought really hard about like: What is it going to take to make progress here?
我们只是想 真的很难,比如: 需要什么 在这里取得进展?
【211】We tried a lot of things that didn't work, so you only see the things that did.
我们尝试了很多不起作用的方法, 所以你只能看到发生的事情。
【212】And I think that the most important thing has been to get teams of people who are very different from each other to work together harmoniously.
我认为最重要的是 一直在争取团队成员 彼此非常不同 和谐地合作。
【213】CA: Can we have the water, by the way, just brought here?
CA:我们可以喝水吗, 顺便问一下,刚被带到这里?
【214】I think we're going to need it, it's a dry-mouth topic.
我认为我们需要它, 这是一个口干舌燥的话题。
【215】But isn't there something also just about the fact that you saw something in these language models that meant that if you continue to invest in them and grow them, that something at some point might emerge?
但不是也有什么吗 差不多就是事实 你看到了什么 在这些语言模型中 这意味着如果你继续 投资并发展它们, 那是什么 在某个时刻可能会出现?
【216】GB: Yes.
GB:是的。
【217】And I think that, I mean, honestly, I think the story there is pretty illustrative, right?
我认为,老实说, 我认为那里的故事 很能说明问题,对吧?
【218】I think that high level, deep learning, like we always knew that was what we wanted to be, was a deep learning lab, and exactly how to do it?
我认为高水平、深度学习, 就像我们一直知道的那样 我们想要成为什么样的人, 是一个深度学习实验室, 具体怎么做?
【219】I think that in the early days, we didn't know.
我认为在早期, 我们不知道。
【220】We tried a lot of things, and one person was working on training a model to predict the next character in Amazon reviews, and he got a result where -- this is a syntactic process,
我们尝试了很多事情, 有一个人在工作 关于模型的训练 预测下一个角色 在亚马逊评论中, 他得到了一个结果-- 这是一个句法过程,
【221】you expect, you know, the model will predict where the commas go, where the nouns and verbs are.
你期待,你知道,模型 将预测逗号的位置, 名词和动词在哪里。
【222】But he actually got a state-of-the-art sentiment analysis classifier out of it.
但他实际上拥有最先进的技术 情绪分析分类器。
【223】This model could tell you if a review was positive or negative.
这个模型可以告诉你 评论是正面的还是负面的。
【224】I mean, today we are just like, come on, anyone can do that.
我的意思是,今天我们就像, 拜托,任何人都可以做到。
【225】But this was the first time that you saw this emergence, this sort of semantics that emerged from this underlying syntactic process.
但这是第一次 你看到了这种出现, 出现的这种语义 从这个潜在的句法过程中。
【226】And there we knew, you've got to scale this thing, you've got to see where it goes.
在那里我们知道, 你必须按比例缩放这个东西, 你得看看它会去哪里。
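The "sentiment neuron" anecdote can be illustrated with a fully synthetic probe: if one unit of the next-character model's hidden state happens to track sentiment, a single threshold on that unit already acts as a classifier. Everything below is made-up data standing in for real hidden states:

```python
import numpy as np

# Synthetic "hidden states" for 100 positive and 100 negative reviews;
# unit 5 plays the role of the hypothetical sentiment unit.
rng = np.random.default_rng(0)
pos = rng.normal(+1.0, 0.3, size=(100, 16))
neg = rng.normal(-1.0, 0.3, size=(100, 16))
pos[:, 5] = rng.normal(+2.0, 0.2, 100)
neg[:, 5] = rng.normal(-2.0, 0.2, 100)

states = np.vstack([pos, neg])
labels = np.array([1] * 100 + [0] * 100)

# Probe: threshold that single unit and see how well it separates sentiment.
unit = states[:, 5]
threshold = unit.mean()
accuracy = ((unit > threshold).astype(int) == labels).mean()
print(accuracy)
```

The semantics were never in the objective; they fall out of the syntactic prediction task, which is why the probe finds them after the fact.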
【227】CA: So I think this helps explain the riddle that baffles everyone looking at this, because these things are described as prediction machines.
CA:所以我认为这有助于解释 使人困惑的谜题 每个人都在看这个, 因为这些东西被描述了 作为预测机器。
【228】And yet, what we're seeing out of them feels ...
然而,我们所看到的 他们觉得。。。
【229】it just feels impossible that that could come from a prediction machine.
感觉不可能 可能来自预测机器。
【230】Just the stuff you showed us just now.
只是你刚才给我们看的东西。
【231】And the key idea of emergence is that when you get more of a thing, suddenly different things emerge.
以及出现的关键理念 当你得到更多的东西时, 突然出现了不同的事情。
【232】It happens all the time, ant colonies, single ants run around, when you bring enough of them together, you get these ant colonies that show completely emergent, different behavior.
它总是发生,蚁群, 单个蚂蚁四处奔跑, 当你把足够多的人聚集在一起时, 你会看到这些蚁群 完全突发的、不同的行为。
【233】Or a city where a few houses together, it's just houses together.
或者一个有几栋房子的城市, 只是房子在一起。
【234】But as you grow the number of houses, things emerge, like suburbs and cultural centers and traffic jams.
但随着房屋数量的增长, 事情出现了,比如郊区 文化中心和交通堵塞。
【235】Give me one moment for you when you saw just something pop that just blew your mind that you just did not see coming.
给我一点时间 当你看到一些流行的东西 这让你大吃一惊 你只是没有看到它的到来。
【236】GB: Yeah, well, so you can try this in ChatGPT, if you add 40-digit numbers -- CA: 40-digit?
GB:是的,嗯, 所以你可以在ChatGPT中尝试一下, 如果你加上40位数字-- CA:40位?
【237】GB: 40-digit numbers, the model will do it, which means it's really learned an internal circuit for how to do it.
GB:40位数字, 模型会做到这一点, 这意味着它真的学到了 如何做到这一点的内部电路。
【238】And the really interesting thing is actually, if you have it add like a 40-digit number plus a 35-digit number, it'll often get it wrong.
还有真正有趣的 事实上, 如果你把它加起来像一个40位的数字 加上一个35位数字, 它经常会出错。
【239】And so you can see that it's really learning the process, but it hasn't fully generalized, right?
所以你可以看到 学习该过程, 但它还没有完全概括,对吧?
【240】It's like you can't memorize the 40-digit addition table, that's more atoms than there are in the universe.
就像你记不住一样 40位加法表, 那是更多的原子 比宇宙中存在的还要多。
【241】So it had to have learned something general, but that it hasn't really fully yet learned that, Oh, I can sort of generalize this to adding arbitrary numbers of arbitrary lengths.
所以它必须学会 一般情况下, 但事实并非如此 充分了解到, 哦,我可以概括一下 添加任意数字 任意长度。
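A probe for this kind of length generalization is easy to construct: exact integer arithmetic supplies the ground truth, and the prompts vary the digit counts. The sketch below only builds the probe set; querying the real model with each prompt is the obvious next step:

```python
import random

random.seed(0)

def random_n_digit(n: int) -> int:
    """An exactly n-digit random integer (no leading zero)."""
    return random.randint(10 ** (n - 1), 10 ** n - 1)

def make_probe(d1: int, d2: int) -> tuple[str, int]:
    a, b = random_n_digit(d1), random_n_digit(d2)
    # Ground truth comes from exact integer math, so grading is trivial.
    return f"What is {a} + {b}?", a + b

# Same-length sums, which the model usually gets right, and mixed-length
# sums, which it often gets wrong.
matched = [make_probe(40, 40) for _ in range(5)]
mismatched = [make_probe(40, 35) for _ in range(5)]
print(matched[0][0])
```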
【242】CA: So what's happened here is that you've allowed it to scale up and look at an incredible number of pieces of text.
CA:那么这里发生了什么 是你允许它扩大规模 看着一个不可思议的 文本条数。
【243】And it is learning things that you didn't know that it was going to be capable of learning.
它在学习 你不知道 将有能力学习。
【244】GB Well, yeah, and it’s more nuanced, too.
GB嗯,是的,而且??它也更微妙。
【245】So one science that we’re starting to really get good at is predicting some of these emergent capabilities.
因此,我们??重新启动 真正擅长 正在预测其中的一些 应急能力。
【246】And to do that actually, one of the things I think is very undersung in this field is sort of engineering quality.
实际上,要做到这一点, 我认为 在这个领域表现很差 是一种工程质量。
【247】Like, we had to rebuild our entire stack.
就像,我们必须重建我们的整个堆栈。
【248】When you think about building a rocket, every tolerance has to be incredibly tiny.
当你考虑建造火箭时, 每一个容忍度都必须非常小。
【249】Same is true in machine learning.
机器学习也是如此。
【250】You have to get every single piece of the stack engineered properly, and then you can start doing these predictions.
你必须得到每一件 正确设计的堆叠, 然后你就可以开始了 做这些预测。
【251】There are all these incredibly smooth scaling curves.
所有这些都令人难以置信 平滑缩放曲线。
【252】They tell you something deeply fundamental about intelligence.
他们深深地告诉你一些事情 智力的基础。
【253】If you look at our GPT-4 blog post, you can see all of these curves in there.
如果你看看我们的GPT-4博客文章, 你可以在那里看到所有这些曲线。
【254】And now we're starting to be able to predict.
现在我们开始了 能够预测。
【255】So we were able to predict, for example, the performance on coding problems.
因此我们能够预测,例如, 编码问题的性能。
【256】We basically look at some models that are 10,000 times or 1,000 times smaller.
我们基本上看一些模型 那是一万次 或者小1000倍。
【257】And so there's something about this that is actually smooth scaling, even though it's still early days.
所以这里面有些东西 这实际上是平滑缩放, 尽管现在还为时过早。
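The prediction game with smaller models comes down to fitting a smooth curve and extrapolating. A sketch on synthetic data drawn from an exact power law (real curves are noisier, and the functional form is itself a modeling choice):

```python
import numpy as np

# Synthetic "smooth scaling curve": loss = a * compute^(-b) for four small
# training runs. The constants are made up for illustration.
compute = np.array([1e15, 1e16, 1e17, 1e18])
loss = 40.0 * compute ** -0.05

# Power laws are straight lines in log-log space, so a degree-1 fit works.
slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), 1)

def predict_loss(c: float) -> float:
    return 10 ** (intercept + slope * np.log10(c))

big_model_compute = 1e21  # 1,000x beyond anything in the fit
print(predict_loss(big_model_compute))
```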
【258】CA: So here is, one of the big fears then, that arises from this.
CA:所以,这是当时最大的恐惧之一, 由此产生的。
【259】If it’s fundamental to what’s happening here, that as you scale up, things emerge that you can maybe predict in some level of confidence, but it's capable of surprising you.
如果它??s基础 到什么??发生在这里, 随着规模的扩大, 事情发生了 你也许可以预测 在某种程度的置信度中, 但它能让你大吃一惊。
【260】Why isn't there just a huge risk of something truly terrible emerging?
为什么不存在巨大的风险 真正可怕的事情出现了吗?
【261】GB: Well, I think all of these are questions of degree and scale and timing.
GB:嗯,我认为所有这些 是学位问题 以及规模和时间安排。
【262】And I think one thing people miss, too, is sort of the integration with the world is also this incredibly emergent, sort of, very powerful thing too.
我认为人们也会错过一件事, 是一种和世界的融合 也是这种令人难以置信的紧急情况, 也是一种非常强大的东西。
【263】And so that's one of the reasons that we think it's so important to deploy incrementally.
这也是原因之一 我们认为这很重要 以增量部署。
【264】And so I think that what we kind of see right now, if you look at this talk, a lot of what I focus on is providing really high-quality feedback.
所以我认为我们所看到的 现在,如果你看看这个演讲, 我专注于提供 真正高质量的反馈。
【265】Today, the tasks that we do, you can inspect them, right?
今天,对于我们所做的任务,你是可以检查它们的,对吧?
【266】It's very easy to look at that math problem and be like, no, no, no, machine, seven was the correct answer.
看着那道数学题然后说"不,不,机器,正确答案是七",这是很容易的。
【267】But even summarizing a book, like, that's a hard thing to supervise.
但即使是总结一本书这样的事情,也是很难监督的。
【268】Like, how do you know if this book summary is any good?
比如,你怎么知道这本书的摘要好不好?
【269】You have to read the whole book.
你必须通读整本书。
【270】No one wants to do that.
没有人愿意那样做。
【271】(Laughter) And so I think that the important thing will be that we take this step by step.
(众笑) 所以我认为重要的是,我们要一步一步来。
【272】And that we say, OK, as we move on to book summaries, we have to supervise this task properly.
我们会说:好吧,当我们推进到书籍摘要时,我们必须正确地监督这项任务。
【273】We have to build up a track record with these machines that they're able to actually carry out our intent.
我们必须同这些机器一起建立起一种业绩记录,证明它们确实能够执行我们的意图。
【274】And I think we're going to have to produce even better, more efficient, more reliable ways of scaling this, sort of like making the machine be aligned with you.
我认为我们将不得不创造出更好、更高效、更可靠的扩展方式,有点像让机器与你保持一致。
【275】CA: So we're going to hear later in this session, there are critics who say that, you know, there's no real understanding inside, the system is going to always --
CA:所以我们将在本场稍后听到,有批评者说:你知道,系统内部没有真正的理解,这个系统将永远——
【276】we're never going to know that it's not generating errors, that it doesn't have common sense and so forth.
我们永远无法知道它没有在产生错误、它其实没有常识,等等。
【277】Is it your belief, Greg, that it is true at any one moment, but that the expansion of the scale and the human feedback that you talked about is basically going to take it on that journey
Greg,你是否相信:这在任何某一时刻都可能是事实,但你所说的规模的扩大和人类的反馈,基本上会带着它踏上那段旅程,
【278】of actually getting to things like truth and wisdom and so forth, with a high degree of confidence.
以高度的信心真正抵达真理与智慧之类的东西。
【279】Can you be sure of that?
你能确定吗?
【280】GB: Yeah, well, I think that the OpenAI, I mean, the short answer is yes, I believe that is where we're headed.
GB:是的,嗯,我认为OpenAI——我的意思是,简短的回答是肯定的,我相信这就是我们前进的方向。
【281】And I think that the OpenAI approach here has always been just like, let reality hit you in the face, right?
我认为OpenAI在这里的方法一直都是:让现实打在你脸上,对吧?
【282】It's like this field is the field of broken promises, of all these experts saying X is going to happen, Y is how it works.
这个领域就像是一个充满失信承诺的领域:所有这些专家都说X会发生、Y是它的运作方式。
【283】People have been saying neural nets aren't going to work for 70 years.
70年来,人们一直在说神经网络行不通。
【284】They haven't been right yet.
他们还没有说对。
【285】They might be right maybe 70 years plus one or something like that is what you need.
他们也许会是对的——也许你需要的是70年再加一年,或者类似的时间。
【286】But I think that our approach has always been, you've got to push to the limits of this technology to really see it in action, because that tells you then, oh, here's how we can move on to a new paradigm.
但我认为我们的方法一直是:你必须把这项技术推向极限,真正看到它的实际表现,因为那会告诉你:哦,我们可以这样迈向新的范式。
【287】And we just haven't exhausted the fruit here.
而我们只是还没有摘尽这里的果实。
【288】CA: I mean, it's quite a controversial stance you've taken, that the right way to do this is to put it out there in public and then harness all this, you know, instead of just your team giving feedback, the world is now giving feedback.
CA:我的意思是,你采取的立场相当有争议:正确的做法就是把它公开发布出去,然后利用这一切——你知道,不再只是你的团队给出反馈,而是全世界都在给予反馈。
【289】But ...
但是……
【290】If, you know, bad things are going to emerge, it is out there.
如果,你知道,坏事真的会出现,它已经摆在世人面前了。
【291】So, you know, the original story that I heard on OpenAI when you were founded as a nonprofit, well you were there as the great sort of check on the big companies doing their unknown, possibly evil thing with AI.
所以,你知道,我听到的关于OpenAI的最初故事是:当你们作为非营利组织成立时,你们是对那些用AI做着未知的、可能是邪恶之事的大公司的一种重要制衡。
【292】And you were going to build models that sort of, you know, somehow held them accountable and was capable of slowing the field down, if need be.
而且你们要建立的模型,你知道,要以某种方式让他们承担责任,并且在必要时能够让这个领域放慢脚步。
【293】Or at least that's kind of what I heard.
或者至少我是这么听说的。
【294】And yet, what's happened, arguably, is the opposite.
然而,可以说,实际发生的事情恰恰相反。
【295】That your release of GPT, especially ChatGPT, sent such shockwaves through the tech world that now Google and Meta and so forth are all scrambling to catch up.
你们发布的GPT,尤其是ChatGPT,在科技界掀起了如此巨大的冲击波,以至于现在谷歌、Meta等公司都在争先恐后地追赶。
【296】And some of their criticisms have been, you are forcing us to put this out here without proper guardrails or we die.
他们的一些批评是:你们在逼我们——要么在没有适当护栏的情况下把这些东西推出来,要么就等死。
【297】You know, how do you, like, make the case that what you have done is responsible here and not reckless.
你知道,你要如何论证你们所做的事情是负责任的,而不是鲁莽的?
【298】GB: Yeah, we think about these questions all the time.
GB:是的,我们一直在思考这些问题。
【299】Like, seriously all the time.
真的,是时时刻刻都在想。
【300】And I don't think we're always going to get it right.
我不认为我们总能把事情做对。
【301】But one thing I think has been incredibly important, from the very beginning, when we were thinking about how to build artificial general intelligence, actually have it benefit all of humanity, like, how are you supposed to do that, right?
但我认为有一件事极其重要:从一开始,当我们思考如何构建通用人工智能、并真正让它造福全人类时——你到底该怎么做到这一点,对吧?
【302】And that default plan of being, well, you build in secret, you get this super powerful thing, and then you figure out the safety of it and then you push “go,” and you hope you got it right.
而那种默认的计划是:好吧,你在秘密中构建,你得到了这个超级强大的东西,然后你搞清楚它的安全性,然后按下"开始",并希望自己做对了。
【303】I don't know how to execute that plan.
我不知道如何执行那个计划。
【304】Maybe someone else does.
也许其他人会这样做。
【305】But for me, that was always terrifying, it didn't feel right.
但对我来说,这总是很可怕, 感觉不对劲。
【306】And so I think that this alternative approach is the only other path that I see, which is that you do let reality hit you in the face.
所以我认为,这种替代方法是我所能看到的唯一另一条路,那就是你确实让现实打在你的脸上。
【307】And I think you do give people time to give input.
而且我认为你确实要给人们提供意见的时间。
【308】You do have, before these machines are perfect, before they are super powerful, that you actually have the ability to see them in action.
在这些机器变得完美之前、在它们变得超级强大之前,你确实有能力看到它们的实际表现。
【309】And we've seen it from GPT-3, right?
我们已经从GPT-3中看到了,对吧?
【310】GPT-3, we really were afraid that the number one thing people were going to do with it was generate misinformation, try to tip elections.
对于GPT-3,我们真的很害怕人们用它做的头号事情会是生成虚假信息、试图操纵选举。
【311】Instead, the number one thing was generating Viagra spam.
结果相反,头号用途是生成伟哥垃圾邮件。
【312】(Laughter) CA: So Viagra spam is bad, but there are things that are much worse.
(众笑) CA:所以伟哥垃圾邮件很糟糕, 但有些事情要糟糕得多。
【313】Here's a thought experiment for you.
这是给你的一个思维实验。
【314】Suppose you're sitting in a room, there's a box on the table.
假设你坐在一个房间里, 桌子上有一个盒子。
【315】You believe that in that box is something that, there's a very strong chance it's something absolutely glorious that's going to give beautiful gifts to your family and to everyone.
你相信那个盒子里装的东西,很有可能是某种绝对美妙的东西,会给你的家人和每个人带来美好的礼物。
【316】But there's actually also a one percent thing in the small print there that says: “Pandora.” And there's a chance that this actually could unleash unimaginable evils on the world.
但小字里其实还写着百分之一的可能性:"潘多拉"。它有可能给世界释放出难以想象的邪恶。
【317】Do you open that box?
你会打开那个盒子吗?
【318】GB: Well, so, absolutely not.
GB:嗯,所以,绝对不会。
【319】I think you don't do it that way.
我认为不能用那种方式去做。
【320】And honestly, like, I'll tell you a story that I haven't actually told before, which is that shortly after we started OpenAI,
老实说,我要给你讲一个我以前从未讲过的故事:就在我们创办OpenAI后不久,
【321】I remember I was in Puerto Rico for an AI conference.
我记得我在波多黎各 参加人工智能会议。
【322】I'm sitting in the hotel room just looking out over this wonderful water, all these people having a good time.
我坐在酒店房间里,望着这片美妙的水面,所有人都玩得很开心。
【323】And you think about it for a moment, if you could choose for basically that Pandora’s box to be five years away or 500 years away, which would you pick, right?
你想一想:如果你基本上可以选择让那个潘多拉的盒子在五年后打开,还是在500年后打开,你会选哪一个,对吧?
【324】On the one hand you're like, well, maybe for you personally, it's better to have it be five years away.
一方面你会想:好吧,也许对你个人来说,五年后打开更好。
【325】But if it gets to be 500 years away and people get more time to get it right, which do you pick?
但如果它是在500年后,人们就有更多时间把事情做对,你会选哪个?
【326】And you know, I just really felt it in the moment.
你知道,那一刻我真切地感受到了。
【327】I was like, of course you do the 500 years.
我想:当然选500年。
【328】My brother was in the military at the time and like, he puts his life on the line in a much more real way than any of us typing things in computers and developing this technology at the time.
我哥哥当时在军队服役,他以一种比我们这些当时在电脑上打字、开发这项技术的任何人都真实得多的方式,把自己的生命押在了前线。
【329】And so, yeah, I'm really sold on the you've got to approach this right.
所以,是的,我完全认同"你必须正确对待这件事"的观点。
【330】But I don't think that's quite playing the field as it truly lies.
但我不认为那种想法如实反映了现实的局面。
【331】Like, if you look at the whole history of computing, I really mean it when I say that this is an industry-wide or even just almost like a human-development- of-technology-wide shift.
比如,如果你纵观整个计算的历史,我是认真的:这是一场全行业的、甚至几乎是人类技术发展层面的转变。
【332】And the more that you sort of, don't put together the pieces that are there, right, we're still making faster computers, we're still improving the algorithms, all of these things, they are happening.
而且你越是不把已经摆在那里的那些碎片拼在一起——对吧,我们仍在制造更快的计算机,我们仍在改进算法,所有这些事情都在发生——
【333】And if you don't put them together, you get an overhang, which means that if someone does, or the moment that someone does manage to connect to the circuit, then you suddenly have this very powerful thing, no one's had any time to adjust, who knows what kind of safety precautions you get.
如果你不把它们拼在一起,就会形成一种"悬垂"(overhang)。这意味着一旦有人做了,或者在某人真正成功接通电路的那一刻,你就会突然拥有这个非常强大的东西,而没有人有任何时间去适应,谁知道会有怎样的安全防范措施。
【334】And so I think that one thing I take away is like, even you think about development of other sort of technologies, think about nuclear weapons, people talk about being like a zero to one, sort of, change in what humans could do.
所以我认为我得到的一个启示是:即使你考虑其他技术的发展——想想核武器——人们会说那是一种从零到一的、对人类能力的改变。
【335】But I actually think that if you look at capability, it's been quite smooth over time.
但实际上我认为,如果你看能力本身,它随时间的变化是相当平滑的。
【336】And so the history, I think, of every technology we've developed has been, you've got to do it incrementally and you've got to figure out how to manage it for each moment that you're increasing it.
因此我认为,我们开发的每一项技术的历史都是:你必须增量地去做,而且在提升它的每一个时刻,你都必须弄清楚如何去管理它。
【337】CA: So what I'm hearing is that you ...
CA:所以我听到的是你。。。
【338】the model you want us to have is that we have birthed this extraordinary child that may have superpowers that take humanity to a whole new place.
你想让我们持有的模型是:我们生下了这个非凡的孩子,它可能拥有能把人类带到全新境地的超能力。
【339】It is our collective responsibility to provide the guardrails for this child to collectively teach it to be wise and not to tear us all down.
为这个孩子提供护栏,集体教导它变得明智、而不是把我们全都毁掉,是我们共同的责任。
【340】Is that basically the model?
这基本上就是模型吗?
【341】GB: I think it's true.
GB:我认为这是真的。
【342】And I think it's also important to say this may shift, right?
而且我认为同样重要的是要说明:这可能会改变,对吧?
【343】We've got to take each step as we encounter it.
我们必须在遇到每一步时,走好每一步。
【344】And I think it's incredibly important today that we all do get literate in this technology, figure out how to provide the feedback, decide what we want from it.
而且我认为在今天极其重要的是,我们所有人都要真正熟悉这项技术,弄清楚如何提供反馈,决定我们想从中得到什么。
【345】And my hope is that that will continue to be the best path, but it's so good we're honestly having this debate because we wouldn't otherwise if it weren't out there.
我希望这将继续是最佳路径,但说实话,我们能进行这场辩论真是太好了,因为如果它没有发布出来,我们就不会有这场辩论。
【346】CA: Greg Brockman, thank you so much for coming to TED and blowing our minds.
CA:Greg Brockman,非常感谢你来到TED,让我们大开眼界。