Source: https://x.com/hwchung27/status/1945355238187393257
I’m Hyung Won, and it’s good to be here today. I have been working on AI for a few years. At OpenAI, I’ve been mainly focusing on o1-preview, o1, and most recently Deep Research. Now I’m more on the agent side of things. I’m very excited about reasoning and agents.
All right, let me get started. I will talk about some AI stuff today, obviously. But before that, let’s start by looking at this ChatGPT-generated image of a flower, a budding flower. If you stare at this for a minute or ten minutes, you don’t really see any change. Does that mean there is no change underlying this process? There is, and if you wait long enough, you’ll see a big change: the bud opening into a full-blown rose.
What I’m trying to get at with this toy image is that we’re really not good at perceiving the changes that occur over minutes or even days or years. But we’re pretty good at minute-to-minute scale changes. I think there’s probably an evolutionary explanation for this. If you can perceive the changes in the environment in minutes, that’s probably very helpful for survival. But if it’s changes over a year, not so much. I think that’s probably some built-in deficiency that we need to correct.
Why am I talking about this? I think AI is probably the fastest-moving technology of all time. Even then, it doesn’t move in minutes or hours. It still moves over a timeframe of a few years, or even a decade. Given the deficiency I just talked about, we might be underestimating the change, especially the magnitude of the change that AI is bringing about.
At this point, I don’t have to convince anyone that AI is important. Even a few years ago, I started my talk like that, but now I think that’s a given. But I do want to emphasize that maybe what everyone’s thinking about—how big this change is—is very different. If anything, I think we might be underestimating it. Especially for AI, which I’m going to argue is a leverage mechanism for individuals and humanity as a whole, that aspect might be somewhat underestimated.
I’ll get into the details of what I mean by that, but let’s build intuition slowly, starting with this keyword: leverage. It’s a very important concept, often used casually and sometimes overloaded depending on the context. Especially in the Silicon Valley Bay Area, this term is used a lot. I think it’s such an important word that it’s worth spending some time building an intuition around it.
For me, the first encounter with this concept was probably in classical mechanics, where we have this lever. In this case, let’s think about applying a downward force on the left-hand side of the screen, and as an output of that, we’re lifting up the one-kilogram mass on the right. If we lengthen the lever arm, we can actually lift a heavier object. What this means is that the input downward force is the same, but by increasing the leverage, the output is increased by 3x.
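To make the lever arithmetic concrete, here is a minimal sketch of the torque balance behind it. This is not from the talk; the function name and the numbers are illustrative, and the lever is assumed ideal (rigid and frictionless):

```python
# Torque balance for an ideal lever: F_in * d_in = F_out * d_out,
# so F_out = F_in * (d_in / d_out). Numbers here are illustrative.

def output_force(input_force, input_arm, output_arm):
    """Force delivered at the load end of an ideal (rigid, frictionless) lever."""
    return input_force * input_arm / output_arm

push = 10.0  # the same downward input force, in newtons

print(output_force(push, input_arm=1.0, output_arm=1.0))  # 10.0: equal arms, no gain
print(output_force(push, input_arm=3.0, output_arm=1.0))  # 30.0: 3x longer input arm, 3x output
```

The input stays fixed; only the geometry of the mechanism changes, and the output triples.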
This is actually a very general concept, and I’d like to call out—this is my working definition of leverage: it’s a mechanism through which a small change or no change in input results in a larger change or very large change in output. This is a very general thing, and it’s not just about the classical mechanics context that we just saw. I think it can be applied to many other places.
This is a very important concept. Many people want to increase the output. I want to contribute more, I want to generate more. The first thing, the most natural thing that comes to our mind is probably “How do I work harder? How can I increase my input? I want to sleep less,” things like that. But I think there’s a certain limit to it. Instead, the more important question is: How do I increase the output without actually increasing the input? Or how can I disconnect the relationship between the input and the output, or the linear relationship between them? That is getting to the core of this leverage mechanism. What you’re looking for is: what leverage mechanism do I have and can I have? If you want to increase the output, that’s the question we have to think about.
As a general concept, there are many different ways of thinking about leverage. My personal favorite is by Naval Ravikant, from this book, which I actually have here and strongly recommend. It’s not written by him; it’s a collection of many of his thoughts. According to Naval, there are three types of leverage: human labor, capital, and code and media.
Let’s think about these things individually. The first type is human labor, the oldest type of leverage and, as such, probably the most familiar one. As an example, let’s think about a scenario where I want to build a pyramid. Without leverage, I would be building it alone, and that’s probably quite difficult. With leverage, I can hire thousands of human workers. My input is the same or even less, since I no longer have to work as much, but the output is much higher because there are thousands of people working on it. This is a permissioned type of leverage, because I need to ask these people for permission. We still have human labor as one of the main leverage mechanisms in society.
The second type is capital. Let’s think about a scenario where I want to invest in real estate worth a million dollars. I only have 200k, so I borrow 800k from a bank. Let’s say I get lucky and the property doubles in valuation to two million. The asset only doubled, but my return on the money I actually put in went up by a lot more, because of what I borrowed. This is the second type, and I think it’s more characteristic of the 20th century and so on.
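To make that arithmetic concrete, here is a minimal sketch using the numbers from the talk. This is an illustration, not from the talk itself, and it deliberately ignores interest, taxes, and fees:

```python
# Return on equity with and without borrowed capital.
# Ignores interest, taxes, and fees for simplicity.

def equity_multiple(price, equity, sale_price):
    """How many times over your own money comes back after selling and repaying the loan."""
    loan = price - equity
    return (sale_price - loan) / equity

# Unleveraged: buy $200k of property outright, and it doubles.
print(equity_multiple(200_000, 200_000, 400_000))  # 2.0: same 2x as the asset

# Leveraged: $200k down, $800k borrowed; the $1M property doubles to $2M.
print(equity_multiple(1_000_000, 200_000, 2_000_000))  # 6.0: the asset doubled, equity 6x'd
```

The asset moves 2x in both cases; only the borrowed capital changes, and the return on the investor’s own money jumps from 2x to 6x.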
The third type is more recent, especially common in this area, which is code and software. If I write the code for an app and build it, and there’s one user who gets value N (let’s suppose that number is positive), then while I’m sleeping, one more user downloads and installs it and gets another value N. The output just doubled without me having to do any additional work. That’s possible because it’s software, which can be copied and pasted essentially for free, a very interesting property. A lot of recent value has been created by leveraging this.
Media is similar. Let’s say I give a lecture to 200 people. Those people got some value—again, assuming that’s positive. I post it on YouTube. Any additional view of the same lecture, I don’t have to do any work, but somehow the value goes up, and the limit is actually pretty much endless. That’s the new type of leverage too.
Historically, large wealth creation has utilized these forms of leverage. Take 20th-century wealth: much of it, in the financial industry, relied on capital leverage. Recently, especially in this Bay Area, tech companies have leveraged the fact that if you write code, the output can be multiplied almost indefinitely. Those are the wealth-creation mechanisms. If you look at the history of large fortunes, you can probably identify such leverage behind each one.
But the upside of a leverage mechanism also gets competed away. What I mean is: if something is good, many people realize it, and there’s going to be competition. If you start a company that is only leveraging software, without any other technology, it’s probably much harder to succeed now than, say, 20 years ago. Or if you want to become a YouTuber now, that’s probably more difficult than ten years ago, just based on the competition.
What I’m trying to get at is: when a leverage mechanism has just become possible because of some new technology, much larger value and returns are possible. Then it gets competed away. So I think it’s very important to think about what new leverage mechanisms are becoming available. Obviously, I’m going to argue that AI is the relatively new one coming into the picture, and it’s slowly expanding its scope and reach from the individual level to groups of people, to the point where it can benefit humanity as a whole.
Let’s start looking at this at an individual level. I personally use AI, or ChatGPT, mostly in an education context. That’s probably the biggest use for me. I spend a lot of time, especially on weekends, just learning about new concepts and asking questions, sometimes for hours at a stretch.
Here, again, think about AI as leverage: for this learning, what is the input and what is the output? The input is my time and effort to understand some concept I don’t yet understand, and the output is the conceptual understanding happening in my brain, and maybe also some knowledge, but that’s less important.
With AI, the same input results in a larger output. Let’s say I’m trying to learn something specific about distributed systems. Before this kind of generative AI, I would have to Google it and probably find a Wikipedia page, which is typically not beginner-friendly. I’d read it, not understand it, and probably not feel great, and then I’d try to find an introductory course, maybe a textbook. But I only want one concept out of all this, and I still have to build up the context, at least the terminology, which is very time-consuming.
Now, because AI can contextualize everything I know and dynamically generate just the right amount of material at the right difficulty, I can learn much more easily. I think that’s the big lesson here: the barrier to learning a new area is essentially collapsing.
This is good, and you’ve probably heard about it a lot. But is this good in every respect? I think we have to be careful and take a comprehensive look. When everything is easier to learn and people are learning everything, the opportunity cost of not learning gets higher. As an extreme example: suppose you don’t use AI at all (you’re not lazy, you’re just doing your own thing) while everyone else is learning new things and getting better. Then you fall behind in society. That’s the opportunity cost. You didn’t contribute to the change, but it still redefines, relative to everyone else, which skills are valuable.
I think in society, which skills are valuable is determined by supply and demand and by what is scarce, as opposed to the objective value a skill provides. One extreme example is human vision. From an objective perspective, this is an extremely complicated and advanced capability. If you study computer vision, you probably know that human vision is incredible. You sometimes recognize a friend in a setting where it should be very difficult, and it surprises you. But this incredible capability is so abundant that having it doesn’t help you excel in modern society.
Similarly, scarcity is a very important necessary condition for any highly valuable skill. I often think about what would be great skills or opportunities to have. A good rule of thumb: whatever evolution did not equip us with is a good starting point, because if it did, everyone has it built into their DNA, and it’s probably not that scarce.
Those are some implications of learning getting easier, and they’re probably not that obvious. The acquisition of new knowledge gets cheap. The scarce factor becomes the motivation to explore, and curiosity is the characteristic that will probably matter more. It has always been important, but I think it’s getting more and more so. The cost of learning is going down, but not to zero: you still have to overcome a barrier when learning a new concept, and that’s not something everyone finds pleasant, because you feel challenged, and that cognitive challenge is uncomfortable.
To overcome that, curiosity can say, “I know there’s pain, but I need to get this because I’m so curious about it.” That’s a really strong force. If you’re not curious, I think there’s a correction mechanism: “Okay, I’m going to go through this short-term pain, but there will be a long-term reward, something fulfilling.” If you build up enough of this reward cycle, I think you can get over it.
More broadly, technology changes what is scarce. Just being aware of such changes, even if you’re not directly contributing to it, is very important.
That’s one way in which AI is acting as leverage: learning. The other, maybe more intuitive one, is the AI agent. This is probably the most interesting research area of 2025, and probably beyond.
Here, AI agents combine the two types of leverage mechanisms we saw before. The first is human labor, because AI agents are doing the work for you, as if you had hired them. The second is that, at least for now, AI agents are software only, so you can copy and paste them. If you want ten agents working together, just spin up ten. If you want twelve, just copy two more. You don’t have to ask anyone’s permission. This permissionless, composite form of leverage is quite profound if you think about it.
I think that’s going to be the main source of wealth generation going forward. This is very new; it has just gotten started. If you have used Deep Research, that’s to me the best-functioning AI agent as of now. There will probably be more, but that’s at least the first working agent for me.
That increases my output by a lot, and probably many others’ too. Individuals are getting quite supercharged, which means small teams of individuals generating really big value. That’s becoming more and more common. You might have heard about startups with 10 or 20 people generating hundreds of millions of dollars in revenue. That was probably unimaginable ten years ago. It’s still uncommon, but we’re seeing it. Behind the scenes, I think it’s AI acting as leverage, with individuals simply generating a lot more output.
Previously, if you wanted to increase the output, you again had to think about the leverage mechanism. If, after raising funds and so on, capital leverage is out of the question, then you really have to think about human-labor leverage and hire more people. But human collaboration, especially at larger scale, has quite a bit of overhead. There’s communication, which is a very difficult problem, and maybe some people don’t get along with others. Adding one person to, say, a 100-person group doesn’t mean the output goes up by one percent; the contribution can even be negative.
Now, with supercharged individuals, taking on that overhead becomes less attractive, and maybe we’ll see more and more small teams generating quite a bit of value, with more companies being of that size. Obviously, there will still be big companies, but I think small ones might become more common.
So far this has been at the individual level, with some implications for groups as a result. But, again like the flower analogy from the beginning, there is a change that is very big yet so slow that I think many people underestimate it, and it’s acting at the level of humanity.
Let’s think about this. If we think about all of humanity, what are the tasks or goals we might have? There’s no single right answer, but to me, one of the most important is to continue to generate value and to thrive. What is the most sustainable engine of growth and value creation? There are probably many, but for me, the most sustainable engine is scientific advances: discovering new knowledge. Suddenly, what you thought of as a non-resource becomes a resource, because now you have new knowledge with which to exploit it.
Oil, for example, is just a sticky liquid. Once you understand thermodynamics and know how to burn it, it becomes an enormously valuable resource. There are many instances like that.
If we think about this from a historical perspective: since roughly the 17th century, the time of the Scientific Revolution, wealth creation really took off; economic metrics have followed a hockey-stick shape since then. Back then, there was probably a lot of low-hanging fruit in scientific progress, because if you’re among the first to do science, there are a lot of easy things to do. I’m not saying everything was easy; there were other challenges given the context. But from an objective-complexity perspective, it was probably much easier than what is happening now.
Advancing science in modern society is a lot more complicated. Think about Newtonian versus quantum mechanics, or about making advanced computer chips; that is way beyond any single human’s capability. It’s getting a lot more complicated, and it sometimes involves larger collaborations among people, more capital, and so on.
Also, in addition to this increasing complexity of cutting-edge technologies, human intelligence is not growing; it’s essentially stagnant compared to the rate at which scientific complexity increases.
These factors put together are, I think, the bottlenecks to further advancing this core mission of keeping scientific progress going. Historically, whenever such a bottleneck has appeared, we have done a great job of finding a way out: we build tools to unblock ourselves from achieving the mission. This time I think we should do the same, with AI as that tool, the most useful one, and maybe even superhuman in research capability, so that we can continue this scientific advance.
I think there are many purposes of AI, but to me this is the single most important one: to augment us in continuing this grand mission of scientific advances.
Now again, from the perspective of leverage, let’s think about the input and the output. The input is collective human effort: scientists here and there, working together explicitly or implicitly. The output is the scientific progress of all.
How can AI act as leverage here? I think I’m going to mention two different things.
The first one is this: we highly encourage being a specialist, especially in the scientific community. There are small numbers of people with specialized knowledge, and they are siloed in different locations and communities. So it’s hard to collaborate. You might not even know what options exist for collaboration across different areas of expertise.
To me, the mental picture I have of human knowledge is of very sharp spikes in a high-dimensional space. Expertise sits here and there, with so much empty space between. I think AI acts as a kind of envelope around this spiky space, connecting all this specialist knowledge. If you’re familiar with optimization, this is like the convex hull: the envelope around these sharp corners here and there. I think this is one of the roles of AI.
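The convex hull mentioned above is a precise geometric object: the tightest envelope containing a set of points. As a toy illustration (not from the talk; the points are made up), here is a minimal 2D convex hull via Andrew's monotone chain algorithm, where the outermost "spikes" define the envelope and interior points sit inside it:

```python
# Convex hull of 2D points via Andrew's monotone chain.
# The hull is the envelope around the outermost "spiky" points.

def cross(o, a, b):
    """Z-component of the cross product OA x OB; > 0 means a counterclockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counterclockwise order, interior points excluded."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:  # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # drop duplicated endpoints

# Five "specialties": four sharp corners and one point deep inside.
points = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 1)]
print(convex_hull(points))  # [(0, 0), (4, 0), (4, 4), (0, 4)]: the interior point is enveloped
```

The analogy in the talk is that specialist knowledge is the spiky point set, and AI is the hull connecting the spikes.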
When I was working on Deep Research, this wasn’t really obvious, but as I worked with it more and got more value out of it, this is the mental picture I started building. What I’m trying to get at is this: human experts are very specialized, but their cooperation carries overhead from communication, physical separation, and so on, and AI is making that much more efficient.
What might be happening already is this: because of those separations and the inability to cooperate efficiently, we may have a huge overhang of existing knowledge waiting to be synthesized. Even just by combining existing knowledge, we might get a lot of value, and maybe we can call that new knowledge. That territory is, I think, completely uncharted, simply because of how specialized experts have become and the communication bottlenecks among many people.
These are, I think, the low-hanging fruit of AI acting as leverage to advance the science mission. But I don’t think that’s enough; we should probably go beyond it. Going forward, we can expect advanced reasoning, maybe even better than human scientists, and the ability to generate new ideas and knowledge.
I think this is still rare. I’m hearing some vague anecdotes that it’s possible, like o3 helping scientists generate new ideas as a brainstorming partner, but I think it can go a lot further than that. I would expect this kind of ability to emerge in future generations of models, if it isn’t already there. Once that happens, this will be a never-stopping research engine that works all the time, with humans and agents working together. That will be the main leverage factor going forward: AI helping this mission of scientific progress.
That’s all I have today. I’ve talked about many different concepts. AI is an important thing; everyone knows that. But I would invite you to ask: how big a change am I imagining? Is there a chance I’m underestimating its magnitude, especially when I think of it as a new form of leverage? I would invite you to think about this. Yeah, that’s it. Thanks.
Edited by Claude (claude-sonnet-4-5-20250929)