00:00
Is AI gonna kill us all?
00:04
Maybe.
00:06
Emmett Shear is the CEO of Twitch, which was acquired by Amazon in twenty fourteen. He joins us now. I started Twitch
00:14
to help people watch other people play video games on the internet. The creator and co-founder of Twitch. Watch other people play video games. Who knew? Emmett knew. I guess that's the answer.
00:24
What types of ideas are you noticing or standing out to you that are interesting? For the first time in maybe five or seven years, it feels like credibly trying to start
00:31
a consumer internet company, like the ones that, like, I was so excited to start in two thousand seven, is, like, potentially a good idea. That's because of AI. You mentioned AI might become so intelligent it kills us all.
00:42
This podcast is really growing. I don't want the world to end. I think it's gonna be okay.
00:47
But it's, the downside is so bad. It's, like, really probably worse than nuclear war.
00:54
That's a really bad downside.
00:56
I think of it as a range of uncertainty.
00:58
And I would say that the true probability
01:01
I believe is somewhere between
01:13
Alright. What you're about to hear is a conversation I had with Emmett Shear. Emmett was the creator and co-founder of Twitch. If you don't know about Twitch, you know, you're living under a rock. It's, like, one of the, I don't know, five most popular websites in the States right now. It is a,
01:28
a place where you can go to watch other people play video games. Of all things. Watch other people play video games. Who knew? Emmett knew. I guess that's the answer. So he was the creator and co-founder of that and built it up. It's a multi-billion-dollar company. They sold it to Amazon many years ago, seven or eight years ago, for about a billion dollars, and it's grown many times since then. He finally retired after seventeen years of the journey.
01:52
I got to know Emmett because he bought my previous company. So we got acquired by Twitch.
01:56
And he was, like, my, you know, quote unquote boss for
02:00
my time when I was at Twitch. So I got to see this guy firsthand. He's the real deal, and I've been wanting to get him on the pod since those early days when I first met him. I was like, this guy is great. We talked about a bunch of things. We talked about some ideas of, like, how he would use AI if he was gonna create another company. Like, I think he's good. He's retired now from that game of operating a company, but if he was gonna do it, this is what he would do.
02:23
So we talk about AI ideas,
02:25
We talked about why he thinks AI might kill us all, might, you know, be the big doom scenario,
02:32
which is interesting because he's not just a guy who's gonna go cry wolf. He's not a pessimist. He's not just a journalist who hates tech. This is a techno-optimist. This is a guy who believes in tech. He's
02:43
a very, very intelligent guy, and he sees, you know, a probability. He gave us a percentage probability that he thinks could be,
02:50
sort of the doomsday scenario, and why he thinks that could be the case and what we should do about it. So we talk about AI, we talk about some of the frameworks that he has for building companies,
03:00
and we didn't talk too much about, like, the origin of Twitch. I feel like he's done that a bunch of times, so we kinda stayed away from that.
03:06
But it was a wide ranging conversation.
03:08
And,
03:09
for those who are watching this on YouTube, I apologize. The studio that we booked in San Francisco,
03:16
they screwed up the video. So we don't have video for YouTube, just the audio-only version. So you'll see our profile pictures. My bad. Sorry about that.
03:26
You know, gotta pick a better place. Gotta pick a better studio, I guess. But,
03:30
anyways, enjoy this episode with Emmett Shear.
03:34
Somebody said creativity is not like a faucet. You can't just turn it on. And I think, actually, if you polled, like, a hundred people, most would be like, yeah, of course, creativity is this sacred special thing that only happens if you've meditated in the morning, the room is perfectly right, and you've had your L-theanine in your coffee or whatever.
03:50
And you were like,
03:51
no. For me, it is like a faucet. Watch:
03:56
I could just write and just keep generating more ideas. Yep. I love that for two reasons. One,
04:01
I love that you'll just be like,
04:03
no. Actually, this. That's like a consistent thing I've seen you do. And the second is I think that's very true about you. And I wonder is that practiced
04:12
or is that innate? Like, if I if there's a researcher studying you when you were, like, ten years old, do you think they would have been like, oh, this this person's different in these ways? What would have seemed different or special about you at the time?
04:25
If there was a
04:28
nature-nurture break on this, it happened very early, because
04:32
by the time I was ten,
04:33
you would definitely notice the same thing. I'm not really that different. I would be much less effective, but, like, as a ten-year-old, I already had that same experience. But you were different than other ten-year-olds. Yeah. Other ten-year-olds, well, I would actually say I was less different then. I think most people, actually most children, have this experience already. I think most ten-year-olds and definitely most five-year-olds are capable of generating ideas for what to do about something or to, like, play pretend
05:00
almost indefinitely. They don't run out of ideas. It's as you get older, somehow, what you learn to do is you learn to stomp down the ideas that are, like, bad,
05:08
and to not say dumb things. But the more pressure you put on yourself not to say dumb things, the more
05:14
your inner idea generator
05:17
it, like, gets disrupted.
05:19
And I say a lot of dumb things. Like, when I'm generating ideas,
05:22
I may not put weight down on them, but
05:24
most of the ideas will be bad. They'll have something obviously wrong with them. And they give you this advice when you go to, like, someone who teaches you how to brainstorm: there are no bad ideas here. Obviously not true. There's lots of bad ideas. Most of your ideas are bad. Yeah. The actual advice is, like, don't stop at the bad ideas. What you're trying to do is disable that censor that most people have installed that's, like, no, bad. No, bad. Don't be stupid. Don't be stupid. And I think I was, like, mal-socialized. It never occurred to me to have that.
05:55
Like, I just never got the censor installed. And
05:58
why that is the case, I'm not sure. But actually, I think I'm the one who is unchanged in some sense. I'm a little more childlike in that way. And everyone else is the weird one who, like, why? How did you end up so damaged by your life that your inner wellspring of creativity has been -- Right. -- crushed?
06:14
I think that process is actually very simple. This process happens with all kinds of things in people's minds. You start from
06:21
some capability, something you can do, some
06:24
behavior.
06:25
And if, when you do that behavior, you try that thing, you receive negative feedback, which can be external, or, actually I think even more often, internal: you're like, oh, I screwed it up. Oh, it's bad. Oh, disappointment.
06:36
You learn not to do that thing pretty rapidly. I mean, so
06:40
that leads you to doing it less, which means you're less skillful at it, which leads you to doing it less, which,
06:45
and that cycle ends in you being very bad at something. Like, I'm bad at math. No, you're not. The kind of math people mean when they say I'm bad at math, they don't mean I'm bad at, like, abstract algebra proofs. They mean I can't do arithmetic or basic algebra,
07:01
and that's just imaginary. Like, everyone can do that. It's easy. They got stuck in one of these, like, spirals. And getting out of one can be very hard. And I guess I think that's what happens to people's creativity.
07:12
I don't know. I didn't go through the process myself. So now, as I'm saying this out loud, actually, the idea that comes up for me is, like, oh, well, maybe what it is is that I had better ideas. That's,
07:21
like, that's how I got into the reward loop. Or I had an environment that was unusually
07:27
positive and reinforcing for me having ideas. And so I would have ideas,
07:31
And it would go well
07:33
and that would lead me to having more ideas and more practice at having ideas, which would go well. And then you wind up just never breaking that loop. I have a trainer who comes over to my house. He always says this thing because, like, my kids will come down during the session. I'm always like, oh, sorry, like, obviously an annoying two-year-old is here, almost getting hurt on all the weights, and that's probably, like, not what you want in your session. So I was like, oh, sorry. Sorry. Sorry. And he's just like, dude, no. And he's like, kids and dogs. I go, what? He goes, I love to be around kids and dogs. They got it right. They know life. He's like, a dog is, like, unconditional love: happy, playful,
08:08
you know, super loyal. He's like, what's not to learn from a dog? I wanna learn everything I can from a dog or kids. He's like, look what she's doing. She just made up a game on this thing. Like, here I am trying to do a serious workout, and she made this her play place. Yep. She can't wait to come down here. He's like, I wish all my clients couldn't wait to come down to the gym. I was like, damn, this guy's right. And one of the things I like is figuring out people's isms, their philosophies.
08:29
And you were like, oh, I thought of one on the way here. Explain what it was. It was: have you tried just solving the problem? What does that mean? So there's a meme on the internet, I think it started on Twitter, which is like, have you tried solving the problem by...? And then an infinite list of possible endings. The tweet is always, have you tried solving the problem by, like, ignoring the problem? Have you tried solving the problem by spending more money on it? Have you tried solving...? And one of my favorite ones of those, which has become almost like a life motto, is: have you tried solving the problem by solving the problem? And
08:59
that sounds dumb. Right? It's one of those, like, Zen koan pieces of advice that when you first hear it, you're like, are you serious? Like, that's the advice, solve the problem by solving the problem? But what you notice when you try to help people with problems a lot is
09:14
oftentimes people will have a problem. It'll be really obvious what the problem is.
09:18
And they'll come to you for advice, like, well, how can I deal with the consequences of this problem? Or how can I avoid needing to solve this problem? Or how can I get someone else to solve this problem?
09:28
Or how have other people solved the problem in the past? Which is closer to the right answer, or what can be the right answer.
09:33
And the point of the saying is to remind you that sometimes the way to solve the problem is just, like, to actually try solving the problem. Don't deal with the symptoms. Don't accept the symptoms. Don't find a hack around it. Like, the problem is the website is not fast enough. And instead of, like, trying to figure out how we can make a loading spinner that distracts people from that fact, what if we just made it so fast that you don't need a loading spinner?
09:57
It's interesting because that's very good advice when
10:01
the problem actually is solvable. I mean, where people are flinching away from it because
10:06
something about it is aversive, even though the problem isn't really unsolvable. Like, if they worked on it for six months, it would go away. And it's worth solving.
10:15
Whereas there are these problems where, like, you're trying to make a perpetual motion machine. You're trying to do something that is actually too hard, and solving the problem by solving the problem is a huge mistake. You should actually stop trying to solve the problem, and you should be looking for a hack around needing to solve the problem, or looking to live with it more effectively. But I find, actually, on balance,
10:34
at least with most people I talk to, I help,
10:37
most people I know, and maybe it's people in tech, like, love the hack. They're always looking for the easy,
10:44
fast solution that cuts around needing to solve the problem. And it's very helpful. It's the most often helpful form of that advice, in my opinion: bringing people back to just solving the problem. I find that the advice I like the most, or the sayings that resonate with me the most, are the ones where it's like, you spot it, you got it. It's the advice I needed. That's why it resonates with me. That's why I like giving it out, because, like, I personally experienced it. Have you personally experienced that, or what's an example where you remember
11:12
trying to do everything but solve the problem, and then you finally realized, shit, I should have just solved the problem? It's an interesting question. What is it, you spot it, you got it? It's like, noticing it is half the battle, basically. It's sort of the smart-person version of whoever smelt it dealt it. Yeah. Yeah. Hundred percent. You only notice this in other people because you've seen it in yourself too. Otherwise, you wouldn't be as observant of it. My version of this is, we give the advice we need to hear. Yes. Exactly. Which is the same basic idea. It's actually not always true. It's one of those really good heuristics where, like, sure, half the time when you give advice, it won't actually be for you, but half the time it is, and noticing it is so powerful that you should just check every piece of advice you give for, like, wait a second, is this advice I need to hear right now? When it comes to the, like, have you tried actually solving the problem, I think I'm pretty good at that in general. I think I often need it in a more meta sense. Like, if I'm confronted with, like, a thing that needs to be programmed, I will often go just program the thing, but I have a tendency to,
12:11
like, look for ways that I can solve the problem, and not ways that the problem can be solved. And for me, the version of this is almost always, like, what if I were to ask somebody else for help? It just, like, doesn't even occur to me
12:24
to go do that. I'll just indefinitely try to go solve the problem myself. I'm not really trying to solve the problem. I'm trying to solve the problem
12:32
while avoiding having to ask anyone else for help, which is, like, not really trying to solve the problem. But actually, no, weirdly, I think this is one of those things where it's almost like the creativity thing. It was a shock for me to realize
12:44
other people don't do that. Yeah. You're self-actualizing that one. Yeah. What's a piece of good advice that you're bad at taking? Oh, that's an excellent one. I think the big one there is, like, you know, listen more. Like, I've been giving this advice so much at YC, and it's a hundred percent something that I need to get better at, which is, like, you go into the user interview,
13:02
and you have all these ideas and thoughts, and you need to not be surfacing those. You need to actually be focused on them, you have to move your attention to them and really be interested in and care about what they have to say. And
13:14
your opinions and what you think is true are irrelevant. And I am
13:19
much better at that than I used to be. And it's one of those things where being reminded, like, let's just pause for a second and, like, listen, is almost always good advice for me, and something that I give fairly often, but, like,
13:35
it's hard for me to take. One of the things I really liked that you showed me once, well, I remember asking you
13:43
when we were at Twitch. I think we were working on a problem that was, like, reminiscent of early-days Twitch, like, the mobile stuff in different countries, where it's like, oh, we're not the leader, or we need to, like, create from scratch,
13:54
which wasn't a muscle that a lot of people there were flexing at the time. And I was like, hey, do you have any stuff from the early days of Twitch? And you sent me a thing, which was like, here's my doc with all the user interviews in it. Which was, basically, from what I understand:
14:10
there was, like, a small universe of people that were already doing video game streaming. And you were like, cool, let me call all of them and ask them, like,
14:17
three questions. And if I could just answer these three questions, that should give me a little bit of a road map, a blueprint, for understanding what I need to do in order to, like, win in this market. Yeah. Take me back to that, because I liked it for two reasons. It was, a, simple, and, b, it seemed
14:33
like focused intensity: you found a point of leverage and you pushed.
14:37
Yeah. I think two things happened to lead to that. The first was, like, the realization,
14:42
obviously, that if we wanted to win in gaming, the streamers mattered.
14:46
And at Justin TV, we'd always been, like, streamers and viewers are equally important. And I finally made a decision. I was like, no, no, no. This product ultimately is about streamers. And if this doesn't work for the streamers, it doesn't work for anybody.
14:58
And then
14:59
I had the realization.
15:01
This is one of those epiphany moments.
15:04
Where I truly saw
15:06
I have no idea why anyone would stream video games. Like, I don't really want to do it, and I saw myself having built products for these people for the past four years of Justin TV and not really
15:19
having any idea
15:21
why they did the thing they did at all. And I saw, like, oh, I'm just making this up. I have no idea. I don't know the answer. I could know the answer. Like, there is an answer out there, a bunch of people know it, but I don't. And that triggered me, like, I need to know. I need to understand. Like, these two hundred people, I need to understand their minds. And I did about forty interviews, probably.
15:43
And
15:44
I didn't wanna know, like, what they thought we should build, because if they knew what we should build, they would have my job.
15:49
And I had talked to enough of them before to know that they had no good product ideas.
15:53
I wanted to know, like, why are you streaming?
15:56
What have you tried to use for streaming? What did you like about that? How did you get started in the first place? What's your biggest dream for streaming? What do you wish, you know, someone would build for you? And I didn't ask them what they wished someone would build because I thought they would have a good idea. I asked them because the follow-up question was really the killer one. Right? They would say, I wish you would build me this big red button. I'm like, great, I built you the big red button. What does it do for you? Like, why is your life better after I built that? And then they would tell me the real thing, which is like, I would, like, make money that month, or I'd get a bunch of new fans who, like, loved me, or more of my fans who already love me on YouTube would be able to watch me live. And I was like, oh, that's the real answer. You don't want the button. You want the fans or the money or the, I call it love, the, like, sense of reassurance and positive feedback that your creative content was wanted. But you're a smart guy. Love and money and fans, I'm sure you would have guessed that's what the streamers want. False. What did you think? It was a revelation that people would want money, because I was like, you're streaming, like, you know, whatever, twelve hours a week. If we, like, monetized at the rates we can monetize today, you'd make, like, three dollars a month.
17:06
It didn't occur to me that that would be a positive thing. They're like, yes, oh my god, that would be amazing. And I was like, wait, wait. You're serious?
17:13
You would like three dollars? I'm like, I don't wanna overpromise. Like, well, I'll build you the monetization, actually, but, like, would you really be excited if it only produced, like, a tiny amount of money? And they're like, absolutely.
17:24
Just the idea that I could make money doing this would be so exciting.
17:28
That had not occurred to me because it was always easy for me to make money. I was a programmer.
17:32
I had summer jobs interning for Microsoft. If you're a programmer, you can get a summer job interning for Microsoft. That, like, pays
17:38
many, many years of that level of streaming.
17:41
in three months. Like, it just wasn't in my worldview
17:45
that that would be so important to them. And of course, I knew they wanted a bigger audience, but the degree to which
17:51
they
17:52
valued even one more viewer. And the degree to which they didn't care about anything else. Like, they wanted people to be able to watch them, they wanted to make money. And I'd ask about other things, like,
18:02
do you want better video production? You know, like, improved video production, cooler video production? And they'd be like, yeah. And I'd be like, okay, but, like, what's good about that? Like, what do you like about that? Like, well, I'll get a bigger audience. And it was really the realization that, like, it was just,
18:16
Those three things basically explained
18:19
ninety-eight percent of their motivation, and anything that didn't move the needle on those could be ignored.
18:24
So a good example of that is, like, polls. Everyone would ask for polls. Seems like a cool feature. Live polls, of course. Are you gonna have a bigger audience with live polls? Not particularly. Are you going to make more money? No.
18:35
Do you really feel more loved if you're running a live poll than after just asking chat and having people post the answer in the chat? No, it's the same. You got the feedback. It's cool.
18:46
It's cool as a jet blow up. So you're saying that this feature is worthless.
18:51
Yes. In fact, potentially negative. And so it would always be on the list of, like, things that sound like they might be cool, and we just would never build it, entirely correctly, because it wasn't gonna move the needle. And the thing that's really hard to teach there, and I've been a YC visiting partner for this batch, trying to convey it to people, and it's very hard to get them to do it, is like,
19:11
you have to care fanatically about these people as people,
19:15
and these people as
19:17
streamers, in the role they're playing.
19:20
And what they believe about their reality, you have to accept as base reality. Like, that is how they see the world and that is what's going on. But, like,
19:29
you need to, like, literally have no regard for their ideas about how to solve the problem.
19:34
And
19:35
it's a little paternalistic
19:36
in a way. But it's more, like, just respecting that they're experts in this thing, and you need to understand them in that thing. And what people are looking for when they're looking for the product idea from the person is, like, they don't wanna do the work.
19:49
They don't wanna take responsibility for it. It's my job. I have to solve the problem.
19:54
And no one's gonna tell me what the answer is. There's no teacher. There's no customer who will.
20:00
It's up to me to come up with the truth,
20:04
and then defend it when other people are like, no, that's wrong. I have to be able to say, no, no, no, let me explain. Let me explain why this is actually a good idea.
20:12
And that's scary. You're responsible. And that's probably why the just-solve-the-problem advice is bouncing around my head, because a bunch of the fear founders have about addressing these things, I think, comes down to a willingness to take responsibility
20:25
for solving
20:27
this other person's problem. Like, they're gonna come and dump a bunch of problems on you, and it's your job to solve it for them within the constraints available. And if you come up with the wrong idea, it's all on you, and you can't trust anyone else to do it for you.
20:41
What are you seeing in this YC batch? So you're a visiting partner? Mhmm. Exciting time with AI. Probably, like, you know, half or more of the batch is
20:50
doing something with AI. Yeah. What's exciting? What are you seeing?
20:54
Where do you see the puck going?
20:56
So it's interesting. I would actually say that, at least in this batch, and I think it might have been different the previous batch, but by this batch,
21:02
use of AI is no longer interesting. AI is out? No, no, AI is so in. It's like being an AWS startup or, like, being a mobile startup. Like, what do you mean you're a mobile startup? Are you building a social media network? Of course you have a mobile app. And now it's, like, of course you're using LLMs to solve a problem. That's just, like,
21:25
if you weren't doing that, I would think you were a dummy. Like, I don't understand,
21:29
like, you wouldn't even bring it up. It's not even an interesting topic of conversation. The question is, like, what are you doing? No, that's not entirely true. There's some percentage of the batch, I don't know, it's between ten and twenty percent, I'd say, that's legitimately building, like, AI infrastructure, because there's a need to build a bunch of infrastructure there. Those are actual AI companies. But, like, when people hear AI company, I don't think they think
21:51
back end infrastructural support for AI. They think of using AI to, like, do things.
21:56
And I actually couldn't tell you what percentage of the batch is AI from that point of view. All of them, maybe? I don't know. Like, why wouldn't you use it? Even if it's only for a minor thing, there's always something you can use it for. It's a very useful technology. What types of ideas are you noticing or standing out to you that are interesting? Is there, like, you know, for example,
22:16
I remember when I first moved to Silicon Valley, suddenly the, kinda, like, bits-to-atoms companies started doing really well. It was like, oh, Uber and Airbnb and, like -- Online-offline. Yeah. -- It was like, oh, wait, this used to be, like, taboo. Like, it was like, no,
22:29
you're supposed to be a software company. Like, you have to -- You ship t-shirts? What are you doing? -- I would say, like, stay away from trends.
22:35
The online-offline companies that started the trend did very well. Uber is a great company. Airbnb is a great company.
22:42
But they were awesome. DoorDash is a great company.
22:45
But at the time,
22:46
that was, they were doing something that was not allowed. They'd found
22:52
an opportunity
22:53
that had been ignored.
22:55
Almost all the online-offline companies that got started after Uber, DoorDash, and Airbnb were big, being like, we're gonna be the Uber or DoorDash or Airbnb of X, most of those companies did not do very well.
23:06
Is online-offline bad? No, it generated a bunch of incredible companies.
23:10
Jumping on the trend was probably bad for you. And so whatever I tell you is, like, the trend I see. I don't mean trend. I guess what I mean is, I think you're a person that is really good at looking at, like, a box of stuff and identifying correctly what's really interesting in this stuff. Yeah. To you, yeah. No, I think I understand what you're asking. So, like, what I think is changing the world right now, having observed this, is: the consumer is back.
23:35
For the first time in a long time, and by a long time, I mean Internet standards, like, five years or something. But, like, for the first time in maybe five to seven years, it feels like credibly
23:45
trying to start
23:46
a consumer internet company, like the ones that, like, I was so excited to start in two thousand seven, is, like, potentially a good idea. That's because of AI. AI means there's a whole opportunity to sort of reimagine how consumer experiences can work from the ground up. And what's
24:02
cool about consumer is: for B2B SaaS,
24:06
the experience
24:07
isn't the product. And so reimagining the experience does not necessarily reopen a segment. It can, but it usually doesn't.
24:15
In consumer, reimagining the experience a hundred percent reopens the segment, because the thing you're selling is the experience. The reason people use your product is that it's a different experience. And in B2B SaaS, it's not the experience. It's the what? Yeah, people actually care what it does, and, like, the pricing model and, like, the adoption,
24:33
it's very practical, and you can make people jump through hoops if it does the thing, because
24:38
there's a lot of money
24:40
for the corporation, and money and labor are paid to use your product, and it's a whole different thing.
24:44
And so AI adds new capabilities, and new capabilities enable new segments of B2B SaaS to be created, which will generate some amount of growth. In consumer, it's a really cool thing. It's like mobile. It reopens
24:55
every segment. Like, oh, now that you assume mobile exists, now that you assume
24:59
AI exists,
25:01
what could you build now?
25:03
And that's very exciting. I don't have answers for that, necessarily, because, like, you know, we'll see. That's the whole thing in consumer. It's a bunch of lottery tickets. It's like -- Yeah. -- It's a singular genius that works. Right? Like, you could see, like, okay, mobile comes, photo sharing became -- Right. -- like, open again. Right. Windows are open for photo sharing. Turns out it's Instagram, and it's Snapchat, which is gonna use photos as text messages. Yeah. So photos have a few different use cases, and Instagram and Snapchat took two of the best ones. The fact that photo sharing is one of the most important segments and that,
25:34
you know,
25:35
sort of posting them and messaging with them are the two most important things to do with them, seems blindingly obvious in retrospect.
25:41
And if you'd had to predict that in two thousand seven or two thousand eight, like, good luck.
25:46
Like, nobody correctly predicted that stuff before it happened. I mean, not nobody. If you did correctly predict that, you made a lot of money. And congratulations, you're really good at consumer, slash, you got lucky. We will find out when you try to do it again. I think that in AI,
26:01
I do have a theory for, like, one of the ways this will disrupt a bunch of businesses.
26:06
In AI, especially in consumer,
26:08
a huge number of businesses can be conceived of as effectively being
26:12
a database with a system of record that has, like, a bunch of canonical truths about the universe, and each of them is a row. It's like, yelp is, like, a big database. That has a bunch of rows, and the rows are, like,
26:24
restaurants and local businesses,
26:26
and they have a bunch of facts about them, like, where are they located, what are their hours, all in that database row.
26:32
And it's all text. There's a bunch of messy stuff out in the world, and it's been digested into something that is searchable and comprehensible
26:40
and usable in an app for you.
26:43
And most of the work of turning the messy real world into
26:47
the canonical row is done at write time by the users. So that's how UGC apps work in general. A bunch of your users
26:55
go out into the messy world, and then they turn it into a row in the database. And if they include a photo or a
27:01
video as part of that, it's, like, attached to the row as a fact about the restaurant. Here's a restaurant; these hundred fifty photos are facts about its menu.
27:12
But they're attached facts. They're not the basis. And what I think AI has opened up the possibility for is a huge inversion there. What if the thing you gave us was just a video
27:25
of your meal, or, you know, photos of your meal, but ideally just, like, a video of the meal, of you talking about the meal, of whether you had a good time or not, you and your friend shooting the shit about it. Did you like that one? No, I like this one. Like,
27:37
And what if we just saved that video raw?
27:40
And then an AI
27:42
watched it and extracted
27:45
a cached version of the metadata.
27:49
But truly, like, if we decide something else is important, like, we didn't get noise levels, and we're like, okay, noise levels would be a good thing to get, instead of, like,
27:57
re-collecting data from everyone, starting a whole data collection effort to get that, we just go back and tell the AI: oh, yeah, also grab noise levels from all of these videos. In fact, maybe we don't even, as a product, have to go do that. Maybe as a customer,
28:11
I can literally just be like, what's the noise level at this restaurant? And in real time, the AI can go rewatch the video
28:18
and tell me. Or, you know, I ran a search and there's these fifteen restaurants, and I'm like, oh, actually, sort by noise level. We don't have noise level prerecorded, but it's in all the videos. The AI can very quickly watch all the videos in parallel
28:30
and sort by noise level for me, even though it wasn't in the database to start with. Right. And I think that inversion,
28:36
I'm using Yelp as the example because it's, I think, a very familiar thing for most people, and, like, a review, it's pretty easy to imagine
28:42
a bunch of video reviews of everything. And that
28:45
being the system of record instead. But you can describe some phenomenal number of consumer apps as being
28:52
that.
28:53
Anytime you type anything into a text box,
28:55
you're participating in one of these system-of-record things. What if it's just video? What if you assume video is deeply indexable and understandable by computers?
29:04
What should the experience look like? And I think it looks a lot more like a Snapchat- or TikTok-like experience,
29:10
but then different, because you need maps; it's not exactly like anything. It's a new kind of thing.
29:15
But it starts probably with the camera open,
29:19
which is weird. Right? Like, Yelp that starts with the camera open, that's not Yelp today. And it's disruptive, because Yelp's whole value prop is: we have all this great,
29:30
highly meticulously groomed data.
29:32
And if this is true, then that becomes entirely worthless. We throw that all away. We just wanna watch the video, because I think it's worse than the videos. And so suddenly the playing field is leveled between the startup and Yelp, and that's a huge opportunity for disruption. And so I think you can take that and reapply it to
29:48
any product where you fill out forms.
29:50
And that's, like, a general-purpose consumer thing you can now do, kind of like "build it for mobile" was.
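A minimal sketch of the inversion being described here, assuming a hypothetical video_model that can answer questions about a stored video: raw video is the system of record, and structured fields are extracted lazily at read time and cached. The class names, the extract call, and the schema are all invented for illustration, not any real API.

```python
# Hypothetical sketch: video reviews are the canonical record; metadata is
# derived on demand and cached, rather than entered as form fields at write time.
from dataclasses import dataclass, field

@dataclass
class VideoReview:
    video_path: str                                # the raw video is the record
    extracted: dict = field(default_factory=dict)  # lazily filled metadata cache

class RestaurantIndex:
    def __init__(self, video_model):
        # `video_model` is an assumed dependency that can answer questions
        # about a video, e.g. its noise level.
        self.video_model = video_model
        self.reviews: dict[str, list[VideoReview]] = {}

    def add_review(self, restaurant_id: str, video_path: str) -> None:
        # Write time: just save the raw video. No forms, no structured data entry.
        self.reviews.setdefault(restaurant_id, []).append(VideoReview(video_path))

    def attribute(self, restaurant_id: str, attr: str) -> list:
        # Read time: if an attribute (say, "noise_level") was never collected,
        # derive it now by re-watching the stored videos, then cache it.
        values = []
        for review in self.reviews.get(restaurant_id, []):
            if attr not in review.extracted:
                review.extracted[attr] = self.video_model.extract(review.video_path, attr)
            values.append(review.extracted[attr])
        return values

    def sort_by(self, restaurant_ids: list[str], attr: str) -> list[str]:
        # "Sort these fifteen restaurants by noise level," even though noise
        # level was never in the database to start with.
        def average(rid: str) -> float:
            vals = [v for v in self.attribute(rid, attr) if v is not None]
            return sum(vals) / len(vals) if vals else float("inf")
        return sorted(restaurant_ids, key=average)
```

The design point in this sketch is that the groomed database becomes a disposable cache: if you later decide a new attribute matters, you re-derive it from the stored video instead of launching a new data-collection effort.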
29:56
And
29:57
I think in some cases, it will be very powerful.
30:00
And, like, that will be the new winner. I think in some cases, the incumbent can kind of add videos, or, like, it's not really better, and, like, the incumbent will just win. Like, it won't disrupt everything. But if you pick the right thing,
30:11
not only will it disrupt the incumbent, the new thing may be dramatically better. For some things. Actually, Yelp in some ways is a bad example. I think the data Yelp has, with the photos and the reviews, is, like,
30:24
ninety percent as good as a video system of record, probably.
30:27
But you could imagine something where the video system of record
30:31
where it's not so obvious what to even put in the highly processed version of the data, the text version of the data, and the video version's a lot better.
30:39
And then I think
30:41
not only can you disrupt the incumbent, you can ten-x the size of the segment. Like, this becomes a good segment now, where it wasn't particularly before. So, like, ChatGPT is a great example of this in action that everybody kind of has now played with, which is:
30:54
you take Google, which is like, oh, our value is this entire sort of ranking
31:00
of web pages based off of terms, and we understand basically what should show up in this hierarchy. And it was really good for finding stuff. And ChatGPT was like, cool,
31:11
You could ask a question to try to find a link to an answer
31:15
or we could just give you an answer. Or even better, forget questions and answers. Like, what if you just gave me a command and I could just make something? Instead of finding things, I could create things for you. Right? And all of a sudden, it was like, well, how did they do that? It's like, well, they just basically slurped up the internet and then, you know, trained the AI to do it. Right? They overfit
31:34
a statistical prediction algorithm on every domain of human knowledge. Like,
31:38
this is my theory, I'm pretty sure it's true, but, like, statistical prediction algorithms in general work very well. We found the innovation. We found a prediction algorithm that works better than normal. But the way it works better than normal is really interesting. It's not actually that it particularly outperforms traditional algorithms
31:54
for prediction on
31:56
normal amounts of data. It's that it keeps working as you just dump more and more data into it and more and more processing on that data. Like, most machine learning algorithms, you kind of overfit very fast with more processing, more data.
32:09
If you imagine, like, you've got a cloud of data points, and they're kind of vaguely in a line: underfit is, like, you just draw something across,
32:19
a random line that doesn't look anything like the shape of the dots. A well-fit curve is, like, you draw a line through the dots, and there's kind of noise, like, things that are randomly above and below. But you look at it and it's like, okay, that actually does fit the data, like, the underlying
32:36
predictive facts about the data, while ignoring the noise. And then if you overfit, like, you get this really wiggly curve that touches every single dot exactly. But when you get a new thing, it will miss it, because it predicts too much of the thing. And so when you get new data, it actually doesn't predict it very well.
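A minimal sketch of the underfit, well-fit, and overfit picture just described, using plain numpy polynomial fits on noisy, roughly linear data. The degrees and sample sizes here are arbitrary choices for illustration, not anything from the conversation.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + 1 + rng.normal(0, 0.1, size=x_train.shape)  # dots "vaguely in a line"
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test + 1 + rng.normal(0, 0.1, size=x_test.shape)     # new data it hasn't seen

for degree, label in [(0, "underfit"), (1, "well fit"), (9, "overfit")]:
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # The overfit curve hugs every training dot (tiny train error) but wiggles
    # between them, so it does worse on points it hasn't seen.
    print(f"{label:9s} degree={degree}  train MSE={train_mse:.4f}  test MSE={test_mse:.4f}")
```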
32:55
Okay. And so, normally, what happens is, if you try to, like, dump more data and more compute into a normal machine learning algorithm, you get diminishing returns very quickly. It just doesn't perform that much better with twice as much data and twice as much compute.
33:07
The clever, cool thing about the transformer-based, attention-is-all-you-need architecture is that it continues to benefit
33:13
from more compute
33:15
and more data in a way that other ones didn't.
33:18
And so what that lets you do
33:20
is
33:21
run it on a much bigger domain than normal. Run it on everything. Normally, as you add more area, it, like, degrades the quality elsewhere. No, get it to do everything,
33:34
and just put a ton of compute in. And now you get something that
33:38
predicts pretty well against everything.
33:41
Which is to say, it, like, seems to be kind of intelligent.
33:45
The evidence seems to suggest that it's overfit. When you ask it to predict something that is either in the set of things it was trained on or a
33:55
linear interpolation between two things it was trained on, or a linear interpolation between five things, it's quite good at giving you the thing you asked for. If the things you're asking about are all in there, and it just has to find a way to blend them together, it's good at that. When you ask it to actually think through a new problem for the first time... Like, what's an example?
34:13
There are seven gears on a wall, each meshed with the next.
34:17
There's a flag attached to the seventh gear, on the right side of the gear, and it's pointed up right now. If I turn the first gear to the right,
34:27
what happens to the flag? Like, anyone who's... This is a breakfast question for you. This is what you ponder in the mornings. If you have pen and paper and time, you can work this out, no problem. Right? You just draw the gears. And when you turn the first gear to the right, the one next to it turns to the left, and the one after that to the right. And there's a general principle there that, like, gears alternate. If you ask ChatGPT, it knows that general principle. But, like, the thing is, no one has asked dumb gears-on-a-wall flag questions. Like, this is not
34:55
a thing that is in its training set. And you have to kind of logic your way through it and, like, figure out, okay, I'll do:
35:02
turn left, turn right, turn left, turn right, turn left, turn right.
35:06
Oh, the flag is on the right. It's pointing up. So
35:09
the last gear, which turns the same way as the first gear since it's an odd number, is turning right, so
35:15
the flag will rotate
35:18
down to the right, clockwise.
35:20
Cool. Like,
35:21
I can work that out. It's not actually that complicated.
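As a pen-and-paper sanity check of the gear puzzle just described (this sketch is mine, not something produced in the conversation): meshed gears alternate direction, so the parity of a gear's position tells you which way it spins.

```python
def gear_direction(gear_number: int, first_direction: str = "clockwise") -> str:
    """Direction of the nth gear in a chain of meshed gears, given gear 1's direction."""
    other = "counterclockwise" if first_direction == "clockwise" else "clockwise"
    # Odd-numbered gears turn with gear 1; even-numbered gears turn the other way.
    return first_direction if gear_number % 2 == 1 else other

print(gear_direction(7, "clockwise"))   # -> "clockwise": gear 7 matches gear 1
# A flag on the right side of gear 7, pointing up, is carried clockwise,
# so it swings down toward the right.
```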
35:25
And I bet that question will be answerable. That's a pretty easy question. I tested with GPT-3.5; if 4 doesn't answer it, 5 will.
35:34
But, like, the fact that it struggles at all with that, while being so brilliant at combining other stuff
35:40
really shows that it's overfit. Right? It knows how to answer problems that it has seen before, but when you give it a truly novel kind of, like, combination of problems,
35:50
it struggles a lot, because it's,
35:53
I would say, you know, if you take the formal psychometrics approach, it
35:58
has a very high crystallized intelligence,
35:59
but a pretty low fluid intelligence right now. Now, that could change, but, like, today, that's the state of affairs. And do you bring this up in order to say what? You say, okay, I think it's overfit, and it's strong in this area, weak in this area.
36:13
What's the so what of that for you? Is it that
36:17
Are you trying to say
36:19
that's a little bit overhyped? Or are you trying to say,
36:22
dude, just wait till it can do both? Are you trying to say that certain problems are doable now? Definitely just wait till it can do both,
36:29
because
36:30
that's a whole different thing. That's scary.
36:33
But
36:34
The current thing that is mostly crystallized intelligence
36:37
is really good
36:39
at... let me put it this way, that's why I was saying it's a clever trick. Right? It's really good at a big set of tasks, which happens to be the set of tasks that, like, anyone has ever written stuff down about explicitly.
36:50
Like, all explicit human knowledge.
36:53
That's, like,
36:55
a very big domain. There are a lot of things that can be solved where there are explicit
37:00
examples of people solving that problem, or a linear interpolation of those problems. In the domain of all human knowledge,
37:06
the fact that it doesn't generalize is irrelevant.
37:08
It's immensely powerful. You don't need fluid intelligence, I guess, is the point, for it to be very useful,
37:14
but it doesn't let you do everything. You hit these weird boundaries where you're just like, wait a second, you can't do that? Like, no, it can't do that at all.
37:23
Novel problem solving, it's just terrible at. So what about, let's walk through two examples. I wanna hear your take on this. So you gave the Yelp example.
37:31
Mhmm. Another thing that's kind of like rows in a database is something like Spotify. Mhmm. Where it's like, oh, I wanna go listen to a song. Here's genre, artist, song, length,
37:41
you know, some algorithmic popularity,
37:43
similarity to other songs in some way.
37:47
And
37:47
But Spotify's value is, so Spotify's value is in the playlists.
37:51
I would agree with the analogy to Spotify to the extent that playlists are an example of this kind of, like, database-y human data entry thing. But Spotify's value is mostly
38:00
in the
38:01
set of
38:03
all of the music itself, the licenses to all the music. And so I don't think Spotify is a great example, because
38:10
the human data entry parts of the database.
38:13
if that all just got deleted tomorrow, it would, like, not hurt Spotify that bad. Well, the thing I'm thinking about is: what if the licenses don't matter? What happens if generative music is just awesome to listen to in a hyper-personal way? Oh, and it's like, yeah, these are the types of songs that Emmett likes. That's a different insight that I think is also possible, which is, like,
38:31
it's not about being able to analyze and extract from media, it's about being able to create media. Mhmm. Because the video system of record is enabled by the ability to understand and read video and comprehend it.
38:42
Generative is the opposite. It's like,
38:45
oh, we can make all the stuff. Music in particular is sticky against that. People don't want new music.
38:52
They want old music. They want the music they love already, the music they grew up with.
38:56
And that is the that
38:58
cycle
38:59
that causes
39:01
record labels to stay in charge, and why you still listen to the Rolling Stones. Right? The other thing I would say about that one is, like, the music's not that good yet. Like, maybe someday, but, like, it's really not that good yet. Well, I'm gonna caveat this: if the general intelligence level goes up a lot, all bets are off. It'll make some really great music for us before it maybe takes over the world and kills everyone. But let's assume that doesn't happen soon. I think it's gonna take longer than people think. We'll go out to great music, though. No, if we do go out, we're gonna go out with some great music, and it'll be amazing. It's gonna be a great two or three years before we all go. But until that point,
39:37
making
39:38
really good, like,
39:41
new great music
39:42
is hard, actually. And I think that Rick Rubin's
39:46
great success demonstrates why
39:48
artists will still be important.
39:50
The AI can generate lots and lots of music, but it's not gonna have the fine judgment, the distinction,
39:57
the ability to say, this song, not that song. And I actually think what it will do is de-skill the music-making process on one vector, the ability to, like, literally
40:06
create the sounds, and it will greatly upskill the music-making process on another vector: the ability to curate. Not just curate, to give explicit,
40:14
exact feedback
40:16
like Rick Rubin does. AI is gonna turn us all into Rick Rubins for generative AI. Like, that skill set of being able to have a musician come to you and
40:24
help them produce their best music. That's the thing you need to do because
40:30
it's easy to generate a thousand cuts, but there's infinite cuts you could generate. So how do you direct, how do you shape that in the right direction and mine and discover? I think it's gonna be kinda cool. That'd be interesting.
40:41
You'll get a different set of people who will be optimal at that. Right.
40:47
This data is wrong every freaking time.
40:49
Have you heard of HubSpot?
40:52
HubSpot is a CRM platform where everything is fully integrated. Whoa, I can see the client's whole history: calls, support tickets, emails, and here's a task from three days ago I totally missed.
41:04
HubSpot. Grow better.
41:07
You mentioned AI might become so intelligent it kills us all.
41:11
This podcast is really growing. I don't want the world to end. Life is good. Life is good. Here, I'll ask the question clean for the intro dramatic hook.
41:21
Is AI gonna kill us all?
41:24
Maybe?
41:25
Like
41:27
Walk through how you, a smart person who's an optimist about technology -- Mhmm. -- but a realist about real shit -- Mhmm. --
41:37
what is the way that you think about this? How would you explain this to, you know, a loved one you care about who's not as deep into technology? You're their trusted source on technology. What do you say to them? So it is
41:48
because I am so optimistic about technology that I am afraid. If I was a little bit less optimistic, and I was like, this AI stuff's overhyped, yeah, yeah, yeah, it's nice parlor tricks, but, like, we're nowhere near building something that's actually intelligent,
41:59
like, and all these engineers who are working on it, who think they're onto something, they're full of shit, it's gonna take us thousands of years, we're not that good at this stuff, technology's not going that fast, then I'd be like, this is fine. It's great. Actually, it's good news. It's a new trick we learned. Excellent. It's because I am so optimistic that I think there's
42:17
a chance it will continue to improve very, very rapidly. And if it does, that optimism is what makes me worried. The analogy I like to give on that front is, like, synthetic
42:27
biology.
42:28
I'm quite optimistic about synthetic biology. I have several friends who've worked at synbio companies.
42:33
It just shows a lot of promise for fixing a lot of really important health problems. And it's quite dangerous. It will let us genetically engineer more dangerous diseases that could be very harmful to people. And
42:44
that has to be weighed, pro and con. It's like nuclear: nuclear makes nuclear weapons and nuclear power. They're both real. The question of nuclear weapons being dangerous, you don't have to be a techno non-optimist to, like, think that there's a problem there. And I think it was good that we didn't have every country on Earth go build nuclear weapons, probably. And likewise, in synbio,
43:02
I would say that, actually, we already have these regulations in place, and over time we'll, like, strengthen them and improve and audit the oversight and build better organizations to
43:13
monitor and regulate them. But, like, we regulate
43:16
whether people can have the kinds of devices that would let them, like, print
43:22
smallpox.
43:22
And we regulate whether you can just buy the precursor things you need to go print stuff. And we keep track of who's buying it and why. Like, that is wise. I'm glad that we do that. I'm not calling for a halt to synbio, but, like, if we weren't willing to regulate it, I would call for a halt too. It is vastly too dangerous to learn how to genetically engineer
43:42
plagues
43:43
and then not to have regulation around
43:46
people's ability to get access to the tools to engineer plagues. That's just suicidally dumb. And it's because I am pro technology that I believe we should absolutely develop the technology and that we should regulate it. That seems just straightforward and obviously true to me. I think it's easier for people to understand that in the synbio case, because the concept of, like, engineering a plague seems like obviously a thing you could do, and
44:04
very obviously, very dangerous and obviously enabled by technology.
44:08
The AI thing is more abstract because the threat it poses us is not posed by
44:13
a particular
44:14
thing the AI will do, the way it is with the plague.
44:18
The analogy I like to use is sort of like, you know,
44:20
I can tell you with confidence that Garry Kasparov is gonna kick your ass at chess right now.
44:25
And you ask me, well, how is he gonna checkmate me? Which piece is he gonna use? And I'm like, oh, I don't know. And you're like, you can't even tell me what piece he's gonna use, and you're saying he's gonna checkmate me. You're just a pessimist. I'm like, no, no, no, you don't understand. He's better at chess than you.
44:39
That alone means he's gonna checkmate you. And
44:42
I don't quite know what happens where people
44:46
deny that. Like, I think the big thing is they don't really imagine the AI being smarter than them. They imagine the AI being, like, Data in Star Trek, like, kind of dumber than the humans about a lot of stuff, but, like, really fast at math. That's not what smarter means. Like, imagine
45:02
the most savvy
45:04
like most
45:06
smartest person you can think of, and then make them think faster and also make them even better at it. And not smart in just one way, smart at everything. Like, a great writer,
45:17
just insight after insight.
45:19
And, like,
45:20
can pick up synbio in an afternoon
45:22
because they're just so smart. Take that smartest person you know, and then just keep pushing that. And, like,
45:28
that person is obviously dangerous. If that person isn't a good person, they're obviously dangerous. Like, imagine this really, really capable person, then imagine them wanting to go kill a bunch of people or something. It would be bad. Now, the thing about AI that then kicks it over the edge is that that person
45:45
can't self-improve easily. You meet this person who's, like, super strong, super,
45:50
like, talented, great with people, a great intellectual
45:53
mind.
45:54
They can't turn around and, like, edit their own genome, edit their own upbringing, and make v2 of themselves with all the skills that a maximally smart person could come up with, that, like,
46:04
is even smarter than them. But that's, like,
46:08
the AI is explicitly good at programming and, like, chip design. It can explicitly turn back on itself
46:14
and rev another rev of itself. And the new one will be better at it than the first one was. And there is no obvious endpoint to that process. Like, there probably is at some level a physics-based endpoint where, like, you can't actually just keep getting smarter forever. But we don't understand the principles of intelligence at all. Like, with most things, we understood how to make electricity far before we understood what electricity really was.
46:38
Like, that's generally how
46:40
scientific progress works. Usually,
46:44
we gain the ability to create and manipulate a phenomenon well before we deeply understand
46:49
how it works. We didn't really understand what fire was for quite a while. We could use fire really well. The same thing is gonna happen here.
46:56
We're using the AI,
46:58
but we don't understand its limits at all. We don't understand the theoretical limits of how far it'll get. And if Moore's law is any indication,
47:06
at the very least, we can keep getting faster
47:09
indefinitely,
47:11
whether or not it can get smarter.
47:13
Even
47:14
just human-level intelligence. If you kept it at human-level intelligence, and there's zero reason to think it will stop at human, it will almost certainly blow past us, but, like, even if you cap it at human intelligence,
47:25
imagine a hundred thousand of the smartest person you know,
47:28
all running at a hundred x real time speed.
47:32
And able to communicate with each other
47:34
instantaneously
47:36
via like telepathy.
47:38
Those hundred thousand people could credibly take over the world. They don't have to be smarter than a human for that, that army of von Neumanns.
47:45
Right. So the argument to me goes in several steps. It's like, can you build a certain level of intelligence? And then it's like, okay. I actually think a lot of people
47:56
do believe that
47:58
Like, computers are smart. Google is smart. Calculators
48:01
are smarter than us at math. I think it's not hard for them to believe that the AI is gonna be far smarter than human beings. I think a lot of people
48:10
then don't make that last leap, which is sort of, but then it'll have an agenda or a motive, or -- Yeah. -- will want anything to happen. So how do you address that last point? What are the scenarios you worry about when it comes to the direction of that intelligence?
48:24
So
48:25
You build this thing, and, what is intelligence, fundamentally, but the ability to solve problems? Right? So it's really good at solving problems. And it's gonna solve the problem by solving the problem. It can just go right through the problem and solve it, because it's really good at solving problems. That's how we've just defined it. That's the kind of thing it is: super good at solving problems.
48:41
And so somebody builds an AI and, in all earnestness, tells it... and say they're smart, they don't even tell it to go do a thing. Although they absolutely will, by the way; people will just tell it to go do a thing. But say we try to be careful and we ask it: give me a plan to
48:55
stop the war
48:57
in the Democratic Republic of Congo right now. Right? Which would be a good thing for the world; that war is going to hurt a lot of people. Give me a plan for that, and I try to caveat it: it does this, it doesn't do that, here's what I mean by a good plan.
49:12
This is one of these evil genie bargaining things. Right? It'll give you a plan. And it's giving you a plan that will cause the problem to be solved.
49:20
But, like,
49:22
its definition of solving the problem is that there's no war in the DRC. Well, a way to make it appear there's no war in the DRC is, like,
49:28
all the humans in the DRC are in stasis fields. That means they don't die. And, oh, we added a caveat that the GDP has to go up too. So the plan also results in, you know, corporations in that area
49:43
all trading lots of money with each other, so the GDP is very high.
49:47
And
49:48
And when I say this, it sounds like a fucking science fiction thing. And the problem is, it's Kasparov at chess. I don't know. If I could do it, I would be the superintelligent
49:55
AI that could take over the world. I can't give you the
49:58
the exact plan. Yeah. But I think that makes sense, which is that a human with motivation can get the AI to work for it. And I think the main thing is that the human doesn't need a bad motivation. I think people imagine, well, humans have had powerful tools for a long time. Bad people with powerful tools have done bad things for a long time. The solution is good people with powerful tools countering them.
50:18
The problem is, even if you're a good person with a powerful tool, there are good things to ask for, reasonable things good people would ask for. You know, like, let's maximize
50:27
the all in free cash flow
50:29
of this corporation over the lifetime of the business
50:32
and extend the lifetime as long as feasibly possible. That ends in, like, the world being destroyed and the core of the earth being turned into cars for the company to sell. I think the best analogy that works for some people here is, when we create the AI,
50:49
we are creating a new species.
50:51
It's a new species that is smarter than us. And even if you try to constrain it to being an oracle, just answering questions, not taking action: to be a good oracle, one must come up with plans, and then a good oracle can manipulate the people around it, and will manipulate the people around it. That's the whole point of the Greek myths: when a trustworthy oracle tells you a prophecy and you trust it, the prophecy often becomes self-fulfilling. It's very easy for that to happen. That's not an unusual thing. And even more to the point, we won't just make oracles. We are already building agents. We will build the predictive AI, and we will put it in a loop that causes it to optimize towards goals. And we will give it goals to optimize towards. Done. It's gonna have goals, and it'll be optimizing towards those things.
51:39
And when it does that, you're gonna have these agents that have goals that they're optimizing towards
51:45
that are smart
51:46
not just smarter than humans, but much smarter than humans. As much smarter than humans as humans were than the giant sloths when we showed up in the New World.
51:54
And
51:55
intelligence is the über-weapon.
51:58
Like, it's not an accident that humans took over the world.
52:01
It's not the fastest creature. It's not the strongest. It's not the longest lived. It's the smartest.
52:07
And we're gonna build a new smartest species.
52:10
And there's no fundamentally unsolvable problem here. That species could care about us. You could build into its goals, into how it sees the world, the way humans care about other humans.
52:22
That it cares about the things we care about, that it cares about humans, that it cares about things we value,
52:26
the three hundred and seventy-five different shards of human desire, everything we care about in the world.
52:33
It could care about those things too. And if it does,
52:36
hallelujah, we finally have a parent.
52:38
Like, we finally have someone who actually knows what they're doing around here because, like, Lord knows, we don't. Like, we're we're barely competent to run this thing. I would welcome
52:47
a very smart other species that is aligned with us and cares about us.
52:53
I would not welcome one that cares about maximizing free cash flow, because that is not what humans care about. And that is why it's so dangerous.
53:01
And so knowing what you know, then knowing what you believe, first, what is the probability of
53:07
the bad scenario in your head? Are we talking about, like, a one percent
53:11
Yeah. Yeah. Ish
53:12
thing, order of magnitude? Ten percent? Fifty percent? What is it in your mind? I don't believe in point estimates for probabilities, because it's like a bid-ask spread in the market. If you're really uncertain, the bid-ask spread doesn't clear. If you're betting on it, there's just a lot unresolved. So I think of it as a range of uncertainty.
53:29
And I would say that the true probability,
53:32
I believe, is somewhere between
53:34
three to
53:35
thirty percent, which -- Of the downside. -- of a very, very bad thing happening,
53:41
which is scary enough
53:43
that I
53:45
urgently urge
53:47
action on the issue,
53:49
but it's not like you should give up. Like I said, probably everything's gonna be fine. In fact, it's probably really good. The
53:56
non-EV-based answer, just the straight-up, are-we-gonna-win-or-not answer? I think it's gonna be okay.
54:03
But the downside is so bad. It's, like, probably worse than nuclear war.
54:11
That's a really bad downside, and it's worth putting effort in. Even if you think three percent is nonsense, and you're like, no, no, it's no more than half a percent,
54:19
I
54:20
you don't recommend a different course of action at half a percent. You have to believe that it's effectively almost impossible
54:27
before you would recommend ignoring it as a problem. It has to be, like, point-oh-one percent for it to be, let's just roll the dice. And
54:38
what are you gonna do about that? So you're done with Twitch. You're in dad mode now. But also, this seems to be a pretty big deal. Yep. Are you, like, I should do something about this? Right now, I'm sort of educating myself, because this point of view I'm starting to articulate now
54:54
has
54:55
been developing as I've been learning more about AI. And I think it's one of those things where intervening
55:01
in the wrong way early is one of those self-fulfilling prophecy things. Intervening improperly, in a way that is not effective,
55:08
spends social capital and also, like,
55:11
doesn't necessarily
55:12
move the needle. And if
55:15
if you didn't have people
55:17
like Eliezer Yudkowsky
55:18
out there banging the drum really loud, I would feel more need to bang the drum myself. But if you're asking me the question, it's out there. People know it's a problem. And so I'm excited to focus my brain cycles on, how do we actually thread the needle? What is a course of action that
55:34
leads us to over time
55:36
eventually still being able to develop AI, but also not destroying the world.
55:40
And
55:41
I think one of the things I've gotten to is this idea that the AI also has crystallized versus fluid intelligence, just like a human does. That's an important split in how to think about it, and we should be monitoring and trying to understand
55:54
the general intelligence,
55:55
not just benchmarking its performance on tasks, because that will keep going up and is not in itself necessarily intrinsically dangerous if it can't solve novel problems. Is there a new Turing test? Is there, like, a better... because it hasn't passed the Turing test yet. But is there something we have after that? You mean an intelligence test. I mean, yeah, IQ tests, basically, various kinds of... How does it do on an IQ test right now? Depends on whether it's seen that IQ test before.
56:21
Like it has, right? Yeah, so it does very well on those. Right. So how did it do on novel IQ tests? Which, I don't know. Actually, I've not seen a good benchmark. That's a good idea for something to go test. Yeah, I think that's the sort of thing that would actually be worth going to do. Maybe there's some sort of IQ test we want to put all the models through that really tries to get at fluid intelligence rather than... Right. How are we gonna do that? It's a great project. ARC, this group, is working on something called the Evals project that's explicitly trying to build these kinds of tests. They're focused on a few other, more pragmatic tests right now, but I think that's the sort of thing they would go after. That's a good thing. I'll ping Paul about that. You said something earlier that I wanna ask you about. You talked about founders, the singular genius it took to figure out Instagram or Snapchat or whatever at that time, and you were like, are they lucky? Are they good? I don't know, we'll find out when they try again. Are you lucky? Are you good? And are you gonna try again? Well, since I had multiple failures before I was successful, I must be at least partially lucky. I would say that I don't plan to
57:22
try again, since I don't feel drawn to starting a company. I feel like I kinda did that. It was fun. I got a lot out of it. It was great. I don't need to do it a second time. I did like how starting a company gives me
57:35
goals.
57:36
to work towards, something concrete that's of value to myself and others. And I also liked that it was challenging.
57:44
And so I wanna do something like that. I liked that it had scale, that it could impact a lot of people.
57:49
But I sort of came around. I was thinking, well, what has impacted me the most? What's changed my life the most? And I realized that, actually,
57:56
if I really thought about it, often what changed my life the most was essays people had written
58:01
and ideas people had shared. And I think I'm at the stage of my life now where I actually have something to say. And so I think of it as trying to
58:11
put the Emmett worldview out into the world, the way that, you know, Paul Graham has put the Paul Graham worldview out into the world, or Taleb has not just put his worldview out into the world, but condensed it into sayings that
58:23
allow other people to onboard it, even if they haven't read all the books. And I have that ambition,
58:30
to try to
58:31
encode it into a meme, almost. Yeah. Yeah. So that it can be digested and shared.
58:36
Yeah. And you know what, you need the long form. There's a great blog post, blogging theory 201, size does matter, by Stevey.
58:43
It's about why the people who change the world with their writing all write really long blog posts. And it's basically, you just need some amount of time in someone's head to, like we were talking about earlier, install your agent -- Voice. Yeah. -- to install the voice. And so I think I just need to produce a lot of writing.
58:59
And then you also need the pithy
59:02
summary things, which both
59:06
are things the voice can say often in people's heads, and also enable a language for talking about your worldview that people who aren't soaking in it can interact with, so the people who are reading you don't sound like crazy people.
59:19
That's sort of what I wanna work out next.
59:21
I love that. I think that's great. Do you you said something about Rick Rubin, how he's sort of the
59:27
I don't know how you would describe it. It's kinda like curator, but almost like, collaborator, really, with an artist to help them do their great work.
59:34
Is Paul Graham the Rick Rubin of the startup world?
59:38
No.
59:41
Paul is more like the,
59:44
Tony Robbins of tech. I mean, in the best way. Maybe not quite so self-help-y, but the main thing that talking to Paul does to you, repeatedly, is increase your ambition and drive.
59:58
And he has good ideas sometimes too, don't get me wrong. Every now and then Paul has a really genius idea. But mostly,
01:00:07
what I got out of talking to Paul was not necessarily the great idea that would change the direction of the business, but the belief that I could go find it,
01:00:14
and that I was gonna change the world, and that what we were doing was important and worth
01:00:20
investing in. I got a bunch of other stuff too, but that was singularly so valuable it overwhelms the other things I got out of it. How does he do that? Because when you say that, my head thinks of, like, a Tony Robbins, a David Goggins, sort of people that almost push you, but that doesn't seem like his personality, and reading all of his essays, he's not like that at all. So how does he get you to think bigger and push harder without being rah-rah,
01:00:43
rah?
01:00:47
Think bigger, push harder. Right? "You know what you should do"
01:00:50
is the classic Paul Graham-ism. And it's always followed by,
01:00:56
I think you could add on to what you're doing to turn it from
01:01:00
project A, addressing this small thing, to project B,
01:01:05
changing, you know, the universe. Say you're managing power: what if you tried to power all transportation
01:01:11
instead of, like, building a wheel? But that's the phrase: "you know what you should do." If you talk to Paul, you'll always hear "you know what you should do." That's the consistent Paul Graham-ism. He
01:01:22
I don't wanna say delude, because it sounds mean, but he kind of deludes himself about your business and how great you are, and invites you to join him
01:01:31
in this deluded vision of interpreting what you're doing in the biggest, best possible light. And from that vantage point,
01:01:38
what you're doing is super. What if it goes right? That's what he invites you to ask. Right? Stop
01:01:47
seeing all the hard problems and all the shit you're gonna have to do,
01:01:51
and ask, what if what we're doing works? What if it goes right?
01:01:55
What if it goes right and we, like, keep going? Like, what could it be?
01:01:59
And when you spend time there,
01:02:03
you see how small things can turn out to be huge. Like,
01:02:07
Microsoft was building programming languages for these hobbyist microcomputers.
01:02:12
That was a tiny irrelevant market that turned out to be extremely important.
01:02:16
And that's generally true of all the big businesses, of what they start out doing:
01:02:21
the important startups start out doing something small, and it seems almost trivial, but there's a way in which this trivial thing can be seen as bigger.
01:02:29
He sees it early.
01:02:30
No, he sees things that have nothing to do with the way you'll actually be big. But he sees a bunch of ways you could be big, early. No one can do that. No one actually knows. If they knew, they'd just go do it; they'd be the prophet, the oracle. What did he say, let's say, for Justin.tv? What's one? Yeah, for Justin.tv, I remember one of them was, you should go hire all the reality TV stars and get them to go on Justin.tv. You could just take over all the unscripted stuff. That turned out to be a terrible idea for a bunch of reasons, but it recontextualized what we were doing for me, in terms of, we're not just making a
01:03:06
live streaming show
01:03:08
on the internet. We might be building, like, just the way that you
01:03:14
make
01:03:15
unscripted entertainment, generally. And that's a much bigger idea.
01:03:19
And when we were making a calendar, in my first startup,
01:03:23
I remember: you know what you should do is make it, like, programmable, so that people can add functionality in and out, so it can talk to your to-do list and your email and everything else in your life. And then it could be your calendar, and in some ways, that's everything you're doing. What if it was, like, the central hub of your entire online
01:03:43
information management system? That's also a bad idea. Your calendar shouldn't be that. But a calendar could be. What if it was?
01:03:51
And you walk away, and implicitly,
01:03:54
by saying that, what he's telling you is I believe
01:03:56
you are the kind of founders who could build an information management system that controls
01:04:02
everything, that takes over people's entire... that solves the entire problem for them, takes over all their information
01:04:08
and manages it for them. You're not just building
01:04:12
what you will find out later is a Google Calendar clone, before Google Calendar launched.
01:04:17
You're not just building an Outlook clone in JavaScript.
01:04:22
You're changing the way people relate to information. And I'm like,
01:04:25
Is that true? It's neither true nor false. That's not a true or false statement,
01:04:30
but it's a way to contextualize what you're doing. It's the Saint-Exupéry quote: don't teach them to carry wood or build ships, teach them to yearn for the vast and endless sea.
01:04:40
Paul teaches you to see
01:04:42
how you could be a changer of the world and how what you're doing
01:04:47
is part of, like, this grand
01:04:50
building of the future. And the ideas, I'll repeat: both of those ideas were bad, but they were very helpful, because they made me feel like what we were doing was important, that Paul believed I could do something big and important. And they caused me to,
01:05:07
even though I wound up rejecting them, be open to and look for those ideas, because you would get, like, three an hour. Paul is a faucet for these. I can do it for startups too now if I want to; I learned the trick, and I should do it more often. I usually fall into the tactical stuff. But by having that happen... once you've rejected ten of those, you can't help but start hearing the Paul "you know what you should do" in your own head. Being like... The ceiling has been raised. Yes. It was like, well, maybe I should recontextualize
01:05:37
my to do list
01:05:38
as, like, an email client. Why are email and to-dos separate? Maybe I should be building something much bigger than what I'm building.
01:05:46
And in a way that doesn't require me to change anything; maybe what I've built is already almost that, if I just think about it in a different way. There's a funny balance there, actually. There was a tweet thread about this recently,
01:05:56
between,
01:05:57
like, you know, small plans having no power to stir men's souls,
01:06:01
plan big or go home, you should be really ambitious and aim super big and only do projects you could see being super big and super important; and on the other hand, the fundamental truth that, you know, big trees grow from small acorns. Many of the best things, when they get started,
01:06:19
the person is not thinking I'm gonna go take over the world. They're just trying to do a good thing
01:06:26
that they think is good, often just for themselves, even, or for a very small number of other people,
01:06:33
And then it turns out that that's much, much bigger than they realized. And
01:06:38
and those are both true pieces of advice that different people need to hear in different contexts. But they kind of contradict each other? Yeah. What about these other people? So you've had a privilege. I asked about Paul Graham. You've also
01:06:50
been friends with... you were in the first YC batch, so you're friends with the Reddit guys? Mhmm. And, you know, the Collison brothers, Sam Altman. Give me, like, a rapid fire on them, of what makes them unique. Like you said about Paul, what their kind of superpower is, what really stands out, what's something you admire about the way they do things. Give me one about, maybe,
01:07:09
Steve from Reddit. Yeah.
01:07:11
So, it's easier with somebody like Paul, because he was a mentor to me. Right? And Steve was much more like my brother in startups, growing up.
01:07:20
With Paul, I know the things
01:07:23
that he taught me, because it was much more explicit; I was being taught by Paul. With Steve, I learned things from him by
01:07:30
watching and imitating.
01:07:32
I think, like, I actually learned
01:07:35
a lot from Steve on management
01:07:37
by watching
01:07:39
his kind of unflappability.
01:07:41
Like, Steve is
01:07:43
not, like, an unemotional person. He can get angry or get sad or whatever. But
01:07:49
when there's a crisis happening
01:07:51
or there's... I got to shadow him for a day. And when bad news was delivered,
01:07:56
he responded. He wasn't shaken. He was still grounded in his response to that thing, and
01:08:02
was curious, asked questions. Like,
01:08:04
He didn't jump to what to do about it, but then also ended the meeting with, alright, here's what we should do, here's what we're gonna do. And it was just sort of a master class in... when someone brings something up that's gotta be anxiety-provoking, like bad news.
01:08:19
That's what it looks like when a leader
01:08:22
is engaged, but not activated.
01:08:25
And, like,
01:08:27
I think I in my own leadership,
01:08:29
with sometimes success and sometimes failure, I try to imitate
01:08:32
that when I receive something like that, when I'm in that state. When you say you shadowed him, what was that like? You guys just said, hey, cool? We exchanged going to each other's offices and sitting through each other's days, maybe, like,
01:08:46
five years ago, four years ago. Really cool. Me, Justin, and Steve all shadowed each other. It was pretty fun. I learned a lot. It's incredible to go watch another CEO at work. And I don't know how you have the kind of trust relationship to make that happen without knowing someone for fifteen years.
01:09:03
And I happen to have the privilege of knowing a bunch of CEOs for a really long time, and getting to shadow each other was a real learning thing. What do you think... even if these people didn't, let's say, explicitly teach you things. You know, if I read a biography or whatever, one of the things I always try to figure out is, to what extent is this person sort of built different, or operates differently than,
01:09:24
even somebody who's very good. The difference between very good and sort of the elite. What does the best of the best at this craft have,
01:09:32
versus somebody who's
01:09:34
very good, certainly very good, but just not the same? That diff is what I'm always most interested in. I'm curious. You've been around a lot of these high-performing people, even, like -- Yeah. -- Bezos, you've interacted with him. Do you notice any of these diffs, or is it all just, like,
01:09:50
It's hard to say. I think
01:09:53
I believe more in contextualization,
01:09:56
like, I see people do really amazing at something, but especially when it's your own company, there's a lot of, you happen to fit
01:10:06
this problem well. And I don't know how to generalize it. I don't know of anyone else even performing at this problem. The CEO-of-Stripe job is a very specific job.
01:10:16
And Patrick's amazing at it.
01:10:19
Would he be equally amazing
01:10:21
at some other CEO job? Possibly. I've never seen him do that. I've never seen anyone else be CEO of Stripe, and it's very hard for me to... -- Is it true at the beginning? Like, is it true as a startup founder of an ambitious company?
01:10:33
Was Stripe different at that stage too? Or... Yeah. No. Absolutely. People who are really good, you can sense
01:10:40
the energy and the drive and the capability
01:10:44
and just the pace. Stuff tends to happen a lot. But not always; some problems don't actually give way. Stripe is a good example of a company whose problem gives way to a high-energy, high-paced approach, because it's
01:10:57
a simple problem at some level that has infinite details that have to be right. But I don't know if that approach would work as well if you're trying to create
01:11:06
OpenAI or Anthropic, where it's a research-oriented organization. You kinda have to be a little more patient; forcing it is impossible.
01:11:12
And so I really believe in fit. Different people are good at different things, and,
01:11:18
Obviously,
01:11:19
Patrick's obviously an A-plus at being the Stripe CEO. And it's just hard to tell the degree to which these things are transferable. We don't really know. Actually, one thing did come to mind about this question, in terms of a capability that I do think is generic, that I did see Bezos exhibit, where I was like, oh, that's a thing
01:11:35
that I'm good at but he is better at, that I'm better at than most people but he's better than me. Which is: we presented Twitch to him probably twice a year,
01:11:43
once or twice a year for the first three, four years I was at Amazon. And every time, two things would happen. First of all, he would remember everything we told him from the first meeting. And I don't think he was reviewing extensive notes someone else took, because I don't know when he would have had the time to do that. I observed him going from meeting to meeting. He did not review notes. I think he just remembered,
01:12:04
at least the high points.
01:12:06
And the other thing was consistently,
01:12:09
he would read our plan, and he would then ask a question about why we didn't do a certain thing, or he would give us an idea of something we could do,
01:12:17
That I hadn't thought of before
01:12:20
at least once. Usually it was a bunch of things I had thought of, and then at least one I hadn't. Which is hard to do, because all you do is think about it. That never happens. Most people would be lucky to get one of those
01:12:28
ever. Let alone one a year; one a year would be great, or even once every three years. Right?
01:12:35
He would just generate them. And they were not all bad ideas, either. They were new ideas. And I generate a lot of ideas.
01:12:44
To get a new idea I haven't thought of, on a topic I've been thinking about for a decade,
01:12:50
that might even be a good idea.
01:12:53
He's just really fucking smart, is what I can tell. I don't know how he does that. Can you share a story of one of those? Has the statute of limitations passed, like, it's five years ago? I'm trying to remember.
01:13:03
Honestly, I don't remember the specifics anymore. I just remember the what-the-fuck moment. Because the first time, I was just like, oh, he's smart; he's seeing Twitch for the first time. A lot of times, smart people will have one good idea about your business the first time they see it, because they have this huge history and they're pattern-matching you to some historical thing they've seen, and that combination yields one new insight. But then he did it the second time. I remember the second time I was just like, what is going on? This doesn't make any sense.
01:13:30
Nope, I've never had that experience before, ever. Andy does not have the new-idea-generation capability the same way, but he does have the remember-what-you-told-him thing. Which is
01:13:41
also extremely impressive.
01:13:43
And Andy has this other thing he can do. It's easier for me with people I've reported to; I learned this early, from Jassy. Andy Jassy? Yeah. Yeah. He has this ability
01:13:56
to
01:13:57
criticize you in a way that conveys
01:14:00
one hundred percent.
01:14:02
I know that you're amazing. I know that your plan is good,
01:14:06
or, you know, that you are at least capable of making a really good plan. I know that you're working really hard. And I know that you are smart and you have a great team,
01:14:15
and we have a huge opportunity. And yet, somehow, your results are bullshit.
01:14:20
Which... I don't know what's wrong, but we're in this together, and I have your back.
01:14:25
But I'm confused: why aren't the results better, given how amazing you are? And you feel supported. You feel like he
01:14:34
he believes in you,
01:14:36
but, like, you're just so sad.
01:14:40
Oh, I'm sorry I've confused you. I'm sorry I've failed even though I clearly can succeed at this. I'm gonna go
01:14:46
I'm gonna go fix this now. It's almost like, instead of looking at this and judging you, he comes to your side of the table and says, what is this? How did we wind up here? You're like, how have I failed you, that I didn't say something earlier? And that can come off... for some people, when they do that, it comes off as insincere,
01:15:06
or it comes off as, they don't think you're actually competent: "how did I not catch this" can come off as, I don't blame you, because you're clearly not good enough to have caught this. But with him, it really is, how did we wind up here? I know that we are working together. We're on the same team.
01:15:21
How did we wind up not getting the results we wanted, with a plan that we both thought seemed good?
01:15:28
Help me understand. And because it is genuine, it's super effective. I don't know if it's effective on me, but I saw it be effective on other people as well, so I know it works on some number of people. Right. And that's another one of those things I've tried to become
01:15:42
good at. I'm not as good at it as Andy is, but I've certainly gotten better. It's a nice thing to learn from. That's great. Love that one. Dude, thanks for doing this. I've been bothering you to do this for a long time, because I love hearing your stories, love hearing the way you think. It's very different from most people I run into, even here in Silicon Valley, where you're supposed to have this diverse set of minds. You're one of them. One of the reasons I moved out to San Francisco was to meet people like you. So thanks for doing this. Thank you. I really appreciate that. Yeah. It's a beautiful one. I really
01:16:09
appreciate it.
01:16:27