Best of 2023: 15 Untold Stories From Tech’s Inner Circle

It’s been a wild year for The Logan Bartlett Show, starting with a rebranding and ending with our 50th episode of the year. We’ve pieced together the 15 most unforgettable moments of 2023, including our favorite wacky untold stories from tech’s inner circle and the most profound insights of the year.

(0:00) Intro

[00:00:00] quite catastrophically wrong on the scale of human civilization.

Who? Sam. What's he do? He runs this AI... HR software guy? Yes, that's right. That's right. The people building this stuff do not appear to be taking it anything remotely like what I would call seriously. The earth might get destroyed, but first there'll be some great tech companies. I see you on Twitter fucking 24x7.

Yeah, I see you with all this podcast. Yeah, like for instance, how many deals have you done in the last 12 months? Welcome to the 2023 recap episode: some of our favorite clips over the course of 2023, including operating advice from folks like Dev Ittycheria of MongoDB. I automatically assume if someone's been in a job for two years or less... As well as Daniel Ek at Spotify.

You'll also hear from very, very temporary OpenAI CEO Emmett Shear. Every startup that wins, like 98 percent, is too early, because if you're not too early, you're usually too late. [00:01:00] Finally, you'll hear some funny stories, including one from Matthew Prince about the early days of building Cloudflare, which included servicing a number of Turkish escort websites.

There were so many crazy customers in the early days, we have like 250 Turkish escorts that had signed up. Thank you everyone for listening in over the course of 2023, and we look forward to seeing you in 2024.

(1:30) Satish and Scott (Partners, Redpoint) roasting Logan for becoming a media personality

I'm really concerned here a little bit, um, about your performance, um. Is this, this is actually a performance review? Yeah, this is, you know, I mean, I see you on Twitter fucking 24x7. Yep. I see you with all this podcast. I see you all doing all kinds of stuff. When do you fucking do work? Yeah, like for instance, how many deals have you done in the last 12 months?

Zero. Yeah. Doesn't surprise us. Yeah. Does not surprise us at all. It depends on how you define deals. I wouldn't be able to do any, [00:02:00] Scotty, would you be able to? I wouldn't be able to do any deal either if I was doing all the things you are. Trying to become a media celebrity, a rock star.

Instead of really focusing on his job. I, uh, the good news, the good news in that is I think any deal that I would have done over the last year would be marked down significantly. So that's the way I justify it. Isn't that, isn't that the case with all the other deals you've done as well?

(2:22) Matthew Prince (CEO, Cloudflare) exposing Cloudflare’s 1st customers: Turkish escorts

There were so many crazy customers in the early days, so many strange things.

Turkish escorts. Turkish escorts. We had, we had a, um... You cornered that market for a while, right? There's a good story, uh, in that. But, I mean, even before that, all these weird corners of the internet made us learn how to make it easy, today, to serve, you know... we're a third of the Fortune 500, and somewhere between 20 and 25 percent of all of the web.

The internet was built on some of these, like, corner cases. And I mean, we could probably say porn: a large portion of the internet was built on the infrastructure to [00:03:00] support the porn industry. Did you have the vision, or did Michelle have the vision, or whatever, that actually saw through to... I saw an example that the CTO of Salesforce used on a personal blog and then took it to Salesforce.

I mean, did you see that through line the whole time and have the confidence that this was going to turn into, uh, the opportunity for enterprises? Fundamentally, we thought that Cloudflare's business was: if you could see enough of the internet, you could have better data on both the good guys and the bad guys. If you could see as many people as possible doing legitimate transactions online, that's a signal that when they go to do something else online, in an anonymous way, you can still say that's a legitimate consumer, versus somebody trying to launch some sort of an attack.

And so we always knew that just being able to gather as much data as possible was really the key to how our business worked. And the Turkish escort [00:04:00] story, um, the way that worked: there was a bell that would go off in our office every time someone would sign up, and we ran over to the computers and looked at what it was.

And I remember it was the moment I was like, wow, we really have got to get that employee handbook in writing. Because all of a sudden, it was a Turkish escort site, which looked exactly like what you would imagine a Turkish escort site looks like. Which was no big deal, except then there was another one, and then another one, and another one.

By the end of the first week, we had like 250 Turkish escorts that had signed up, and there were eight of us sitting in Palo Alto, and it's like, how did they even hear about us? And what had happened... we finally got one of the webmasters on the phone, merely because we were curious. And the webmaster said, oh my gosh, thank you so much, you've solved this enormous problem for us.

Um, you may not know this, but Turkey is this complicated country. As you go further west in the country, it's relatively European, it's relatively cosmopolitan. It doesn't love what we do, but tolerates it. But as you go further east, it becomes very, very conservative, very, very Muslim. They see what we're doing as just an absolute threat to the underlying kind of way of life [00:05:00] of more conservative Turkey.

And so we suspect that someone who lives there is launching these attacks to knock us offline, and that it's basically a political statement. We didn't have any way of stopping it before, so we would just go offline. Cloudflare came along, and you stopped it. By the way, there's not a lot of money in Turkish escorts, and we don't have credit cards that we can pay in US dollars.

So we just signed up for your free service, but thanks so much for the service. And our systems behind the scenes, you know, we would call it machine learning today, I guess we'd call it AI, would classify these attacks, and they got classified as the "TE" attacks, for the Turkish escort attacks. And they bubbled up.

And almost exactly a year later, I got a call from a very frantic Dutch gentleman who was calling from Baku, Azerbaijan. And, um, he said, you have to help. It was like six o'clock at night on a Thursday night. And, um, we'd moved up to San Francisco, and so I happened to answer the phone. He said, you have to help.

The contest is [00:06:00] tomorrow, and all of our systems are offline, and we don't know how voting is going to work. I said, what contest? And he said, Eurovision. And I grew up in Utah, so I had no idea what Eurovision was. Um, but it turns out now I do: it's the largest non-sporting event by viewership in the entire world.

And it's basically American Idol, but with nationalism built into it. Europe just shuts down for this contest every year. And all the European countries, plus, um, strangely, Australia, participate in this thing, and they pick the winners, and, um, whoever won the previous year hosts it. In this particular year, the host was Azerbaijan, who had won the previous year.

They were down to the five or six finalists, and one of them was transgender. And there was an Iranian, actually a student, who sent out a threat to Eurovision saying: this is an insult to the Muslim country of Azerbaijan. [00:07:00] I am going to wipe Eurovision off of the internet. And he launched a series of attacks.

Um, I again had no idea what Eurovision was, and so I was like, yeah, it's late here. Just sign up for the $20-a-month plan. You'll be fine. Click. And the next morning I came in, and we had these French engineers whose eyes were like saucers, staring at the screen, saying, do you have any idea who signed up last night?

And I was like, oh yeah, the Eurovision guy. And then they're trying to explain to me what Eurovision was, but what was amazing was, behind the scenes, our system just kept flashing TE, TE, TE. And lo and behold, it turned out to be the exact same style of attack, and the exact same attacker.

He was launching this. So it wasn't actually someone living in eastern Turkey; it was someone in Iran. And we kept Eurovision online, and it went through. What's interesting is that that student then caught the attention of the Iranian military. He now runs offensive military [00:08:00] cyber operations for Iran, and about a year after that, launched an attack against all of the US financial institutions.

Consumer-facing financial institutions. We got called in. That's how a lot of the big financial institutions became our customers. But we wouldn't have had the data to be able to protect against that if we hadn't accepted that sort of not particularly attractive, you know, early customer. And today, I'm sure there are still lots of Turkish escorts that use us, but importantly, so does Eurovision.

We powered all the online voting for Eurovision this year, and so do a lot of the biggest U.S. financial institutions in the world.

(8:37) Dario Amodei (CEO, Anthropic) predicts the future of AI

Yeah, so, you know, what I'll say is, at least to my knowledge, no one has trained a model that costs billions of dollars today. Um, people have trained models that cost, I think, on the order of $100 million.

Um, but I think billion-dollar models will be trained in 2024. And my guess is that in 2025, 2026, several-billion-dollar, maybe even $10 billion, models will be trained. There's, you know, there's [00:09:00] enough compute in the industry, and enough ability to build data centers, that that's possible. And, you know, I think it will happen, right?

If you look at what Anthropic has raised so far, at least what's been publicly disclosed, um, you know, we're at roughly $5.5 billion or so. We're not going to spend that all on one model. Um, but, you know, we certainly are going to spend multiple billions of dollars on training a model sometime in the next two or three years.

Where does that go? It's almost all compute. It's almost all GPUs or custom chips, um, and, uh, you know, the data centers that surround them. 80 to 90 percent of our cost is capital, and almost all our capital cost is compute. Um, you know, the number of people necessary to train these models, the number of engineers and researchers, is growing, uh, but that cost is absolutely dwarfed by the [00:10:00] cost of compute.

You know, of course, we also have to pay for, like, the buildings people work in, but that again is some tiny fraction of what the cost of compute is. Maybe ending on a, uh, on an optimistic note here: we touched on a bunch of, like, the potential medical breakthroughs and things like that, but why should people be optimistic about what Anthropic's doing, about the future of AI, and everything that's going on?

Yeah, I don't know. So I'll answer the question in two ways. I mean, one, I'm optimistic about solving the problems. I mean, I am getting super excited about the interpretability work. Like, people didn't necessarily think this was possible. I still don't know whether it's possible to, you know, really do a good job interpreting the models, but I'm very excited and very pleased by the progress we've made.

I'm also excited about just, you know, the wide range of ways we've been able to deploy the model safely, like the wide range of happy customers who just say, you know, this model has been able to solve a problem that we had. It solved [00:11:00] it reliably. We haven't had all of these safety problems, and where we have, we've managed to solve them.

We've deployed something safely in the world. It's being used by lots of people. That's great. That's one level of great. And I think the second level of great is this thing you alluded to, with, like, you know, medical breakthroughs, mental health breakthroughs. Like, I think, you know, energy breakthroughs are already doing pretty well.

But, you know, I imagine AI can speed up material science very, very substantially. Um, so, you know, I think, if we solve all these problems, a world of abundance really is a reality. I don't think it's utopian, given what I've seen the technology is capable of. And, you know, of course, there are people who will look at the flaws of where the technology is right now and say it's not capable of those things.

And they're right, it's not capable of those things today. But if the scaling laws that I'm talking about really continue to hold, then I think we're going to see some really radical things. [00:12:00] Um, you know, one of the things, you know, it's not a complete trend, but, you know, I think as we gain more...

You know, mastery over ourselves, our own biology, um, you know, the ability to manipulate the technological world around us, uh, you know, I have some hope that that will also lead to a kinder and more moral society. Um, uh, you know, I think in many ways it has in the past, although not uniformly.

Why don't you like the term AGI? So, I liked the term AGI 10 years ago, um, because, you know, no one was talking about the ability to do general intelligence 10 years ago, and so it felt like kind of a useful concept. Um, but now, I actually think, ironically, because we're much closer to the kinds of things AGI is pointing at, it's sort of no longer a useful term.

You know, it's a little bit like if you see some object off in the distance on the horizon, you can point [00:13:00] at it and give it a name. But you get close to it, and, you know, it turns out it's like a big sphere or something, and you're standing right under it. And so it's no longer that useful to say "this sphere," right?

It's, you know, basically kind of all around you, and it's very close, and it actually turns out to denote things that are quite different from one another. Um, so one thing I'll say, I mean, you know, I said this on a previous podcast: I think in two to three years, LLMs, plus whatever other modalities and tools that we add, are going to be at the point where they're as good as human professionals at kind of a wide range of knowledge work tasks, including science and engineering.

Um, that would definitely be my prediction. I'm not sure, but I think that's going to be the case. And, you know, when people kind of commented on that, or put that on Twitter, they said, oh, Dario thinks AGI is going to be two to three years away. Um, and so that then conjures up this image of, you know, swarms of [00:14:00] nanobots building Dyson spheres around the sun in two to three years.

And like, of course, this is absurd. I don't necessarily think that at all. Um, again, the specific thing I said was: there are going to be these models that are able to, on average, match the ability of human experts at a wide range of things that they can do. There's so much between that and, you know, the superintelligent god, if that latter thing is even possible, or even a coherent concept, which it may be, or it may not be.

You know, one thing I've learned on the business side of things is that there's a huge difference between a demo of a model doing something versus this actually working at scale and being able to actually economically substitute. There are so many little interstitial things. It's like, oh, the model can do 95 percent of the task.

It can't do the other 5 percent, but it's not useful for us unless we're able to substitute in AI end-to-end for the process. Or it can do a lot of the task, [00:15:00] but there are still some parts that need to be done by humans, and it doesn't integrate with the humans well; it's not complementary; it's not clear what the right interface is. And so there's so much space between "in theory, can do all the things humans can" and "in practice, is actually out there in the economy as a full co-worker for humans."

And there's a further thing of, like, can it get past humans? Can it outperform the sum total of human, say, you know, scientific or engineering output? That's, you know, another point. That point could be, you know, like a year away, because the model gets better at making itself smarter and smarter, or it could be many years away.

And then there's this further point of, like, okay, you know, can the model, like, explore the universe and send out a bunch of, like, you know, von Neumann probes, and, you know, build Dyson spheres around the sun, and calculate that the meaning of life is 42 or whatever. Um, you know, that's [00:16:00] a further point that also raises questions about, you know, what's practical in an engineering sense, and all of these kind of weird things.

So that's like another further point. It's possible all of these points are pretty compressed together, because there's, like, a feedback loop, but it's possible they're very far away from each other. Um, and so there's this whole unexplored space, where, like, you say the word AGI and you're smooshing together all of those things.

Um, I think some of them are very practical and near-term. And then I have a hugely hard time thinking about, like, does that lead very quickly to all the other things? Or, you know, does it lead after a few years, or are those other things not as coherent or meaningful as we think they are?

I think all of those are possible. So it's just kind of a mess. We're kind of flying very fast into this glob of concepts and possibilities, and we don't have the language yet to separate them out. We just say AGI, and, I don't know, it's just [00:17:00] kind of a buzzword for a certain community, or a certain set of science fiction concepts, when really it's pointing at something real, but it's pointing at, like, 20 things that are very different from one another.

And we badly need language to actually talk about them. What do you think happens on the next major training run for LLMs? Um, so my guess would be, you know, nothing truly insane happens in any training run that happens in 2024. I think all the good and bad stuff I've talked about, you know, the ability to really invent new science, the ability to cure diseases, the ability to make bioweapons, and maybe someday the Dyson spheres... the least impressive of those things, I think, will happen.

Um, you know, I would say no sooner than 2025, maybe 2026. I think we're just going to see [00:18:00] in 2024 crisper, more commercially applicable versions of the models that exist today. Like, you know, we've seen a few of these generational jumps. I think in 2024, people are certainly going to be surprised.

Like, they're going to be surprised at how much better these things have gotten. But it's not going to quite bend reality yet, if you know what I mean by that. Uh, I think we're just going to see things that are crisper, more reliable, can do longer tasks. Of course, multimodality, which we've seen in the last few weeks from multiple companies, is going to play a big part.

The ability to use tools is going to play a big part. Uh, so, you know, generally these things are going to become a lot more capable. They're definitely going to wow people. Uh, but this reality-bending stuff I'm talking about, I don't expect that to happen in 2024. How do you think the analogy of a brain breaks down for large language models?

Yeah, so it's actually interesting. This is, you know, being a former [00:19:00] neuroscientist, one of the mysteries I still wonder about. So the general impression I have is that the way that the models run and the way they operate is really interesting. Um, I don't think it's all that different.

Um, you know, of course, the physiology, all the details, are different. But, I don't know, the basic combination of linearities and nonlinearities, the way they think about language... to the extent that we've looked inside these models, which we have, with interpretability, I mean, we see things that would be very familiar in, you know, the brain or a computer architecture. You know, we have these registers, we have variable abstraction, we have neurons that fire on different concepts, um, again, the alternating linearities and nonlinearities. And just interacting with the models, they're not that different.

Now, what is incredibly different is how the models are trained, right? If you compare the size of the model to the size of the human brain in synapses, [00:20:00] which of course is an imperfect analogy, uh, the models are something like still maybe a thousand times smaller. And yet, they see maybe a thousand or ten thousand times more data than the human brain does.

If you think of, you know, the number of words that a human hears over their lifetime, it's a few hundred million. Uh, if you think of the number of words that a language model sees, you know, the latest ones are in the trillions, or maybe even tens of trillions. Uh, and that's, you know, like a factor of ten thousand difference.
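The factor-of-ten-thousand claim here is simple back-of-the-envelope arithmetic, and it checks out; a minimal sketch, using the rough figures Dario cites in conversation (a few hundred million words heard versus trillions of training tokens, not precise measurements):

```python
# Rough figures from the conversation; treat them as order-of-magnitude
# assumptions, not exact measurements.
human_words_lifetime = 300_000_000       # a few hundred million words heard
llm_training_words = 3_000_000_000_000   # latest models: trillions of tokens

# Ratio of training data seen by a model versus words heard by a human.
ratio = llm_training_words / human_words_lifetime
print(f"A model sees roughly {ratio:,.0f}x more text than a human hears.")
```

With "tens of trillions" of tokens in the numerator, the ratio would land closer to 100,000x, which is why he gives the range "a thousand or ten thousand times more data."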

Uh, so it's as if neural architectures have some universality to them, you know, there are lots of variants of them, but they have some universality, um, and yet somehow we've climbed the same mountain with the brain and with neural nets in some very different way, along some very, very different path.

And so, you know, we get systems that, when you interact with them... I mean, there's still a hell of a lot they can't do, [00:21:00] but I don't see any reason to believe that they're fundamentally different or fundamentally alien. But what is fundamentally different, and what is fundamentally alien, is the completely different way in which they're trained.

Do you think about percentage chance of doom? Yeah, I think it's popular to give these percentage numbers, and, you know, I mean, the truth is that I'm not sure it's easy to put a number to it. And if you forced me to, it would fluctuate all the time. Um, you know, I think I've often said that, you know, my chance that something goes...

You know, really quite catastrophically wrong, on the scale of, you know, human civilization, it might be somewhere between 10 and 25 percent, when you put together the risk of something going wrong with the model itself; with, you know, people or organizations or nation-states misusing the model, or it kind of inducing conflict among them; or [00:22:00] just some way in which society can't handle it.

That said, I mean, you know, what that means is that there's a 75 to 90 percent chance that this technology is developed and everything goes fine. In fact, I think if everything goes fine, it'll go not just fine, it'll go really, really great. Um, again, this stuff about curing cancer... I think if we can avoid the downsides, then this stuff about, you know, curing cancer, extending the human lifespan...

Um, you know, solving problems like mental illness. I mean, this all sounds utopian, but I don't think it's outside the scope of what the technology can do. So, you know, I often try to focus on the 75 to 90 percent chance where things will go right. And I think one of the big motivators for reducing that 10 to 25 percent chance is...

You know, how great it'll be; it's trying to increase the good part of the pie. Um, and I think the only reason why I spend so much time thinking about that 10 to 25 [00:23:00] percent chance is, hey, it's not going to solve itself. You know, I think for the good stuff, you know, companies like ours and like the other companies have to build things, but there's a robust economic process that's leading to the good things happening.

It's great to be part of it. It's, you know, great to be one of the ones building it and causing it to happen, but there's a certain robustness to it. And, you know, I find more meaning... when this is all over, I think, you know, I personally will feel I've done more to contribute to, you know, whatever utopia results...

Um, if I'm able to focus on kind of, you know, reducing that risk that it goes badly, or that it doesn't happen, because I think that's not the thing that's going to happen on its own. The market isn't going to provide that.

(23:47) Dev Ittycheria (CEO, MongoDB) gives 3 steps for holding people accountable

Now I want to hear your three steps for holding people accountable. Yeah, so the first step is you've got to be very clear on what you want: for a project, for the role, in this period, whatever. You've got to be very clear: here are my expectations. And so a simple example [00:24:00] would be, say, I expect you to send me a report every Friday on the week's activities.

I'll just make it a trivial example. And the first Friday, you don't send it to me. And then I see you on Monday, and I say, hey, Logan, where's that report? Oh yeah, I forgot about it, I'll get it to you. And now, you know, then you send it to me, and I go to you and say: Logan, listen, I may not have been clear why that's important to me, but I use those reports to really understand what's going on in the business.

You and your team play a really critical role, and it's not just make-work. This is really important to me. So what I'm really doing is making the problem my fault for not making clear to you why I'm asking you to do this. And then I say: now, are you clear, Logan?

Does everything make sense? And you go, yes, I got it. Now, the next time you forget to send me that report, I can hold you accountable. So you can't hold someone accountable if you don't do steps one and two. Most people get terrified of holding people accountable because, one, they're not clear on what they want, and two, they've not been giving clear feedback when there have been [00:25:00] missteps.

And when you don't give feedback, you're basically sending a message, not just to Logan but to everyone else: what I want is not really that important, there's not really a culture of discipline, I am okay letting people cut corners. And that by itself sends a message that maybe this is not as high-performance an organization as it could be.

And so going through that three-step process, at least for me, makes it much easier to hold people accountable. I've heard you say that recruiting is like pipeline management: once you stop it, even for a bit, it dries up. How much of your time do you spend recruiting today? Today, not as much. Obviously, for senior-level recruiting, I spend a lot of time.

We just hired a new CTO, so I was obviously intimately involved in that recruiting process. And I was pleased that I actually beat my own ambitious goal by a month, but that's a different subject. But obviously, it all starts with recruiting. And if you can't hire the best people, then nothing else really matters.

In my mind, a leader has to essentially do three things: one, recruit the best team possible; two, develop them, to get them to do what you want them [00:26:00] to do; and three, make sure they consistently meet and exceed their commitments, right? So recruit, develop, and execute. But it all starts with recruiting.

And I think McMahon has said this before: if you hire mediocre people but do a great job in development and are really maniacally focused on execution, you'll still have a mediocre outcome. If you hire great people and do an okay job in development, an okay job in inspection, you can still have a pretty good outcome.

So it all starts with recruiting. Um, the point about the pipeline is that, like a sales pipeline, a recruiting pipeline dries up pretty quickly. So you're talking to candidates, you're screening candidates, but then if you say, you know what, I'm not going to put focus on recruiting, those candidates will slowly wither away and go somewhere else.

So then you say, oh my God, I need to hire a hundred people next quarter. You can't suddenly, magically produce a hundred qualified candidates very quickly. So constantly being in the market and talking to people, even if you don't have roles to fill, getting a sense of who's great, being out there selling the message and the value propositions of [00:27:00] MongoDB, is important for us, because we're planting seeds.

Sometimes we need to harvest those seeds very quickly. Sometimes we're planting seeds because we know in the future we'll want those people to consider joining us. One of the things I've heard you say about your interview process is that you inspect how thoughtful people were in switching between jobs. Why is that important? How do you go about thinking about it?

How do you go about thinking about it? To me, the most important professional decision you can make is what company to go to, right? So when I look at a resume, I try and understand why did they go from company A to company B? Or why did they even start a company A? Why did they go to company B? And then maybe C or D.

I'm trying to understand the decision process. And what I'm really trying to assess is how thoughtful they were in making that decision to go from company A to company B. Because, as I said, that's a very important personal decision for that person. And if that person is pretty cavalier: oh, I followed my boss.

Oh, I got a call from a recruiter and they offered 20 percent more. Then I'm thinking to myself: okay, those might be interesting signals, but that's not a very thoughtful process. And then I'm thinking to myself: how [00:28:00] thoughtful are they going to be when they're in that job, making day-to-day decisions about what products to build, what marketing programs to execute on, how to prosecute, you know, sales deals, et cetera?

And so that, to me, is a really high-quality signal about the kind of person they are. Whereas conversely, if they're incredibly thoughtful in saying: I was really happy here, but I saw this new emerging trend, I was really intrigued by this new technology trend, I talked to five or six different companies.

Of those five, six companies, this company seemed the most interesting. I cold called them, got a bunch of meetings, and then they ultimately offered jobs. And now that's a person who's incredibly thoughtful. I automatically assume if someone's been in a job for two years or less, they failed, unless there's clear evidence that there was another reason why they left.

They either got married and had to move to another city, or there's some personal event in their life, or the company got acquired and they shut down that operation, whatever. Why two years? Because in most companies, it takes about a year to figure out that someone's not working out and then another year to get rid of them.[00:29:00]

So if you've been at a company for less than two years, unless you can clearly convince me why, I automatically assume you failed. My philosophy on that is you can usually fool the people above you. You can sometimes fool the people around you. You can never fool the people below you. And the biggest tell for me if a leader is not scaling is how frustrated their team is becoming.

Are good people starting to leave? Is the quality of the new people that the leader's recruiting no longer A's, but now B's and B minuses and C's? That to me is a huge tell about whether or not a leader is scaling. Because the people who work for you clearly know how effective you are, clearly know the quality of decisions you're making, clearly know if you're being a hindrance or an asset in terms of enabling them to get their jobs done.

And so that will show up very quickly to them. And so that to me is the best tell. And in fact, when I joined MongoDB, one of the first things I did was review all the organizations. And I tried to understand who was in these teams, what the level of turnover was, and, uh, the quality of the people in those teams.

And that [00:30:00] gave me a great map about where the problems were in the organization.

(30:03) Jack Altman (CEO, Lattice) joking about growing up with Sam

How annoying is it to have someone with the last name Altman out there in the press all the time making all this noise? Have you ever met him? Who? Sam. What's he do? I hear his name a lot. Uh, he runs this AI... Oh, he's like a brother of that HR software guy?

Yes, that's right, that's right. That guy. Yeah. Yeah, that's right. What, what, what is the most annoying thing, not about Sam specifically, but about Sam Altman, the CEO of OpenAI? There was a funny, just the most classic brother thing to do, when I tweeted about our HRIS. He quote tweeted it, like, I don't know what an HRIS is, but happy for you.

And somebody in his comments is like, you know, uh, like how brutal is that to, you know, build a big company or whatever and have your brother invent AGI. And I was like, eh, it's funny, but I don't know. Like, I'm really proud of him. It's really cool. Like, it's like such an important thing for the world and he's earned [00:31:00] it.

Like he's worked so hard to make it happen. And I see all the work he does to keep this going and to, like, keep pushing it. And like, look what it's done for tech. Like, where would we be right now without it? Like, what would be happening in tech in 2023 without AI? Venture capitalists wouldn't know where to put money.

Where would they put money? They wouldn't, it would be such a weird time. And it's not just about, like... honestly, to me, it's not even just about what's happened in VC. It's like, it's possible that what's happening with AI is going to be so much bigger than, like, SaaS getting a big booster pack. Like, it's just so cool.

So, like, you know, is there ever a moment where I'm like, ugh, God, I, like, just can never do enough? By comparison, maybe, but, like, it's really awesome. And I'm very proud. Does it, uh, scare you at all, in a weird way, that, like, someone you shared a house with growing up is now, I don't know where he would rank on, like, most powerful, influential people.

It's high, at least at this exact second. Super high. I think it's, [00:32:00] it's good. Cause I thought what you were going to say is, am I scared about AI? Well, we can talk about that too. And like, I am maybe a little bit, but like the good thing is, I'm, I'm glad that I know the person who is at the front of a lot of this.

Like that I know his character, so I'm like, well, at least if someone's doing it, this is a good person to be doing it. So that quells some of my just, like, general AI fears. In terms of him being, like, super powerful, I've kind of just, you know... it's like if you, like, watch your kid grow up, and you just, like, see them get a little bigger every day, you, like, don't notice it.

He's been doing well for a long time. I've like gotten used to it. Yeah Yeah. I mean, obviously in the last year it's changed. Yes. To a new degree. It's gone to a new degree. I, uh, he does seem weirdly, um, thoughtful, cerebral, like not self serious in a way. I forget. I sent you the quote where he was like, well, my sensibilities are that of a Midwestern Jew.

Yeah. I was like, that's funny. That's a good joke. That's right. I like that. Yeah. We got to get him on this podcast. Yeah, [00:33:00] that's a good question. I hadn't thought of that. I'll call him.

(33:24) Daniel Ek (CEO, Spotify) on money and happiness

Now back to the show, I literally went from kind of having no money at all, uh, basically just trying to survive to six months later, having more money than I knew what to do with.

And actually I'd read somewhere that if you had 5 million, you were economically independent for the rest of your life. And by that time, I'd accumulated more than that. And you're 22, right, at the time. Yeah. I was exactly 22. This is the, this is the dream, right? You've made it. You should be as happy as, as anyone, this is what everyone aspires to.

Yeah, exactly right. And, and, and, [00:34:00] and for me, it was like even more so because I, I literally came from the projects in Sweden, like I had no money at all. I was growing up with my single mom. And on top of that, I had this winter, sort of around 2005, where things were really rough. Like I was basically just trying to survive paycheck to paycheck and doing whatever jobs I could to pay my rent, basically.

And so obviously I had this kind of, uh, dream of one day, maybe I can get to this level. And I thought maybe if I work really hard, I can get there, if, you know, if I'm lucky, by 50, and then I'm going to retire and so on and so forth. And obviously here I was, 22, and I'd, I'd reached that milestone. And, um, and, and it sounds perhaps a little bit sort of obnoxious to say it, but as a society we're fighting to get to that limit of economic independence, uh, but no one actually talks about what happens when you get there.

Um, instead you're just hearing from these rich [00:35:00] people, how like money doesn't mean that much, uh, and so on. Well, it certainly does if you have none, I can tell you that. Um. And, and they don't really teach you what to do and what, what actually matters. So it feels like a very foreign concept, um, where you're like, um, you know, you're, you're thinking that they say that out of a sort of false humility when, when in fact, you know, uh, they're probably very happy about having it.

And they probably are to a certain extent, but to a certain extent they were right. But no one taught you that. And, and for me, um, you know, as a funny sort of side note, I actually learned how to speak English through watching MTV, and, um, and, and my favorite program at that time was Yo! MTV Raps, just the rap part.

Um, and so you watch MTV Cribs and you watch kind of all of these, like, great places that, that these rappers are living in and rock stars are living in. And, and it's always these kind of well-filled fridges [00:36:00] with, like, all the drinks in different colors and, like, all that stuff. Um, and, and so, uh, I was like, you know, I made it, this, this is the life.

So now I'm finally going to be able to get to the really cool nightclubs. I'm going to get all the girls that I could never get beforehand, and I'm going to be one of the cool people. And I, I thought literally for a while that that would make me happy. And so I spent a bunch of time doing that. Spraying champagne on people, um, in central Stockholm.

I, um, uh, you know, uh, tried to hook up with the girls I could never get before, somewhat embarrassingly. And some of them, you could actually succeed with, um, but I realized, um, that, you know, they didn't want me for me. Um, they wanted me for the status that it provided. And because I probably, uh, offered them free champagne and, and, uh, hung around the cool people.

And I realized many of these people probably wouldn't be there unless I had money. And, um, it got me to be pretty depressed [00:37:00] because, again, I didn't get into tech to get rich in the first place. Um, and, and I, um, you know, it wasn't one of the things that made me happy, um, either. But tech in itself made me happy.

Music made me happy. And so I, I went on a, uh, number of months of soul searching of what I really wanted to do. And I realized, up until that point in my life, there had basically been two interests in my life. One was music and one was technology. And, um, I, um, um, I was thinking about that. And, and during the same time I met my co founder, uh, and we started spending more time together because he had just left the company that acquired my other company.

And we started hanging out together, and he had done an IPO and had made even more money than I made, um, by, by an order of magnitude more. And, uh, he too was quite depressed because he too didn't have anything really that tied him up. And so we talked about maybe we should do something new.

Maybe we should do something, um, that, um, you [00:38:00] know, is different. And what we centered around was really these three core concepts. Um, you know, we wanted to do something that we thought, um, you know, we could be passionate about. Uh, we wanted to do it with people we could learn from. And three, we wanted, um, to, um, have fun while doing it, uh, and have a positive impact on the world.

Those were the kind of three things. Um, and he said, what, what, what are you really passionate about? And I said, well, really, I'm passionate about music and technology. Those are my two kind of main passions. And he said, well, why, why don't we do that? And I said, well, that's a really dumb idea. Um, because, you know, it's music, there's piracy, it's really hard and so on.

And then he kept asking why, you know, okay, well, why is it hard? Uh, why is piracy such a big problem? And, um, basically through this series of whys, uh, we eventually said, well, maybe if you did this, you could [00:39:00] sort it out. Well, why isn't someone doing it that, like, that way? And I said, well, probably the labels are going to be very hard.

Okay, well, why is that? If they're losing so much money, uh, as it is, shouldn't they be interested in doing it? And eventually we centered on the idea of Spotify for all of those reasons. Hey, one question I had for you about the early days of Spotify, and I'm not sure how much this is, like, a metaphorical story versus a, uh, a real one, but I've heard about you sleeping outside of record labels' offices.

And I'm wondering, like, quite literally, were you inside their building and, and, uh, in front of their doors? Is this like one of those stories that gets retold and it's, it's like, uh, maybe an allegory, but was it actually like, how did that, how did that actually come to be? No, I, I didn't actually sleep outside of, outside of their office, offices.

Um, it's more of a sort of expression, uh, and a sort of mental, uh, analogy, as you said, or, or allegory. [00:40:00] Um, but, but the, the, just to paint the picture, I literally slept in hotels where the, the wallpaper fell down on me and there were cockroaches everywhere, like not one of these finer five star hotels or even four stars.

I think I had minus one star hotels. Um, so, and, and, uh, there are funny stories from the beginning where I and the guy who helped negotiate the licenses, we even shared a bed, um, and, and, uh, bed covers, because literally... and he was snoring horribly, so we didn't sleep very much. Um, so, so that's, uh, sort of painting the picture. But what was true is I literally sat outside of the office waiting for opportunities to arise where I could come in and pitch, because they wouldn't take the meetings.

Um, and so I took whatever slots I could get, and the best way was to sit, uh, they had a Starbucks just outside of, of, [00:41:00] of, uh, Broadway, in this case, uh, in New York, and I literally sat, uh, outside of that, um, Starbucks, uh, which was right on the border. And, uh, I befriended, uh, the assistants, by giving them enough free coffee and, and, uh, free, uh, gifts and, uh, and other things, whatever I could do to bribe them basically, uh, so that they would tell me when, when the CEO was about to leave or about to enter, or if they had sort of a spare moment, where I sort of just by accident would be next to the coffee machine as they were getting the coffee.

So, um, it was directionally true, although it wasn't technically true that I slept outside of the office.

(41:42) Matt Mochary (CEO Coach to Sam Altman, Brian Armstrong, Naval) shares his hiring and firing frameworks

Maybe touch on the zone of genius and how that relates to this, and sort of finding it, as a methodology that you talk about. Well, that's it. The thing is the zone of genius. There are four zones. There's a zone of incompetence, which is things that you're not good at; you should outsource them to someone else, like fixing your car.

You probably don't know how to do that. You should have a mechanic do [00:42:00] that. There's a zone of competence: things you could do, but you don't love them, and someone else can do them just as well as you can, like cleaning your bathroom. Zone of excellence. Now, this is a dangerous one. This is where you're really good at it.

You don't like it, but you're really good at it, and people value it a lot. They're willing to pay you a lot of money for it. And they also really want you to keep doing it because they're dependent on you to get it done. That's the dangerous one. You're getting lots of praise and lots of accolades and lots of compensation to do it, but you actually don't like it.

Probably a lot of lawyers and bankers exist in that bucket, right? Absolutely. And there's a lot of actions that CEOs are taking that are in that zone because they think that it's their responsibility to do these things when in reality, it's the responsibility of the CEO to make sure these things get done.

Not that the CEO has to do them herself. And then the fourth zone is your zone of genius. These are the things that you do that you are uniquely good at in the world [00:43:00] and you love them so much you don't even notice that you're doing them. You may not even be aware of what these things are, because time and space just disappear, so you don't even value it.

So those things, you actually have to ask other people: What do I do that you think adds unique value, and that I seem to love doing? And they'll list all kinds of things for you. And you'll be amazed, like, really? Like when I sit and have long chats with, with you and with others, like, that's adding value?

Like I'm just doing that for fun. That's the kind of, that's zone of genius. And if you then start to only do the things that are in your zone of genius, and so you spend more and more time on those, and take all the things that are in your zone of excellence, and start giving those to other people. Or you say, listen, the Monday meeting, I've got to be in the Monday meeting. I'm the CEO, like, I can't not be in it.

I'm the seat, like, I can't not be in it. Okay, well then what would make that super fun for you? Well, it'd be super fun and really valuable for me if everybody came with pre written updates that we read [00:44:00] ahead of the meeting and commented on. So in the meeting itself, all we had to do was like the last five minutes of any issue.

Make that happen. The person does, and all of a sudden the Monday meeting becomes fun. Uh, or, it'd be really great if everyone, you know, spent one minute on the fun things they were doing at home so we can feel like we're buddies. Great, make that happen. And all of a sudden the meeting becomes fun. So that's the zone of genius idea: you either do the things that energize you, or you eliminate all the things that are de-energizing. Because the fact is:

You're not good at them if you're doing them, or the meeting itself is not fun for anybody if it's not fun for you. And if it's not fun, it's not productive. It sounds like fear is one of the commonalities across all CEOs, maybe not all, but at least the vast majority, and figuring out how to manage that is a big input into what you're able to help people with.

Does fear, like, you've won this bet every time. Does fear [00:45:00] always lend itself to making the wrong decision the vast majority of the time? Or are you picking the situations where you know fear is leading them to make the wrong decision? No, fear is, is almost always leading people to make the wrong decision.

And it's, it's not that fear is completely irrational. There is some thing going on here, which is, hey, Red flag, warning, there's something new here, something unknown, something potentially a little dicey, a little spicy. Okay, pay attention. That's valid. But then the amygdala goes far further and it starts making predictions that if you tell this person that they're not performing, then they will be outraged and rage quit.

And you can't afford to lose this person because there's no one else who can do this job. And, and then you'll be... And so, what they do is, they don't give feedback. They don't tell the person, hey, this isn't working for me. They just hold back. And then, of course, the person continues to [00:46:00] do it. And you can see that that's insanity.

The only way that the person can improve is if you let them know, hey, what you're doing over here needs to change. Now, of course, there's a way to share that with them in a positive manner. Hey, I want you to succeed. I want the company to succeed. In order to do that, you need to do X, Y, and Z. That's positive.

Instead of, ah, you're an idiot. Like, of course they'll react badly if you say that. But most people just won't say anything at all. Or, this person's not performing: letting them go. That's the big issue. The big issue that almost everybody has is a difficult conversation around someone's performance, and either telling them that they need to change or actually letting them go.

And usually it's some team member for some reason. But once they do it once, and they let that person go, and then afterwards everything's fine, then they're like, oh my God. Now they can see all the places where they're [00:47:00] holding back from doing the thing that they know is the right thing to do, but they're afraid of the implementation.

The decision is easy. It's the implementation that's hard. When you're calling someone into your office for this, uh, and I've, I've, I've heard this before, uh, from, from other people, uh, in sales in particular, where you end up managing people out, uh, quite a bit, but is telling them that you're going to have a hard conversation, and I've heard you say that as well, like, this is going to be a difficult conversation, and then you just cut right to the chase, why is leading with, with that an empathetic thing that they're now braced for, uh, what's, what's going to come after that?

Yeah. Um, it's just, it's a weird thing. It's all about surprise. So when something happens that startles me, my amygdala reacts so quickly. It's like a visor that shuts over my face, so that I don't even notice that it happened. And I'm already [00:48:00] in that emotion. Let's say it's fear or anger. It happens instantaneously.

And now, as far as I'm concerned, I'm just having thoughts. Whereas if someone says, hey, you're about to feel fear, you're about to feel anger, get ready. Like, and they give you three seconds and you go, oh, okay. Now I'm sort of seeing that this thing might be coming. And then when it comes, you go, oh, now I sort of sense the emotion.

Okay, I'm not gonna, I'm not gonna buy into it. I can, I can see the fear coming. It makes it much easier to handle and much less likely that they'll actually go into full anger or full fear. And they'll notice what amount of anger and fear they do go into. That's it. It's just about, do they get surprised or not surprised?

And three seconds is all they need to prepare. [00:49:00] One of the other things that you brought up earlier in this, when you were talking about listening to the executives or the people that stayed behind, was hearing them out on their feelings. It sounds like one of the tactics that you really use in the methodology, and some of the empathy that you're able to communicate, is also repeating back to people what they said to you.

And why is that? It seems like it's this human nature thing that you really utilize in your methodology of, Hey, I'm listening to you. I understand. Did I get this right? Why is that so effective? I mean, it's, it's shocking. First of all, it's shocking how effective it is. It's shocking that someone will say, I don't know if I want to say that to somebody that feels like it's going to be really patronizing and I go, okay.

Before you make a decision, let me do it to you, see how it feels to you. I think what I heard you say is that feels really [00:50:00] awkward and it's going to be really, the person's going to think it's really patronizing. Is that right? And they go, yeah. And then they say, Ooh, that actually felt really good. It makes me feel like you really get me.

Like, there you go. That's what it is. People want to be understood, and we have so much experience of going through life where people don't understand us. They'll say things like, yeah, I gotcha. I understand. I understand what you're saying. But then they go do something completely different. So we all have the experience of not being understood.

So when someone proves to us that they actually understand our thoughts and even more our emotion, it's such a welcome relief. That's why it works. And frankly, it works everywhere. It works with customers. It works with investors. It works with recruits. It works with teammates. It works with spouses. It works with children.

It works. It works everywhere. What I'm [00:51:00] hearing you say is this actually works everywhere. You got it.

(51:06) Elad Gil (Solo Capitalist) on how to build a legendary career in tech

You know, there's certain companies where all the best people go to in certain moments of time. And then, um, Silicon Valley really runs in little cliques or networks. And if you fall into one of those networks, um, you can basically work with those same people for the next 30 years in different formats.

Right? And you look at, for example, all the COOs across Silicon Valley for a period of time all came out of Google. It was like Sheryl at, um, Facebook and Dennis Woodside at Dropbox and, you know, um, Lexi at Gusto and Claire Hughes Johnson at Stripe. And I mean, literally most, many of the main COOs all came out of one company.

And you see that over and over and over again, where, um, different generations of either founders or investors or operators all come from the same small subsets of networks. And then they help each other throughout their careers because they were kind of battle tested together. Um, and so joining Google was really formative from that perspective.

And one of the pieces of career advice that I give people is, um, [00:52:00] especially earlier in your career, don't worry about the role. Don't worry about the compensation. Just go to the right company, and that will solidify your career for the next decade. And that's way more important than getting, you know, a better role with better comp at a company that isn't going to matter as much.

I have a theory that the center of Silicon Valley now is some combination of the Collisons, the Altmans, uh, yourself, like there's some center orbit of, uh, of people in Silicon Valley that have this tightly integrated network of, you know, investing together and deals and all that stuff. So, um, no, it makes a lot of sense.

Yeah. There's definitely generational cohorts. And I think, um, I actually, just for my own sake, um, made like a slide recently for myself, um, or like a spreadsheet, which is basically, what do I think are the generational circles over the last few cycles, and how are those shifting? And you could think of it by company.

You could think of it by, um, leaders that people look up to or founders people look up to. Uh, you can view it by investors, you can view it by founder networks. And so every [00:53:00] six, you know, five to seven years, the network flips and certain people remain relevant and other people flip over. And the question is why do certain people have so much longevity.

And it's very few people, if you actually think about it. Some platforms have longevity. But very few people are actually relevant over multiple cycles. And the question is, what's the common characteristic of those people? And was there any takeaway of what was the common characteristic? It's one of two things.

They either have a platform company that continues to evolve in interesting ways, which most companies in technology don't; they tend to stale out. Or it's people who reinvent themselves. Right. It's sort of like Sam doing Loopt and then running YC and then doing OpenAI, and so he's reinvented himself over multiple cycles.

Naval has done that, right? Early super angel, early into crypto, has done a bunch of AI stuff more recently. Like he's, he's constantly reinventing himself. And so I think the people who focus on that reinvention, often I think those people are technology driven, right? They care about the tech and therefore they follow what all the interesting technologists are focused on.[00:54:00]

And that tends to be what drives waves. And that's very different from the person who's just interested in it because they want to make money or because of some other factor. Right. And so it tends to be people, I think, who are technology minded, and they tend to survive these multiple cycles because they follow the technology innovation and the shifts in where the, like, really interesting builders are working.

(54:17) Eliezer Yudkowsky (AI Safety Expert) on the risks of AI doom

The people, you know, at the, at the heads of the operations building this stuff do not appear to be taking it anything remotely like what I would call seriously.

Some of them are, are on record going like, well, you know, the earth might get destroyed, but first there'll be some great tech companies. Or, you know, just like, ha ha ha, la la la. Um, that's not what you want when you're trying to do an unprecedented feat of science and engineering and have it work correctly on the first try, or the entire human species dies.

So, yeah. Like, it's not actually all that complicated. You got a bunch of people who are, like, in the short term, like, getting excited looks at parties, which is why they do everything they do. [00:55:00] And they can get that by, like, building scarier and scarier AI. And some actual uses. Some very important uses. I don't want to minimize that.

You know, some of the technologies coming out of this would be an enormous boon. But if you were taking this seriously, you, you know, put the whole thing on international lockdown, and, and, like, have the good uses, the most important good uses like the medical stuff, AlphaFold, the successor versions of AlphaFold. Do that without building, without training the general systems much more powerful than GPT 4.

Try to, try to get the benefits of that, get, get benefits from the systems that are only as smart as GPT 4, which is a lot of benefit. And then, like, just shut down all the giant training runs. They don't know what they're doing. They're not taking it seriously. There's an enormous gap between where they are now and taking it seriously.

And if they were taking it seriously, they'd be like, we don't know what we're doing. We have to stop. That is what it looks like to take this seriously. You published an article in Time calling for a [00:56:00] pause on training AI models. To clarify, you want... Well, not a pause. A permanent moratorium. A permanent moratorium.

But you want people to be able to keep using GPT 4 and all AI models and capabilities that exist today, but you don't want GPT 5.

I mean, if it were up to me, I might possibly, if it were entirely up to me, I might possibly go down to GPT 3.5, but, you know, 4 seems like an okay place to stop. It is probably not going to destroy the world, I hope. You know, compromise: still use GPT 4 instead of going down to GPT 3.5. Why, why do you think, uh, why is that where you draw the line?

Because it looks like the current system should not be able to destroy the world even if people hook it up in particular clever ways And I don't know what GPT 5 does. And neither will the creators at first, because whenever anything at this level of arcaneness gets released, there's a period as people figure out how to hook it up in new clever ways and get more utility out of it than the creators realized was in it at first.[00:57:00]

From a practical standpoint, I guess, did you write that as, as a, um, sort of an expression of sentiment and characterization of the way that you felt? No, I don't do, I don't, I don't do the emotional expression thing. My words are meant to be interpreted semantically. They're supposed to mean things. I guess at a very literal level then, how would that actually, uh, let's say China says no, right?

And we do it, the U.S. does it. Do we go to war with China over them saying no? China has published AI regulations, I don't know how seriously they take it, but they have published AI regulations more stringent than the United States ones. So the first thing I would say is that it is not at all obvious to me that China does not go on board with this.

I am not super happy with the current chip controls that prevent China from getting real GPUs. Although NVIDIA is apparently allowed to export GPUs to them that are only like one third as [00:58:00] powerful as their real GPUs. Which, it's not clear to me that there's a whole lot of point in that. I'm not quite sure what anybody's thinking there, unless it's just like, Haha, slap China in the face or something.

But, anyways, um, yeah, like I'm not, I'm not super... the, the problem is not China getting the GPUs. The problem is anybody getting the GPUs. And if we are in the world where the UK is like, we need an international coalition to track all the GPUs, put them only into internationally monitored data centers, and not permit giant training runs.

If the, if the UK goes to China on that, and the UK and China bring in the United States, I might worry a bit about Russia. Russia, I think, would have a harder time getting the GPUs and putting them into data centers than China would. But if Russia manages to do that anyways, then the thing I would say [00:59:00] there, you know, the posture that I would hope for international diplomacy to take is like: please be aware, Russia, that if you do this, we will launch a conventional strike on your data center.

If we cannot convince you to shut it down, if it is up and running and we do not know what is running on there, or we know that dangerous stuff is running on there. Like, we are not doing this in the expectation that you will back down. We are not doing this in the expectation that you will not go to war.

We are not being macho and being like, this is us threatening you because we expect you to back down. We will launch a conventional strike on the data center in terror of our own lives and the lives of our children, exceeding the terror that we have even of a nuclear retaliation by you. This is not a macho thing.

This is us being genuinely scared. Please work with us on not wiping out the human race here. And if they're like, well, no, we're tough, then you launch a conventional strike on the data center, and, you know, what comes, comes. And the thing in international diplomacy is, if this is what you're going to do, be very clear in advance that that is how you will [01:00:00] behave.

Do not let anyone get a mistaken impression about what you will back down from.

(1:00:05) Zach Weinberg (CEO, Curie.Bio) on the best way to learn anything

You know, my thesis on any industry is like there's a set of information that's available that you can like read, and then there's the inside information. And the inside information is typically not written down because, A, it's complicated and annoying to write it down, so most people don't do it.

And then, it's only available to, like, the people who really understand it. And, like, why are they going to write it down? Like, it's an incentive thing, right? Like, if you, if you know something special, why would you share it with everybody else? And so, the trick is to understand the physics, I think of it as, like, of an industry, is not to try and read everything, but to just go find the people who know it the best, and then convince them to teach it to you.

Because you're gonna learn faster from, I think, think of it as like information, right? Like, I can go and read all these things, and try and piece it together myself. Or, I can go find the five people who have already read all [01:01:00] of the things, plus they have their own knowledge, and have synthesized it, and I can just ask them to teach it.

And neither of us are big fans of reading large amounts of, of stuff, right? Much easier to learn talking to people and, uh, in that way, right? Like, I'll read, but the only thing I get from reading this stuff typically is like, Yeah. I don't know. I can't. It like prepares you to go ask questions, but then the actual learning sort of comes from the questions and answers.

Yeah. So we just ran a process. We did it at Flatiron and then we just did it again, which is like, all right, let's just go learn everything there is about this. And the way you do it is you go find the people. Yeah. It's all about the experts. And you know, the hit, what you were mentioning is like the hit rate on those experts is really low because most people aren't experts.

They think they are, but they're not. Uh, or they think they understand something, but actually what they understand is something someone else told them. And you gotta go find that person. Uh, so yeah, we just meet these people, person by person. How do you figure out if someone's full of shit, like they have all these, you know, doctor and, whatever, big, big job [01:02:00] at, uh, HSS, but they actually don't know the incentive structure of the health system?

Yeah, so we did this intuitively at Flatiron, meaning I, I didn't realize I was doing this, and so as a result it was like less effective. And then at Curie, we do this explicitly. Those are maybe not the right words, but whatever. One we planned, the other one we didn't plan. Uh, the non planned one with, with Flatiron was, we just asked people, like, how does it work?

And when you got an unsatisfying answer of, like, how something worked, and you said, well, how does that part work? And then you realize, like, okay, this person actually doesn't understand, because you just kind of keep asking, like, well, how does that work, and how does that work? And the beauty for us, at least, I think, and I'll talk about the better way, but at Flatiron, was we actually didn't know how it worked, and we weren't doctors.

So, it was really easy for us to show up in a room and be like, I don't, I don't know anything about this, and not give up our credibility. Because we could basically be like, hey, remember we sold this company to Google, so like, we're not morons? [01:03:00] And so people would engage and then we were just very comfortable saying, I don't know.

Yeah. And that to me, that was the trick. I was like, I don't know. I'll just show up in a room and say, I don't know how this works. And you explain it to me. And then when someone says this, I'd be like, I don't really know what that means either. What I learned from one of my Curie co founders is there's a better way to do this or a little more efficient, uh, which is you ask people how they know.

And it's just, like, the dumbest, simplest, I can't believe I spent 10 years of my life not asking this question. And then, you know, after we had sold the company for $2 billion, I still didn't know how to do this. That's how I knew there's always somebody better than you. Uh, and Tom taught it to me. And he's basically like, yeah, if you talk to somebody and they say, well, this thing is true.

Just ask them how they know it. You know, like, oh yeah, that's like the best. And then most people will say, well, I heard it or I read it. And then you say, okay, where did you read it? And then they, like, they're, they're lying, and, or they don't really know. Uh, but the ones who do know, they're like, Oh, I'll show you exactly how I know it, and I'll teach it to [01:04:00] you.

But you just have to be very comfortable asking that question, because it's a very awkward question to ask somebody, because you're kind of insinuating that you don't believe them. And so, you just have to get really good at asking the question in a non antagonistic way, which is more of an EQ skill. It took me a while to figure that out, uh, because I know, like, if I'm with somebody, I can definitely come off as, like, aggressive, uh, which I'm aware of.

So I try to like mute it with a joke or something. We just worked on these skills. It's like, all right, how do I get someone to explain how they know something? And then the other thing Tom taught me was, you, if you ask some, like, if I said to you, you know, what's the average contract value of, of Ramp or whatever, and you're like, well, it varies.

Most people will just stop at that. Oh, I don't know, that it varies. And actually you can, you can kind of guide people to give more specific answers. So I'd say, okay, well, is it like closer to a dollar or $100,000? And they say, oh, okay, it's like way closer to a dollar. And you [01:05:00] go, okay, well, is it like closer to $5,000, or what?

And you can kind of guide people to a more specific answer if you give them choices. Parameters within. It's a funny thing. If someone doesn't want to tell you something, right, and you throw something out that you know is wrong, but you're not exactly sure, you know. Zach, how, uh, whatever, how big do you think Curie can be?

And, and you're like, Oh, I don't know. It can be big. And I'm like, right. But like 10 million probably. Right. And you're like, no, no, no, way bigger, like at least five. And you didn't want to tell me originally, but when I framed it so low, it forced you to correct big, right? No one wants to let you sit out there with false information as well.

It's like an interesting thing to get information from. And this, this, this little tactic, not the only one, but one we used, like, at Curie when we were first doing homework on it, which was, you know, this core thesis that, like, yes, drug discovery is expensive, but the cost to do it was coming down. And you would talk to people and they'd say, oh, well, drug discovery is expensive.

You know, it's really expensive, it's really expensive, it's really [01:06:00] expensive. How expensive? And they're like, I don't know, it's like, you know, millions and millions of dollars. Is it, is it five or thirty, right? And you were just kind of like, slowly, and then say, well, this part's really expensive. Okay, how expensive is that part?

And then you figure out, ah, the cost is over here, the cost is over here. You're kind of building this mental model of what the actual true facts are, rather than, like, people's perceptions of those facts. The other thing, there's this, like, subtle, I think of it as a momentum, uh, thing, which is, as you, this is the third thing I learned from Tom.

So he's the one that taught me to ask, like, how do you know? And then he's the one that taught me to do the sides. By the way, this is all post-Flatiron, to be very clear. Like, this is all things I learned after we had sold it. Uh, the third one is that as you do this little fact finding mission, now you become an interesting source of facts, and there's this like, and so now when you go to the next person.

You have something compelling to share with them that gets them engaged and gets them interested because you know something they don't know and you see this momentum build where now when you're [01:07:00] talking to an expert, you actually have something to share, which gets them to engage, which gets them to share more.

And that's how you get to the cycle where, you know, everybody, because by the 15th, or maybe it's the 50th or whatever, conversation, you've learned like little nuggets all along the way. So, like, I can tell you really interesting things about, you know, the cost of drug discovery, or where the failure modes are, or, like, how the economics of these venture funds work, because I learned them as I was doing it.

I could use that in the next conversation and share it. That person's like, okay, maybe this person isn't an idiot, like, I just learned something. And then they're more like, they are more likely to share. You just have to do the work.

(1:07:38) Brian Halligan (Co-Founder, HubSpot) on surviving a terrible snowmobiling accident

You got in a snowmobiling accident. That happened two winters ago? I had a really, really bad snowmobile accident and didn't, didn't think I was going to live, I didn't think I was going to live through it.

You were up in Vermont with your son. Up in Vermont with my son. We went kind of off the trail, went down about a mile and [01:08:00] slammed into a tree. He confused the, the buttons or something. I mean, not that there's culpability in this, but. Let's not get into that. But anyway, we went off the edge of a small cliff.

Small cliff. Not, not, you know. I think a cliff is a cliff, certainly when you're flying off of it in a projectile. And I don't remember it. We both passed out and we're kind of lying in the snow, passed out. And I'm going to say something that was never said on your show in a minute. And we woke up and we're both.

You know, I had a lot of broken bones, as did he. Were you wearing a helmet? Oh yeah. Helmet cracked. Yeah. Uh, both smacked our heads. In a lot of pain. And it's 4:30 in the afternoon in Vermont, middle of February, freezing cold day. And I never bring my phone snowmobiling because there's just no signal in the whole darn state, never mind on the snowmobile trails.

And so it was [01:09:00] about 45 minutes of sitting there in the cold. And I was like, well, maybe I do have my phone, and I reached in and kinda semi pulled my phone out, and I had just enough signal. No one knew where we were. I had just enough signal to dial 911. I'd never called 911 before, by the way. That's the killer app.

That thing works. And the lady called the two local fire stations. They're both volunteer fire stations. People called the volunteers at their homes. They took their snowmobiles. Took them about an hour to find us. Pitch black by the time they found us. Are you guys talking? Yeah, but we, I was in and out of consciousness, just in a lot of pain.

Because you were concussed. In a lot of pain. Uh. And, and he broke his femur? He broke his femur, his kneecap, and hit his head pretty bad. And he was 17? Yes, 16 or 17, I guess he was 17 at the time. And you, what were your injuries? Concussion, obviously. I broke 13 different bones, um, I ended up, I got [01:10:00] metal, I have, I have 3 plates and 30 something screws in me.

I'm a walking metal factory. Do you get, when you go through the airport, are they, uh? No, they're non-ferrous. Oh, wow. Slip right through them. Um. And then they sort of drag you, like if you get in a ski accident, and like they wrap you in a toboggan, drag you up, and then helicoptered us to Dartmouth University Hospital.

And I love Dartmouth University Hospital. They saved our lives. It's a trauma center. I had five surgeries. I was in there for a long, long time. Um. So you're sitting there, and you're drifting in and out of consciousness. Yeah. You think you're not going to make it. I was pretty sure we weren't. I was like, we're gonna freeze to death.

I didn't think like a coyote was gonna eat us. I thought we would just die of exposure. And then I'd remembered I had a phone, and, and the thing that I'm gonna say that no one, I'm sure no one's ever said on your, uh, podcast, is AT&T saved my life, 'cause of the cell service. I had cell service and I didn't, it's, it's rare to have cell service.

I mean, I just had enough cell service, it saved my life. Now [01:11:00] that would be different, because, you know, the phones and the iWatches have that collision signaling that will happen, but back then it didn't have that. So that was a dark moment. I was like, I don't know how we're gonna get through the night.

We were too badly injured to move. But yeah, very, very thankful I made it through. And the volunteer firefighter that came and got you, he recognized you, or he's the guy that like clears your driveway or something? It's a small world, but the first person who found us, so they sort of spread out to find us, he's, he's asking me questions and he's like, Brian, is this Brian, are you Brian Halligan?

I'm like, yeah. I'm Joe. I'm the, I'm your driveway guy, I'm your driveway, I'm like, Joe, thank you. So, better that than like, uh, I went to Inbound 2014, right? Totally, yeah. Brian Halligan, uh, that's, wow. So, so, so you're, you're sitting there, what's going through your head? What's going through my head is, is, life's short, it could be very short here, [01:12:00] and then if I get out of this alive, I don't want to just keep on the path I was going.

I want to zig off that path and, and live a, a different life and make some big decisions in my life. Uh, that was going through my head. And so then that led to, how long were you, you had five surgeries? Yeah, I was in the hospital for a while, rehab for a while, I was in a wheelchair for a long, long time, and I'll tell you, we did something, it turns out, kind of smart at HubSpot. We had a board meeting two weeks before the accident, and we had the conversation of what happens if Brian gets run over by a bus, or, you know, that expression, run over by a bus. And we said, well, Yamini will take over.

We all agreed, it wasn't a debate by any means. Is that just coincidental that it happened two weeks before you had this? Like, obviously coincidentally. Yeah, it wasn't like a plan, but it's not like something you do every year, or like a refresh on, hey, this contingency of... Maybe we do, I don't remember. Um, but we definitely [01:13:00] had just finished it.

Uh, and so, I wasn't able to talk to anyone or anything like that. It was COVID, people couldn't visit me. And so, it's kind of break glass, so Yamini took over. Um, and I was pretty much out, outta commission for six months, and I had a pretty bad concussion. I couldn't work. And so she, she ran and did a fabulous job while I was out.

How long had she been at HubSpot? She had been at HubSpot, she started like three weeks before COVID. So she hadn't been there that long. No, but she was doing great. She helped us get through COVID, the deep, deep down and the up, up, up of it. When she took over as CEO, things were smooth. She did a great job.

Um, and so when I came out of it, uh, one of my goals, I didn't want to go back to being public company CEO. I'd been doing it for a long time. It's rewarding to a certain extent, but I wanted to do something new. Approaching 10 years in the public markets, right? Something like eight, I think. Um, [01:14:00] I was just ready for a new challenge, and that was kind of clicking around the back of my head, but I wasn't planning on doing anything about it.

But I'm like, this is, this is the time, I'm going to do it now. And so as I'm getting ready to come out of my, what do we call it, my medical leave, I called my co-founder Dharmesh and I told him I'm not coming back as CEO. As chairman, yes, but not coming back as CEO. What do you think about Yamini as CEO? He's like, he was super cool.

He's like, sure. Um, and then I remember a bunch of conversations, but I had a conversation with Yamini about it and she did not see it coming and didn't want it. It was actually a struggle to convince her to take the CEO gig. She's like, I know you've been through a lot, but let's not rush this. You know, I'll be a COO.

We'll work together side by side and we'll build up to this. I said, you basically have two choices: you can be the CEO, or we're going to go find some schmuck from the outside to be the CEO. I think you should do it. You're ready. And so fortunately she did. And she's done a really nice job.

(1:14:59) Keith Rabois (Partner, Founders Fund) lists key traits in founders and younger investors

How do you think about [01:15:00] fitting the founder to the story then in that case?

Like, how do you know what, what, what they need? So there's two different comments. One is, I think I'm looking for an extraordinary trait. The reason why is, the chance, if you think about what's the likelihood that someone starts a company and reinvents the world or an industry in the proverbial garage with like their college roommate, rounds to basically zero.

So unless you're, you have a trait that is so extraordinary that the probability shifts off of rounding to zero, you should not invest. So that's why I need a top 1 percent, top 10 basis points, some spike, or I'm not going to invest. Then you want to ask a question. The second question is, does what the person spikes on relate to the skills that are required to build this particular company?

Some of these traits are transferable, like if you can recruit better than anybody else in the planet, like if you can assess people, close people, that will apply to any company, whether you're building SpaceX or Facebook. Some skills don't translate, like, for example, if you're the savviest technologist in the history of the world.

There are some [01:16:00] companies where you could, like, let's say you were doing a database company. A new novel architecture for database? Sure, that's a perfect company to fund for that kind of founder. They want to do a photo sharing app? I'm not sure they can really leverage that trait. So you do ask that, but that's usually the second question, not the first.

And when is domain expertise a good thing versus a bad thing? So in my view, it's never a good thing. I don't fund people with domain expertise. I do like them to be able to answer the Balaji and the Chris Dixon blog post that summarizes it, uh, intellectual maze question, which is Balaji, when he was teaching startup engineering at Stanford, had this great paragraph that explained how the most amazing founders can walk you through the roadmap from where they are to super success and know how to avoid the pitfalls, trap doors and navigate.

And when you hear that clarity of a roadmap, it's extremely rare. It happens once a year or so. That's a reason to invest, maybe independent of the traits. But the traits that are exceptional plus the intellectual roadmap is like a whole lot. It's like instant investment. Here's your money. Do not pass go. You don't have to meet my colleagues. Here's your money. Please, please, please do not

Here's your money. Please. Please. Please do not [01:17:00] take any more meetings That that rarely happens that can be based upon some experience. So for example, um, one of the founders we work with in Miami Uh, before he left Uber to start his company, had been a warehouse supervisor, uh, before he went to Uber. And he started a labor marketplace that connects workers to light industrial warehouses.

So the fact that he'd started his career out of college, yeah, as a warehouse supervisor was insightful, but it's the other traits about this founder that make him extraordinary, a top 1 percent founder on the planet. And, and do, is there, if, if not in the founders at some, some of these companies, you need domain expertise brought in so that you don't fall in?

So you can borrow it. This is the trick. You can always call up people with domain expertise and ask them questions. They're actually pretty happy to talk to you usually. And then you just say, why can't this work? So the question I always ask when I do diligence, like when, let's say, I'll occasionally invest in some pretty deeply technical things like autonomous driving or genomic sequencing.[01:18:00]

So when I call up experts, what I'll ask them is tell me why this can't work. I don't want to know whether it can work. I want to know metaphysically point to something that will make this impossible to solve. And if they can't articulate a specific blocker, then I'm pretty comfortable backing a world class founder to try to solve it.

(1:18:18) Emmett Shear (Co-Founder, Twitch) on his top advice for founders

You said something interesting once. The only reason you ask about TAM is to understand the ambitions of the founder. Uh, can you tell me why, why you think that, or why, why you approach investing in that way or why you approach thinking about TAM in that way? What, what was the TAM of, uh, live reality television?

Uh, whatever Justin was willing to pay you at the time. Yeah, yeah, yeah, like nothing basically. Um, and yet that's pretty big, live streaming is just a pretty, pretty big business. Um, I ask people about TAM a lot and I, I'll push them on it, but it's like, it's because, What you're trying to figure out is, do they have a story that might have a lot of holes in it and like, or like questionable assumptions [01:19:00] where like they're trying to build something really great and big, or do they have a story that's like, Oh yeah, yeah, I will know that.

Don't worry about the TAM. We're just going to flip this to Google, uh, in two years once we hire the talent. That's a really bad sign. Companies don't get built by people who, who are looking for the exit. You want people who are looking to build something big, something great. Um, because to even build something medium sized, you have to be trying to build something really great.

And if you're, if you don't, if you're not, if you're aiming for the aqua hire, you're probably not even going to get that. Um, and, uh, and so it's a very important question, but the point is not to like, that you can actually analyze the TAM per se. And I think that's a little different. For more established founders, especially, it's a little different in places where there's more, you need more commitment up front, more capital up front to get going because the other thing I look for is there's founders who, [01:20:00] there's this trap you can wind up in where you, you start, is your, is the, if you're building for a customer that doesn't exist or like that hates buying software or that like doesn't want to, doesn't want any, anything to do with this whole thing, that's bad.

So like, yeah. I don't care if your thing credibly, you know, is in, is every developer on earth is actually going to use your product. But building software for developers is a good plan. There are a lot of developers and they spend a lot of money on software. And so even if you're wrong about your exact thing, it's going to be okay.

Like, the software developer market is big. If you're building software for a market that's, like, really just public school librarians, that's the only people, and you're like, and I'm pushing you on, like, no, but isn't this bigger than that? And you're like, no, no, it's for public school librarians. I'm really worried for you, because even if you, like, crush it, that's not actually, they don't, nothing wrong with librarians.

They don't even buy, they don't like buying software in the first place and there aren't that many of them. So like, [01:21:00] that's maybe not a great idea. And so like, and founders can get a bit trapped in these ideas where they're building okay businesses targeted at very small groups of people who don't want to buy their product.

Um, and so it's more about, just in theory, if you executed like crazy and figured out six things that we didn't even think of in this room, on a pivoted idea that's, like, halfway, only halfway connected to your idea, would that be a big business? Yeah. No? Okay, maybe, maybe we should change ideas. Yeah.

Uh, there's two options for companies. You can either be, generally, you can either be early or late, uh, and I, I get the feeling being early is probably better. How do you think about timing a, timing a market and making sure that you're there to ride that wave and the differences between being too early and too late?

Um, you're, every startup that wins pretty much, not everyone, but like 98%, uh, is too early. [01:22:00] Because if you're not too early, you're usually too late. If you think about it, you're trying to like, there is some optimal day, like literal, it's probably a day, where like, if you start the company on this day, everything will be available as you need it.

There will be sufficient bandwidth to do your idea, there will be sufficient this to, you know, the monetization techniques will work, the distribution will be in place. But if you start it, On that day, someone else, when everything actually becomes available, everyone else, someone else will have started it a year earlier, stupidly.

Just by random chance, because every idea is getting started over and over again. And they will be in motion, and they will have infrastructure, and they will have a product built. And suddenly their product will be working, and you will be too late. And so what you're trying to do is, like, So if you think about it for Justin.

tv, we were too early. Bandwidth was too expensive to make our business work. Uh, and almost honestly, most people couldn't even watch, didn't have a good advantage to even [01:23:00] watch reasonably quality live video and the video ad market didn't exist. We didn't have a business. It was impossible, but there were other startups that tried to compete with us, several that started later.

And they just got destroyed, because we'd built global live video infrastructure for the past four years. And they hadn't. And so, you know, from when the starting gun went off on, like, oh, no, no, the business works now, there's just, how do you catch up with someone with a four year head start, still led by the founder CEO, still, like, grinding super hard, trying to make it work?

Like, you're just screwed. Um, and so a big part of it is, if you, if you have faith that your thing is going to work and the pieces are going to come into place, the question for startups is, how do you survive? That's why you read so much about, like, why YC's advice is, like, be a cockroach. Like, don't, don't, don't be a, uh, uh, graceful swallow, be a cockroach, because you're usually too early and you just have to survive and survive until, it's a combination of, like, you find the [01:24:00] secret thing.

But often it's actually like you didn't change anything. The market caught up with the future that you saw that you can't ever time exactly right. And suddenly your thing that wasn't good is good. And boom. Like, Airbnb is a good example of that, actually, I think. Like, they, they had a product, and yes, they did iterate, and they did figure stuff out.

But, like, a big part of it was, like, people got comfortable with the idea of, like, host, listing stuff on the internet and renting through, uh, renting other people's stuff. Uh, and, and a business that didn't work and wasn't good became a business that did work and became, and was good. And I think that's a, it's a very common pattern.

Talking about cockroaches, and you guys were certainly that with Justin.tv. Um, frugality, uh, being something that's important, but maybe not a virtue, uh, that, that being able to use frugality. I, I've written down here, I assume you said it because it's in quotes: frugality isn't a virtue in itself, speed is. How [01:25:00] do you think about the balance of speed and frugality and the ability to execute? The other thing I think you said is the blast radius of startups is low, which is to your advantage as a startup.

Yeah. I mean, frugality. Amazon, one of Amazon's core, uh, like, like Amazon, Amazon's leadership principles, the ALP. Um, one of them is frugality. Um, it's about how leaders should be frugal. And inside of Amazon, there's a saying about, like, yeah, yeah, frugality, but you want to avoid frupidity, um, because frugality and frupidity are near and far aligned, um, because you need to be willing to spend money.

And to invest in something that's important, to do something, but also, but wasting money is very bad. And the difference between, one man's waste is another man's investment, and vice versa. The real thing that kills startups in terms of spending money is usually not overspending on, like, bandwidth or, like, server [01:26:00] capacity or, like, even marketing.

Obviously, don't run marketing that doesn't work, it's inefficient, but, like, what, what kills startups is hiring. You hire too many people. And that does kill you, because you have this high burn rate and eventually you run out of money and the burn rate kills you. But it also kills you because more people means more slow.

Um, and speed is the essence. And like, hiring is not in alignment with speed. Um, speed is in alignment with speed. There is a point where you do need to hire more people to go faster. But like, it is later than people think. It's less hiring than people think. Um, and so I think a lot of the advice to startups about frugality is really better framed as advice about you need to be going fast.

And that means don't grow your team too much. That creates a lot of drag.

(1:26:47) Liz Zalman and Jerry Neumann (Co-Authors of Founder VS Investor) debate founder versus investor perspectives

Let me tell you two experiences. So in both of my companies, so two separate boards, one in ad tech and one in infrastructure, not a single investor that ever sat on my board agreed to log into my product with me and [01:27:00] see what it did.

Why should I, as a founder, take anything that they're saying seriously when they don't care, they don't seem to care about the product or the company enough to even look at it? So that's just one experience. Second experience, so I, I advise for a seed stage VC fund, and they're invested in this company, so a completely distinct set of investors than I've ever had.

This company raised at the froth of the market, also in the infrastructure space, and I'm observing these board meetings, and the investors have told the, the CEO exactly what they want to see. Young, young guy, uh, brought in as a co-founder later on. And he pulls this together, and he spends days pulling everything that they've asked for together.

And this meeting is four hours long. And I've heard the investors after the meeting complain about how long and tedious it is and how we're getting mired in the details and this and that. And so he's done exactly what they've asked. I can tell that the company is not doing well, and he's six to nine months away from getting fired, just listening to what's [01:28:00] happening.

And not a single one of them, some of them career VCs, some of them former operators who've turned into VCs, has gone to him and said, hey, I think this could go better, and told him how they're actually feeling. What do you do in those instances? It's hard. We talk about VCs as an asset class. I don't know what it was.

Jerry, when you were getting into it, it was probably, I don't know, 50 people or 75 people or 100 people maybe. In New York? Well, I mean, maybe 25 years ago, right? You've been doing this for 25 years? 25 years ago, you could get everybody who worked in the startup sector in New York City into one bar. Yeah, one bar.

I remember doing that. And you throw in Silicon Valley and it's 200 maybe? It's not thousands, and now it's certainly thousands. And so I guess I'm curious, we joke about the VCs that are like [01:29:00] this, and we hear these stories.

We talked a little bit about how the different constituents that you're solving for, founder friendliness versus LP success, those two things are very much at odds with one another. And what people thought founder friendliness was, was being hands off in 2021. Right. And now I think people are actually more and more amenable to having help in some way, shape, or form, at least having people give a shit.

And whatever that is, maybe not overstepping, maybe stepping in to the right degree. And I think it's hard because every two years there's a VC review site that comes out, right? And it's like, hey, rate my VCs, and you'll come through, and it's just so interesting, because [01:30:00] we could all be talking about the same VC and assessing the same actions, and people could be taking it in totally different ways based on what their own unique experience was, what their failings were in that board meeting, what their expectations were.

(1:30:17) Laela Sturdy (Managing Partner, CapitalG) on what all the best investments have in common

The best place to build a career as an operator, with really fun and gnarly and challenging growth opportunities, is when things are growing like crazy and there's not time to get the people that know what they're doing in the door, so they have to give smart, capable, eager people the chance to try it themselves.

There's the difference between joining a really established tech company, where there's hierarchy, there's order, there's people that know what they're doing, and a startup where there's people banging their heads against the wall a lot of the time trying to find product market fit. And then there's lightning in a bottle.

I'm like, go find that if you're a builder and figure out how to learn as fast as you can. I've been on the Duolingo board, I think, for [01:31:00] nine years now. So these to me are really long term investments. So that is first and foremost. But the other thing that I think has stood out in almost all the investments I've made is that there is something really compelling around the value to the customers or users that you can see in the data really early on.

And that most commonly shows up in things like the cohort NDR compounding over time. I mean, look at companies like Stripe or whatnot. They're totally different businesses, but what you can see is just engagement in their core customers. You saw the same in UiPath, and it shows up in really high engagement data and really high revenue growth data on the enterprise side.

And to me, that signal is just, this isn't just, oh, I bought a package of software and I kept it. This is, wow, this software, or this new habit or service on the consumer side, [01:32:00] has really changed my life. And there is going to be, from a business model perspective, a ton of future growth in existing customers.

To me, the companies that have outlier growth are the ones where there's just a huge embedded potential in their existing customers to grow. And those are the ones that grow at disproportionately fast rates, that usually are in end markets that compound for much longer than you expect, and that, from a business model perspective, get to profitability and hypergrowth in a much more efficient way.