Alex Komoroske: Systems Thinking, Builders vs Gardeners, and Working In Large Organizations

 
 

New friend: Alex Komoroske. He's building a secret new startup after spending 13 years at Google and 3 years at Stripe. I met Alex through our friend Sam Arbesman at Lux Capital, who joined us in episode 63.

Alex refers to himself as a “Gardener of Systems.” We talk about the differences between traditional business approaches and systems thinking. Alex also shares practical advice for those navigating large organizations.

Here’s what we explored in the episode:

  • Alex describes himself as a "gardener of systems."

  • Influence is better than control.

  • In gardening, success comes from nurturing potential rather than immediate, forceful action.

  • Alex wrote the "Magic of Acorns" essay to highlight the difference between builders and gardeners, with gardeners recognizing the potential for organic growth.

  • Some leaders exemplify the gardener mindset over the more common builder archetype.

  • Alex introduces the concept of "magic" in social contexts, where belief and indirect influence can create significant outcomes.

  • Steve Wozniak, Stewart Brand, and Dee Hock are examples of leaders who organically catalyze larger movements without seeking direct credit.

  • Being both inside and outside a system is important to influencing it effectively.

  • Alex shares his journey into systems thinking, which started with his interest in complex adaptive systems in high school.


Invest your money into high-growth startups at the earliest stages

When new technologies meet the market, the world changes for the better. That's why we invest our money into the best founders in our network.

We write small checks to 15-20 very different startups each year. Previous investments include Aalo, Gently.com, Omella, Driv.ly, Weavechain, Stell Engineering, Ouros, Solve Data, and more.

Our website has background on the fund, our past deals, and more.

Accredited Investors: reply to this email, I'll send you our deck, and we'll get you into our deals starting this quarter.


Learn more about Alex Komoroske:

Additional episodes if you enjoyed:

Episode Transcript:

Eric Jorgenson: Hello again and welcome back. It is a pleasure to introduce you to my smart friends. And today, our new friend is Alex Komoroske. After 13 years at Google and three at Stripe, he is off starting something new and mysterious. I met Alex through our friend Sam Arbesman at Lux Capital. I'm sure you remember him from episode 63. Alex refers to himself as a gardener of systems, and I've learned a ton from him about systems thinking, about anticipating consequences, and playing the long game. He's hilarious. He overflows with incredible metaphors that stick in your head and help you really retain these somewhat esoteric concepts that are super important to implement in thinking successfully through multiple order systems. This episode, we get into a few adjacent realities. We get into gardening versus building, or the way I like to think about it, winning the easy way. We think about what systems thinking looks like in practice. We get into concrete examples of how Alex built his incredible career and is thinking about the future of technologies with systems thinking. And the last 20 minutes or so, we talk about meta moves in the AI gold rush. I think Alex's take here is really fresh and really interesting and really cuts through a lot of the bullshit, frankly, that we hear around AI and LLMs these days. 

I don't do outside sponsors for this podcast, but I will invite you to consider investing alongside us in the badass potential future monopolies we invest in through Rolling Fun. We're really trying to find the mega cap companies of the 2040s and put some money into them in the very earliest possible stages. And we make it our mission, as I like to say, to provide capital to obsessive geniuses building utopian technologies. If you want to support us in supporting them, please open your browser right now and type in rolling.fun. Over the past two years, we've invested in 30 companies, including Aalo Atomics, who builds nuclear fission micro-reactors, Atom Limbs, who builds non-invasive, mind-controlled robotic prosthetics for humans, and recently a next-generation battery company that could change the whole face of a lot of pieces of our transportation and electrical and air infrastructure. I'm honored that over 50 listeners just like you now invest alongside me, and accredited investors can join through AngelList today. With a rolling fund structure, the sooner you invest, the more of these deals you get to participate in. If you want to put your money to work alongside ours, click on rolling.fun linked in the show notes below. To meet with me about it, please reach out through Twitter or email. Now I invite you to take a deep breath, relax, and enjoy Smart Friends, your favorite podcast, arriving at your ears in three, two, one. 

Eric Jorgenson: Dude, I'm extremely excited to have finally a recorded conversation with you. Because I learn something every time I interact with you. Twitter, your blog, it's amazing. Like our first conversation sent off fireworks in my brain. So, I'm very excited to get another crack at this. And I think I have a lot to learn from you. I admire a lot of the domains that you work in and how far you've taken them. So, I'm really excited about this. 

Alex Komoroske: Cool. I'm excited to be here. 

Eric Jorgenson: Your very high-level introduction of yourself I think on your Twitter bio or whatever is gardener of systems. And I thought maybe you could introduce yourself by breaking that down a little bit. 

Alex Komoroske: Yeah, sure. I find one of the core commonalities across a lot of the work I've done, in my academic career as an undergrad and then in industry, is that it comes down to systems thinking, gardening systems, swarms. I just gave a lecture a couple of weeks ago at Oxford on AI and philosophy, and the frame was swarms and under what conditions they show up and when they can create value and the tension with coherence. I think that to really wrestle with systems, you have to, Donella Meadows has this great quote of like, to engage with a system, you have to let go and just dance with the system. And I think that a lot of the approaches that we take in normal business are very instrumentalist, they're very first order, they require kind of just heroic execution. And a lot of the ways to create magical outcomes are understanding systems and dancing with them. And one of the reasons I like the word gardening is it underlines that you are not in control of this system. You are influencing it. You have a meaningful impact on it. But it is a thing that is alive without you. And so often I think that we use the builder mindset when we're approaching products or problems. And that's why I like- the funny thing, of course, is my husband will tell you, my husband is a very good gardener and our garden is beautiful because of him. I am a terrible gardener of physical systems. It's like, oh, gross, you get all dirty and sweaty, and it takes forever to see any results. So, I like the gardening metaphor more than I like gardening in actual practical reality.

Eric Jorgenson: What is it you like about the metaphor? Is it the patience? Is it the inducement rather than the action? 

Alex Komoroske: Yeah, it's like it's reacting to... I think people often have... I wrote a very short essay, because most of my stuff I write is very long and discursive, I wrote a very short one a few months ago called The Magic of Acorns, and the overall frame of it is a parable about the builder and the gardener. The two of them are both trying to build large, beautiful things, and the builder gets immediately to work and says, okay, here's some rocks I can pick up, and here's the heaviest one I can carry, and pulls it into place, and with sweat on his brow, he's getting stronger and more clever about how to use leverage to move rocks into place. And the gardener's just sitting there dilly-dallying, looking at stuff at his feet and picking up little pebbles. It's like, what are you doing, man? And the builder keeps on making more and more progress, getting stronger and more clever. At a certain point, though, the gardener starts creating outcomes that become large and beautiful. And the builder thinks, are they cheating? Like, are they doing it when I'm not watching? What's going on? And this thing gets bigger and bigger and more and more beautiful. And the secret, of course, is the gardener understands that other things can be alive and that some of those pebbles are actually acorns, and a pebble and an acorn are totally unlike each other. The acorn has the potential to be alive and to be this big, massive thing that grows on its own. It takes a little bit of tending and care, finding the acorns and watering them and putting them in good soil. But that looks very unlike the process of building that entire tree yourself. And it's a very different mindset. And it's highly atypical for us to use in serious business. Serious business is all about heroics and sprinting around and having frameworks and saying, I definitely know the answer. And the gardening mindset is much more about acknowledging I don't know the answer, and yet I still can assert that I can make good outcomes happen. A garden requires a gardener. A gardener is somebody who pulls weeds, trims things back, plants certain seeds in places where things might take hold, without saying that they built the thing. It takes time, it takes reacting to this living system, but you can do amazing things with it if you just recognize that you are not building that system, that there's an alive thing outside of you that you're interacting with.

Eric Jorgenson: Do you think there are- are there leaders or CEOs or founders that you would classify as archetypal gardeners today? Or do you think we're still far from that ideal? 

Alex Komoroske: I wrote an essay last year that frames the Sarumans and the Radagasts as two types of real magic. Most founders, most CEOs are canonically the former type, the Sarumans, the builders rather than the gardeners. I think there's a structural reason for that. Just to rehash the overall argument: we talk all the time about magic. Magic obviously isn't real in the physical world, like duh. But in the social world, it absolutely is real. The social world is where we make most of the decisions that then lead into the physical world. So in a very real sense, magic affects the physical world. And the essay describes two different archetypes of real magic that work, both of them for game theoretic reasons, but for totally different reasons. The first is the Saruman. Steve Jobs is the canonical Saruman. It's the heroic, powerful, insightful startup leader: somebody who, one, believes in the great man theory of history, two, believes that they themselves are a great man, and three, has enough initial success that leads others to believe that they might be a great man. And then this can become a self-accelerating thing where enough people believe it's true that they take actions assuming that it will be true, which causes it to become true. It's like a reality distortion field that follows these leaders. And the way it works game theoretically is a complete and total absence of self-doubt. This is one of the ways you get Steve Jobs style massive dent in the universe kinds of outcomes that create all kinds of amazing value. The vast, vast majority of people who attempt to use this magic are just jerks, annoying jerks that people don't like working with. But the ones that really make a difference use this magic. The second type of magic is the Radagast. And I don't know Lord of the Rings that well, except that I've watched the movies a couple of times. I actually asked Claude to help me unpack the back story. To be clear, I'm talking about Saruman before the Lord of the Rings movies, back when Saruman was just an industrious, powerful wizard, before his evil turn and descent into evilness, which Claude tells me came after a rich history of many thousands of years in that mode. But Radagast in the movies is the wizard who's in the woods, and it's like, is he even magic? And he's got bird poop on him, and he's high all the time or whatever. And the Radagast magic works for totally different reasons. This is gardener magic, and it works by loving everything and everyone around you and seeing seeds of greatness everywhere you look and helping develop those seeds of greatness into something larger than the sum of its parts. And to wield this magic, you fundamentally have to let go of the idea of getting credit for what you do. It's fundamentally an indirect effect. It's possible to do this very easily. It's possible to do this with very low risk, actually. The problem is that when it happens, people will say, oh, you just got lucky. You just happened to be there when that miracle happened. And no, it's like I'm farming miracles. I'm structurally creating the potential for miracles on a continuous basis. I can tell you why it works, but whenever it happens, it will always look like luck.
And if you're okay with that, if you're okay with the performance review system at your company not recognizing that effort, then you can create and do real magic. 

Eric Jorgenson: Who are great- I love the frame of miracle farmers. Who are the best farmers of miracles that you've seen? Like, who are the Steve Jobs? Or is it the nature of them that we don't know who they are because they're quiet?

Alex Komoroske: They're often indirect and behind the scenes. There are a number of good examples. And somebody just gave me a really good example the other day, and I was like, oh, duh, I should add it to the essay, and I forgot. But Steve Wozniak is a great example, Stewart Brand, Kevin Kelly, these kinds of people who inspire and catalyze something much larger than themselves. But by its very nature, it has to be, to some degree, a slightly behind the scenes kind of thing. The big, brash personalities almost inherently must be the Saruman types because it's so obvious. It's not like, oh, they just got lucky and were there. No, they moved the mountain. You watched them do the thing. Whereas the Radagast, if you aren't watching too carefully, you'll think that they were just along for the ride or just doing something totally unrelated. And so, I think on a structural level, one of the lines in the essay is that no Radagasts own their own helicopters. The only people who are wealthy enough to own their own helicopters are Sarumans. Which, by the way, is one thing I don't like about this metaphor: again, it emphasizes good versus evil. No, no, no, I mean the industrious, pre-corruption Saruman. This is a powerful force to create good in the world, too.

Eric Jorgenson: Yeah, the ones that came to mind for me, I don't know them deeply well enough to know if they fit your metaphor, but Dee Hock the- 

Alex Komoroske: Yes. 100%. Dee Hock, perfect example. Dee Hock was one of the people who I didn't know about before I worked at a payments company. And the more I learned, the more I was like, oh my God, damn, I want to meet this guy. Unfortunately, I never met him before he passed, but I knew a bunch of people who knew him quite well. And 100%, chaordic organizations, that's his whole vibe, 100%. In fact, I should just add that to the essay.

Eric Jorgenson: Yeah, it was a very organic sort of emergent management style that he had. And I think he, I mean, he had a grand slam with Visa and then just left and became, I think, a farmer, literally, like an orchard- just started tending his trees at the peak of his success, actually, just very quietly left and was just like, I'm good. 

Alex Komoroske: Yep, I got it. I'm comfortable with the impact they had. Yeah, a lot of the- this also happens in organizations. I forget where someone had this take on it, of people who understand the actual complexity of the organization that they're in tend to become hermits. They tend to remove themselves from the complexity of the thing. They can understand the complexity of the whole system so much that they can see how hard it is to influence and also, in Dee's case, I guess, feel comfortable that they have influenced it successfully and can feel comfortable leaving. But when you embrace the complexity of organizations and the world around you, the world is complex, that complexity is terrifying. When people experience it for the first time, it feels like staring into an abyss of like, if I acknowledge that this is correct and this is what's happening, then like nothing matters and nothing is possible to influence. And that is absolutely not the case. It's just what it feels like at the beginning. Once you recognize this inherent challenge of complexity that surrounds us all over the place, you realize that your actions, of course, matter. And if you do them correctly, your actions can compound and build on themselves and arc the world to a wildly different place than it would have if you wouldn't have done that. But people almost ignore it because it feels so scary, if it's true.

Eric Jorgenson: It's funny, I mean, that's a midwit curve, right? Like the people who are terrified of it and run away, and then the Jedis remove themselves from the system probably so that they can see it objectively and act upon it more likely from the outside because it's very difficult to really perceive the full complexity of the system as a cog inside of it. 

Alex Komoroske: Yeah, 100%. If you're only inside the system, you can't see it; the system sets your horizon, the end of the things that you can imagine. So what I typically talk about is that the people who are best positioned to have the largest leverage have one foot inside the system and one foot outside the system. They are able to see the system from the outside as a particular assemblage of a thing that can be influenced. They're also inside the system enough to influence it and figure out where the leverage points are within it. You need that point of leverage. If you're only watching from outside, there's no good way of influencing it and no good way of understanding its internal logic. And if you're entirely captured by its internal logic- systems, especially large organizations, are to some degree a machine that bends all the participants to its emergent logic, and you need to be able to see outside of it to understand that. If you've only ever worked at one company, you can get really distracted and kind of stuck in this weird position where the emergent reality of the company feels like everything. And when you've worked at multiple companies, you go, oh, wait a second, I can see how these things are similar, or this is one contingent fact within this company. When you only work at one company, you have this feeling of, this is everything, and if I were to be fired, it would be death. But it's not death, of course, because you have all kinds of know-how. And you can go to other companies and do other useful things. And one of the tricks to doing magic is to recognize that what feels like death in a given context is not death. And that allows you to take actions that look insanely risky to people who see them as actual death: oh my, I can't believe you would possibly do this thing that might get you fired. Yes, but it also might create significant value for the company and for its employees, for its users, and for society. And that's worth taking a swing at.

Eric Jorgenson: Yeah, it's fascinating. I want to come back to the more practical career moments a little later. But while we're on systems, I think, and I've thought for a long time how each person- how do you become a systems person? Like, every time I meet one, they have a different path and a different sort of curriculum. How did you come to know all these things? Because I think it's such a superpower, but there's no curriculum for it. 

Alex Komoroske: There isn't. Steven Johnson wrote this book called Emergence that I read. I had forgotten I had read it, but I was talking with him a few months ago, and he said, did you ever read my book Emergence? And I was like, oh my God, I did. And I realized it had a huge influence on me, and my slime mold deck was partially inspired by some of the things in that book. So I read about complex adaptive systems and chaos theory and stuff in high school. I built a little agent-based model of an ant colony one week at the beach. I did not like going to the beach. I was inside the beach house making this little agent-based model of ant colonies. It was in the back of my head for a while. My undergraduate thesis was on the emergent power dynamics of Wikipedia's user community: the sociotechnical systems of these user communities, how they worked, and how they made sense of the world. At the time, I thought it was a total waste of time because I was going to go be a product manager at Google and I didn't have a CS degree technically, even though I had almost a dual major. I was like, what a waste of time doing this social studies thing and writing my thesis on this weird thing. It makes me a weirdo in the tech industry. I kind of forgot it. And then you go into Google as an associate product manager, one of a class of 40 or so a year, and you're thrown into the deep end and told, hey, you're a product manager now. You're like, what? What does that mean? What do I do? And I helped run the program at various points over the years after that. One thing we realized was that something Marissa and others had done accidentally when they set up the program was that everybody was in a very competitive, challenging environment, but nobody was in direct competition for any scarce resource, and that allowed a lot of commiserating and learning and supporting one another. And the next year, the next class starts, and we would help create bonding moments across the classes. And so, the next class would come to you for mentorship. It's like, me? I have no idea what I'm doing. I'm surprised they haven't fired me. But they'd say, yeah, but first of all, you're very approachable because you're just one year older than me, and two, you have a year more experience than me. And so, I learned very early on that mentoring really helps. Mentoring is one of those things where it adds value directly. It's an end in and of itself. It can have a huge impact on someone's life and career. People bring you concrete challenges when they come for mentoring. It's not exactly the same as direct experience, but it's orders of magnitude richer than book learning. And also, a lot of the way that we know how to navigate complex environments in the real world is pre-linguistic, intuitive. And when someone wants to hear it, and you work to distill and abduct these ideas into words, you're often like, oh, that's why that works. And then you can apply it much more directly, so it's just like a win, win, win, win, win. So, I think a lot of product managers have this God complex thing, where, especially at large companies, you make a small tweak to a thing, and tomorrow 20 million users have done a different action, and it's hard not to think, I did that. I made that happen. And in a certain way you did.
But in product management, you actually have much less control over your product and its users than you think you do. If you're a platform product manager, then you know there's some layer of indirection: I do this thing, and it causes developers to do a thing, which then causes the outcome I want. So you know there's another party in it. I was the PM for the open web platform for a number of years. And that's one of the situations where you've got multiple browser vendors, all of whom don't really like each other that much and are constantly kicking each other under the table. And so, you're under no illusion you're in control. You're like, wow, I have surprisingly little leverage. How can I force other browser vendors to do this thing that's definitely good for the open web but against their business interests? And how can you do that times a hundred over the course of multiple years? And so, it forces you to grapple with systems thinking. And I did it intuitively. Like, okay, what are the incentive structures? How can I get these things to compound and go on? And later, I kind of discovered, oh, here's why this worked for game theoretic reasons, and here's some of the stuff I learned in college, some of those lenses. Later I uncovered the intuition I had developed and could explain it using systems theory and game theory. I find that for people who understand systems theory too early in their career, it's really a curse, actually, because you'll see extra steps ahead of what your boss sees. The number one rule, the emergent rule of all organizations, is pretend your boss is right, because your boss can fire you. And I don't mean- in general, fundamentally, your boss has to think that you're doing a good job. And if you understand something they don't understand and you're doing something like avoiding a problem they don't see, it can be a real problem for you because they'll say, no, simply do X. It's like, yes, but X will cause the opposite of the thing that we want to happen. So, I will do Y, which will cause 3x the outcome to happen. And at the beginning, you don't have a lot of credibility. I was lucky that I was getting promoted quickly enough and had earned credibility in the traditional way, so that people would be willing to listen to my kooky weird ideas and be like, I don't understand how he's doing it, but I see that he's making little miracles happen, I'll give him credit for it. But if you're early in your career, you don't get that. You're just the person who's doing a bunch of random stuff that nobody understands, who everyone thinks is high and mighty. And also, at the beginning, you're probably wrong. A lot of systems thinkers I've found who are early in their career will confidently state certain like [inaudible 19:23], and it's like, I don't think that's how that works. So if you're wrong as a systems thinker, you can be really wrong, and you can really disengage from what's happening.
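
For the curious, here is a minimal sketch of the kind of agent-based ant-colony model Alex describes building in high school. Everything in it (the one-dimensional trail, the pheromone rules, all the parameters) is invented for illustration rather than reconstructed from his actual model; the point is just the emergence, where no ant follows a plan but a reinforced trail appears from purely local rules:

    import random

    GRID = 60            # cells between the nest (cell 0) and the food (last cell)
    ANTS = 30
    STEPS = 2000
    EVAPORATION = 0.995  # pheromone decays a little each tick

    pheromone = [0.0] * GRID

    class Ant:
        def __init__(self):
            self.pos = 0
            self.has_food = False

        def step(self):
            if self.has_food:
                pheromone[self.pos] += 1.0  # lay pheromone on the way home
                self.pos -= 1
                if self.pos <= 0:
                    self.pos, self.has_food = 0, False  # food delivered to the nest
            else:
                # Biased random walk: prefer the neighboring cell with more pheromone.
                right = pheromone[min(self.pos + 1, GRID - 1)] + 0.1
                left = pheromone[max(self.pos - 1, 0)] + 0.1
                self.pos += 1 if random.random() < right / (right + left) else -1
                self.pos = max(0, min(GRID - 1, self.pos))
                if self.pos == GRID - 1:
                    self.has_food = True  # found food; turn around

    ants = [Ant() for _ in range(ANTS)]
    for _ in range(STEPS):
        for ant in ants:
            ant.step()
        pheromone = [p * EVAPORATION for p in pheromone]  # evaporation

    # A pheromone trail concentrates between nest and food with no central plan.
    print([round(p, 1) for p in pheromone[::10]])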

Eric Jorgenson: Yeah, I mean, the Dunning-Kruger Valley of systems thinking, because it's so complex, you can imagine it's actually so much longer than it might be and so much foggier. The feedback loops are so long in some cases and the cause and effect are such a black box that it's really, even if you are good or are becoming good, it's difficult to gain confidence in that. 

Alex Komoroske: A hundred percent. And what I found is some people who have some ability to do it will be taking actions that aren't actually viable in the context they're in but are right in some sense. And they'll be failing at it. And they'll say, well, the system is the… These people just don't understand the thing. It's like, cool, but you have to work with these people. This is a constraint. This is like saying, well, if gravity didn't exist, then this would work. It's like, cool, but gravity does exist. So that's a constraint to fit within. And so a lot of it is: how do you make the right thing happen in a large organization of people where all kinds of weird emergent effects show up that are completely outside of any individual, that no one individual understands or even wants to happen? You have to acknowledge those effects. And once you recognize that they're there, you can do all kinds of crazy magic tricks. But you can't say, this would have worked if only. It has to be a thing that could possibly change. And if it's not, then that's a constraint you must fit within.

Eric Jorgenson: So, let's talk about some of the magic tricks. I think it seems like an odd question, but I also think it's worth asking: What does systems thinking look like? What form does it take for you? 

Alex Komoroske: For me, often, the general technique is what we call optimizing for serendipity, farming for miracles. And what you do is you plant a small number of a diverse set of acorns. Acorns are small little ideas that are viable and might grow into something, and then you just invest energy in the ones that happen to grow. The ones that don't grow, you don't fret about; you don't go, oh, is this exactly going to work? You just plant a whole bunch and then respond to the ones that are going in the direction you want and don't hold on too tightly to any particular plans. You're creating the wave that you're surfing, to some degree. And if you do this properly and you plant a diversity of seeds and they're all cheap enough and have very little downside, the only downside being opportunity cost, you can get pretty good coverage of a number of interesting frames. For example, an acorn might be a project that you think is interesting and has potential network effects if it gets going. It might be investing in a person who you think has really interesting ideas: she's an amazing product manager who knows the space really well and can also see strategic, interesting things, so I'm going to support her and give her encouragement to explore a prototype, or spar on ideas with her to help her find the ones that are going to be really great. That's an example of an acorn: somebody where, I don't know what's going to happen, but this person has a bunch of really interesting insights that are unlike what I've seen from other people, and they can connect dots in a way I haven't seen before, so I think the world would be a better place if I help this person become an even more effective version of themselves. And so, it's a lot of that. There's a number of things you can do. A very, very tactical one in organizations: if you actually understand what people think, not the official thing they're supposed to think but what they actually think, you can often uncover, oh, wait a second, here's the answer that works. This is the only answer that plausibly fits all these constraints. These will often pop out to you, and people often don't share a lot of the constraints. I have this whole essay about constraints. Normal constraints everybody can see; those are easier. They can also be like dinosaur bones: they're concrete but they're buried, and you've got to unearth them, and once you do, you can point them out. Oh, my god, yes, okay, that's a constraint. But there's another class of constraints I call laser beams: they're real, they're hard to find, and even once you've found them, you'll forget that they're there. And these are things like: so-and-so is feeling like they're behind, they want to get promoted, they feel like they deserve a promotion and haven't gotten one. This is a thing you don't talk about very often, supposedly not an important constraint, but it kind of is, though. When people are feeling engaged and like they're growing, they do significantly better work. And so, these are the kinds of constraints that you have to take into account. Or, one of my rules of thumb at large organizations: if your idea isn't self-evident within 30 seconds of exposition to a new grad engineer, your idea is not viable. It has to be that clear.
If anybody could say no at any point, you need to find an idea that everybody can look at and go, yes, that makes sense, and you can scope it down to the point where everyone thinks it makes sense. And there's tactics that will help you find these ideas that can grow into something much larger. One-on-ones are magic if you can create an authentic relationship with somebody, connect with them. One of the weird tactics I would use when meeting someone new in an organization is, one, I would curse, drop the F-bomb within the first 30 seconds, and two, I would observe something embarrassing about the leadership team or something. I would say something that's subversive. And what this does is it sets an expectation from the very beginning of: we're going to be authentic and real here, and you can be real with me, because I just told you things that, if I threw you under the bus for whatever reason, you could throw me under the bus. Oh yeah, well, Alex said this subversive thing. And that from the very beginning sets you on a path where they'll often tell you what they actually think. Another thing you can do is what I call cold reading statements. They're statements that allow an interesting conversation to develop. One that works in almost every case in every organization ever is you say in a one-on-one: I believe in the official strategy and I think it's going to work. But man, are there a lot of challenges. If you say this statement, and this has been true in every organization ever, what will happen is everybody will say, yeah, I can't believe that they think that X thing is going to work or that Y is the right leader for that. You're like, okay. So you're getting this kind of disconfirming evidence that helps you get a full picture of how the thing works. And you have to be careful how you operate in all of this. A lot of these tactics are amoral, which is to say they are neither moral nor immoral. If you use these tactics to cause harm to somebody, or to do something that was self-serving but bad for the company or for its employees or whatever, that would be morally bad. So you have to have a moral compass in all of this and ask yourself continuously: if someone were to show me a video of this in 10 years in front of 100 people whose opinion I deeply care about, would I be embarrassed? And you want to do things that you'd feel not just unembarrassed by, but ideally actively proud of. This lens helps align you and makes sure you aren't doing something too focused on what's good for you in this moment in this role. You want to think more broadly: what's good for the organization, what's good for the company, what's good for society. All of these should be operating to some degree in the decisions you make constantly.

Eric Jorgenson: Yeah, I think that visualizing yourself from a third party is extremely useful. I remember, this is an Arnold Schwarzenegger thing. The documentary crew showed up and followed him around the gym and all of a sudden, he was PRing on everything. And so now he like mentally would use that trick over and over again, just pretended like a crew was following him all the time. And every minute of his day was like, if millions of people knew what I was doing right now, they could not deny me that like I'm going to become the greatest of all time. I was like, ah, cool. So, another question on the application of systems thinking here, are there times when acknowledging the complexity is actually the wrong thing to do? There are obvious truths where you should just act on the first order of information with the next most- like the whole mental model where a successful business is most often just taking the most obvious next step seriously and thoroughly and then figuring out the next most obvious next step. 

Alex Komoroske: Yeah, 100%. There are cases where, if the thing is going to die if you don't take some action in the next step, take some action, man. Do something. You've got to get the momentum. Momentum matters. And organizations of people get really antsy when they feel like they're going in circles. It's like being stuck in traffic. They hate it, and they get frustrated and terrified, in existential dread. And so sometimes doing a thing, even if it's the wrong thing, as long as it's not a dangerous thing, is a good idea. Keep on keeping on. It's easier to navigate something that's already moving than it is to move something that people feel is swirling. And sometimes, the lens I use for how organizations operate is kayfabe. Have you ever heard of that word?

Eric Jorgenson: Yeah, that's the professional wrestling term, right? 

Alex Komoroske: Yeah, that's right. It's an old carny word that's traditionally applied to professional wrestling, and to me, it's a thing that everybody knows is fake, but everybody acts like it's true. And kayfabe is one of the things that makes organizations work. If every time a leader proposed a new project and said, we're going to do so and so, someone raised their hand in front of everyone and went, this will definitely not work, then I can guarantee you it will not work, because everyone goes, oh shit, that's not going to work. So kayfabe is like a little bit of optimism. It can get dysfunctionally off the charts when you have too high-pressure an organization, where everyone's green-shifting their ideas or their current status just a little bit. Imagine it's a high-pressure situation, and your manager comes to you and says, hey, I'm doing the status update we're going to share with leadership next Thursday. Is your thing green or yellow? And if it's kind of yellow, but I'm on a path to getting it to green before next Thursday, I'm going to fudge a little bit and say it's green, because I don't want to worry them, and by the time they hear about it, I will have fixed it. And if you do this up multiple layers, it compounds. You can quickly get into a situation that's radically off from the underlying ground truth. But if you point out in that situation that the ground truth is different from the official plan, in some ways you're like, oh, I'm a hero. Well, you actually just destroyed everything, because now all of the structures of beliefs and plans were based on something that's not really correct. But if it's just a swirling chaotic mess and nobody knows what's happening and everybody is frustrated or whatever, that also destroys value. Sometimes organizations get stuck in situations where they are so radically off from the ground truth that they are destroying value as they execute. And in those cases, how do you plant seeds that will help them? I used to describe it as tickling the company. How do you tickle the company? You can't cause it to do something radically different by tickling it. But if you get the right belly laugh at the exact right moment, you might get the organization to go on the right path. There's a bunch of techniques I used to use, what I call safe subversion, where you connect nine of the ten dots and leave the last dot unconnected. Because if you connect all ten dots and say something subversive, the person you're implicitly critiquing, if they hear about it, oh, did you hear about this thing that said our strategy is wrong, they're going to come swinging. I mean, I would if I were them. So what you do is you give nine of the ten dots and you allow the reader to connect the last dot. And sometimes they connect it, sometimes they don't. Sometimes they connect it in a different way than you're envisioning. But when they do, what happens is the reader now is part of the argument. And they go, well, wait a second, that thing about the other company, that applies to us. It's like, oh, my God, what? Like, yes, I picked that example specifically because- but that allows them a feeling of co-creating.
I wrote a newsletter in a past life that was, now there's an external version of it actually, and it's about systems thinking in a large organization, and at one point, we had something that was sort of critiquing a class of product strategy, and some VP reached out to me after the essay, after that issue went out, and I was like, oh crap, this is the person who we were directly subtweeting, and they go- and I was like, they're going to be mad at me and say, why are you doing a newsletter on company time or whatever. And they said, Alex, last week's episode of that newsletter made me rethink my entire strategy. I realized, I think that applies to me. And this thing happened. And I was like, ah, yeah, I was sub-tweeting your specific product, so that landed. And they had understood it in a slightly different way because they had a different contextual understanding of it, and so they came up with something that was way better than I- I was wrong. I was correct…

Eric Jorgenson: Your solution was wrong. 

Alex Komoroske: Yeah. What I thought they should do was wrong. But the idea that something was off was correct. And so it helped them see, and they had a situated insight that was much stronger than some random person critiquing them, saying, I think you're an idiot. That's not going to go well.

Eric Jorgenson: That's beautiful. I mean, you were the butterfly's wings. Is there an art to it? I mean, are you always trying to make the change with the lightest possible touch? Like a minimum effective-?

Alex Komoroske: I typically try. I typically try. And I just find that you can plant a lot more touches and the likelihood that something big happens is reasonable if you can plant a number of them and then invest in the ones that seem to have traction. And this requires, you’ve got to- like one of my rules of thumb is you’ve got to survive to thrive. Like first you have to survive. If you are knocked out in round one, it doesn't matter what would happen in round two, three, four, or five, you're dead. You can't do anything. And so, in an organization, you have to be- if anybody ever asks somebody else about you or your team and they go, what does so and so do again? That means you're on the cusp of death. And so, you want to minimize the number of times that people ever ask that because you want everyone who looks to go, I think that that person or that team is at least minimally competent and is creating more value than they are costing the company. You want everyone to agree with that statement, which is actually a relatively low bar in a lot of ways. But my rule of thumb at other companies was like 70% of my time should be dealing with normal, like just do the thing everybody thinks that we should be doing, turn the crank, even if the crank's not attached to anything, but look, I turned the crank 10 times this month or whatever. And then 30% of the time would have been slop and random. Instead of having it just be slop and randomness, apply it very deliberately to specific bets that I'm making of people to invest in or ideas to pull on. And something in that 30%, one of those acorns will start growing. And then once it starts growing, it's natural to start watering it, investing it, and it might grow into a whole oak tree. 

Eric Jorgenson: Yeah. Are you able to get any more concrete about what some of those miracles you farmed were? The first set was helpful or the examples sort of at a high level, but I think… 

Alex Komoroske: So let me give a stylized version of a real example, but it won't be identifying, I guess. Imagine an entire product area of a large organization that is working on a thing where, if you ask every individual person behind the scenes, I can't make this work, does this actually work? It seems like it requires a miracle. Everyone would go, yeah, it doesn't work. It's like, okay, but you're still executing as though it does. What am I going to do? Stand to thwart this entire steamroller and get run over? I will definitely not be able to convince everybody, but I would definitely get run over. And if I participate and help move it forward, then maybe I'll get promoted for the intermediate work before the whole thing blows up. So it's like, okay, this thing is in what we call a supercritical state. It's a situation where the right inciting incident will cause this thing to shatter and then re-cohere. And if that's the case, how can you plant seeds, or Schelling points of ideas, so that when the whole thing shatters at some point for some unknown reason in the future, it can re-cohere around something that does make sense? And so, I was mentoring a PM in this organization who had been there for seven years, wasn't very senior in the organization, but was widely respected by people who knew him. And at one point, he came to me and said, Alex, I think if we could restart what we're doing in this product area from scratch, we would have done it exactly backwards from how we actually did it. And he had this visual, and I said, ooh, it's kind of like stalactites and stalagmites, that's great. You should develop this. You should take some time to develop a little seven-slide deck. And so, I worked with this person behind the scenes. I spent maybe 40 hours helping them story-workshop it: if this is too direct, these teams will feel threatened, so put a question mark at the end of that statement; you want to open with this thing, then establish this visual metaphor, then land it on slide six. And that will give people the Chekhov's gun, kind of like, ooh, this narrative is coherent, it makes sense. There's a lot of narrative tricks you can do that make a thing really feel coherent. And then they shared it, and it was passed around to a few people. During this time, you could sense a reorg was coming. You can just feel it. Everything gets a little bit frozen, and people get kind of cagey about stuff. It's nothing explicit. It's just lots of little signals that all start pointing in one direction. So, I could tell a large reorg was coming. And this person shared this document around. It was read by a number of people, and they're like, oh, that's a good idea. But nothing really happened. And then the reorg happens, and a new VP comes in, and the VP of course says to everybody, this organization is clearly not working that great, what's the one thing I should read? And everybody points them at this deck, which lays out the bones of a new strategy that is radically different from what the team has been doing in the past. And so that becomes adopted as the strategy. And so that's an example of planting a seed, sensing that something might be afoot.
And there's a non-trivial chance that that deck could have like not been particularly well received or spread that widely, or the reorg may have not happened. But for 40 hours of work on a thing that would be a massive change for the broader organization, I don't know, it seems like a reasonable trade-off.

Eric Jorgenson: And the beautiful, I mean, the very first thing, which was the first assumption of like it's okay if this thing implodes, like this plant might die in the gardener's situation, but what do we do after it does? We may not be able to save it. It's dubious whether we should try even. 

Alex Komoroske: And me standing up in front of everybody and saying, hey, I think you're all a bunch of idiots and this thing doesn't work? I just know I'd be booed out of the room, tomatoes pelted at me, and I'd probably be wrong. I'd probably be missing some significant thing. So this opens up the possibility for others to cohere around the idea without pushing it, without holding it too tightly as a thing that must happen. One of the tricks of doing miracles is you just have dozens of candidates at all times. And people don't see the ones that don't grow. And they do see the ones that do. And so, if they're cheap enough, on the side you can have tons of them. And it's funny, I write a whole bunch externally. Talking and writing, to me, is a way of thinking through ideas. And a bunch of people will talk to me in my office hours and say, I aspire to blog more often, and I'm just nervous; I want to only say things I think people will find really interesting. And I'll say, good news, nobody's going to read what you write, unless you say something controversial, which, don't do that. If it's not any good, no one will read it. And if it's good, people will read it. And so it's like a self-capping downside. So if you do this thing and it's like, oh, someone will look at this and realize they made the wrong decision, well, if it was super cheap and no one pays attention, who cares? It doesn't matter. People aren't judging you on that; they're judging you on the things that do work. And so, I think people often get stuck in these situations where the risk and downside is miscalibrated in how they're thinking about it.
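
To put rough numbers on the capped-downside logic Alex describes, here is a toy simulation; every figure in it (the cost, the 5% success rate, the payoff) is invented, and the point is only the shape of the bet, where most acorns fail quietly and the rare win pays for all of them:

    import random

    random.seed(0)

    BETS = 100      # cheap acorn-style bets planted
    COST = 1.0      # each costs one unit of time or effort
    P_GROW = 0.05   # most quietly fail
    PAYOFF = 100.0  # the rare one that grows pays off big

    outcomes = [PAYOFF if random.random() < P_GROW else 0.0 for _ in range(BETS)]
    profit = sum(outcomes) - BETS * COST

    print(f"{sum(o > 0 for o in outcomes)} of {BETS} bets grew; net profit {profit:.0f}")
    # Expected value per bet is 0.05 * 100 - 1 = +4, even though 95% of bets fail,
    # and observers only ever see the handful that worked.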

Eric Jorgenson: Yeah, I mean, the Kahneman Nobel Prize finding, loss aversion, it's like we weigh the social downside so massively, the perceived expected value of even a small amount of embarrassment is just so scary to people that it prevents them from doing things that are, in all rationality, zero downside, all upside.

Alex Komoroske: I said like, nobody's going to care. Nobody's going to give a shit. Like you do some embarrassing thing, you're like, for years when you're going to sleep, like, oh, remember when I did that thing that was embarrassing, and nobody else has thought about this situation ever again. Nobody else cares. I find that I am dysfunctionally conscientious. I am deeply driven, like the monster that drives me is a deep-seated need to be liked by people. And this is a thing that I’ve worked through in couple's counseling quite a bit for the last decade. But it also helps drive you to understand other people's perspectives. That's one of the ways I've channeled that energy. I still find myself getting stuck in situations where I'm easy to manipulate for that reason I guess. I wear my heart on my sleeve and I'm... but understanding how these things work and recognizing, when I applied for... when you apply for a job that you really want, like your dream job, the thing that people default to do is I want to show them how well I fit with exactly what they want. But what if you don't fit exactly what they want? And going into it, you can try to show them exactly how you fit, even if it's not a good fit. Or you can say, listen, I do not fit in your normal criteria. Here is what I can do and what I think I'm really good at and what I think will be a useful thing to this. And if it doesn't work, whatever. It was like, you're never going to see that person again, and they're never going to talk to you again, which is basically the situation you started in. And if it does work, they'll hire you for you, not for the person they want you to be. And so, a bunch of these tactics are like, I don't know, just about recognizing the actual cap downside of the reality of this person, other person not caring, or it's one hour of your life that's gone. 

Eric Jorgenson: Which in a way is, maybe I'm over extrapolating, but in a way is like systems thinking in that it's just focusing on the second order outcome, not the first. 

Alex Komoroske: So here's the trick. I agree. Systems thinking is almost defined by its multi-ply thinking. And this allows you to then call non-systems thinking one-ply, which brings to mind rest stop toilet paper or something that's barely effective. And part of the problem with multi-ply thinking is, first of all, it's extremely hard to coordinate with the multiple individuals you're working with, especially against a cacophonous background where everyone's running around doing a bunch of stuff. It just takes time to think through the implications that are not obvious. Two, if you get any of the earlier plies wrong, your entire conclusion is incorrect. So you can mess it up significantly. There are a bunch of tricks, though, that allow a lot of the details to wash away. If you have one thing that has linear returns and one thing that has compounding returns, it kind of doesn't matter, oh, what's the rate of growth of the linear thing? It doesn't matter, y'all. On a long enough time horizon, if this one truly is compounding and is also viable, this one will win. So all the details wash out, and who cares? But people focus on them anyway. I can't tell you the number of times I've seen in large organizations, just the ones I've been a part of, where people will sit there looking at these spreadsheets and they'll say, well, we've got a spreadsheet that tells us to expect 20 million daily active user growth next month. And they'll sit there debating, hey, this field right here, it says 20.8, should that be 20.9? It's like, guys, what are you talking about? This is all a comfort blanket. It's all made up. This is not actually the fundamental thing. What matters more is the things that have a compounding return. I worked on a large product, which I won't name specifically, that was always trying to grow daily active users, and at one point, they had these spreadsheets: okay, you're responsible for 5 million daily active users, and you're responsible for 22.3 million daily active user growth, or whatever. And there was one year where the politics were so bad, none of the VPs could agree, it was bitter fighting. And so that meant that all the actual product managers and engineers and designers were like, I guess we just work on the P2s? There's no direction on what it is that we're supposed to work on. And that was the year that had the highest amount of daily active user growth. Because it was a swarm of P2s. It turns out if you take a thing and you make it work better- one way of looking at product debt is: how different is how your product actually operates from how your user thinks it operates? That's product debt. And so when you sand this down, when you clean up little inconsistencies across features, nobody would ask for this explicitly, that's not anybody's individual problem, but collectively, it creates a very different vibe for the product. What's funny is after that happened, the leadership team said, well, what was the heroic thing? What was the one thing that solved it? And they researched and they tried to understand, and they couldn't find a single thing that caused it. So they go, I guess it was an anomaly and nothing happened.
Or maybe it turns out that like allowing a bunch of people who care about their products to like invest in the things that they suspect will make their product better without some overarching plan is actually a pretty good plan for a product that is itself viable and growing at an accelerating rate. 
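
A quick back-of-the-envelope sketch of the "details wash out" point, with made-up rates: even if the linear plan gets a generous fixed gain every period and the compounding one only a modest rate, the compounding curve wins on a long enough horizon, so debating the linear plan's exact numbers misses the shape of the curve:

    linear, compounding = 0.0, 1.0
    LINEAR_GAIN = 100.0  # generous fixed gain per period
    GROWTH_RATE = 0.10   # modest 10% compounding per period

    for period in range(1, 200):
        linear += LINEAR_GAIN
        compounding *= 1 + GROWTH_RATE
        if compounding > linear:
            print(f"compounding overtakes linear at period {period}")
            break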

Eric Jorgenson: You could have just fired all the VPs and let people self-organize.

Alex Komoroske: And I understand that self-organization has- there are certain problem domains where it works really well, and others where it doesn't work very well. So for example, once you have a thing that's already roughly working, self-organization is really good at hill climbing on that thing: lots of different people trying out different things. And if you have something where you just need a little teensy, weensy thing to get started that will then have momentum you can pick up on afterwards, that also works reasonably well. But if you want a medium-sized outcome that's coherent on a medium-sized time horizon or whatever, self-directed organization is not going to work, because you're never going to get anything. I happen to believe that the vast majority of medium-range projects that people do are complete bullshit and will never actually work anyway, and so it'd be better just to take all that energy and put it into that first kind of thing. There's a phenomenon that some folks who used to work for me, with consulting backgrounds, talked about, called the Hairy Back phenomenon. Have you ever heard of this? In any organization, they'll show: oh, our numbers are going down, but here's our strategy to make them go like this. And they'll always have this compounding return once we enact our brilliant plan, and of course it never works. It never does anything. So, if you chart all of the plans over the last 10 years, all of the individual things, you'll see just the hairy back of this thing as it slowly declines. Every plan is like, well, this one is going to work. The reality is a bunch of these things are structural. If you have a thing that has linear returns or sub-linear returns or is fundamentally some kind of dying thing, it doesn't really matter what kinds of linear heroics you do. The fundamentals of that thing don't change. And finding a thing that has network-effect kinds of possibilities and is viable to start is extremely challenging to do. A lot of times, you don't know something is viable until it reaches and reacts to the real world and is seen as useful and is used in the real world. And when you assemble these things saying, this will definitely work, you put it together- I have an old essay on how a working product, a working ecosystem, is like a roaring bonfire. So you want to make a new bonfire, and if you have a lot of resources, what you do is you get the best wood, and you stack it up real tall, and then you take your flamethrower, and you're like, wow, look at all this flame, it's amazing. It's like, no, the whole question is, did the wood catch? Is this fire self-sustaining, or is it not? And you actually can't tell while the flamethrower's on, because look at all that flame; is it from the flamethrower or is it from the wood actually having caught? It's much easier to start a growing bonfire by starting with a small flame that is working and grow it incrementally and nurture that usage. It's much harder to do, and it kind of goes against the heroic myth of: of course, if you simply thought harder, planned better, executed better, this would all work. I think the reality is that doesn't work nearly as well as people intuitively think it does. By the way, I'm not saying that this systems thinking approach is always better.
I'm just saying that, in the practical reality of industry, across all the industries I'm aware of and viewed from various angles, everybody seems to think the centralized-planning approach works orders of magnitude better than it actually does. It's obvious if you look at the outcomes you can see. And yet, if you as an individual employee are choosing between one thing that probably will work, but if it does work will get you no credit and leave nothing to show for it, and one that probably won't work, but where you'll definitely get credit for trying, because it will be very obvious what concrete actions you took and why they plausibly would have worked, by default you will pick the latter, because that's the one that is least likely to get you fired. Why stick your neck out on a thing that might not work? If it doesn't work, you have nothing to show for it, and they go, were you just sitting there twiddling your thumbs? What were you doing? So everybody would rather live in a world in which planning works in some significant way; everybody has a slight preference for that, and we get these emergent outcomes where everybody pretends it works significantly better than it does. 
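The hairy back is easy to make concrete. Here's a minimal Python sketch, with every number invented for illustration: a metric in structural decline, plus a fresh plan each year that projects compounding returns off wherever reality currently is.

```python
# Minimal sketch of the "hairy back" chart: a slowly declining metric,
# overlaid with each year's plan projecting compounding returns that
# never materialize. All numbers are invented for illustration.

def actual_metric(year: int) -> float:
    """The real trajectory: linear decline from a baseline of 100."""
    return 100.0 - 4.0 * year

def planned_metric(plan_year: int, year: int) -> float:
    """Each plan starts from wherever reality is and promises 20% compounding growth."""
    return actual_metric(plan_year) * 1.20 ** (year - plan_year)

for plan_year in range(10):
    projection = [round(planned_metric(plan_year, y), 1)
                  for y in range(plan_year, plan_year + 4)]
    print(f"Year {plan_year}: actual={actual_metric(plan_year):5.1f}, "
          f"plan projects {projection}")
```

Plotted together, the optimistic projections sprouting off the declining baseline are the bristles of the hairy back.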

Eric Jorgenson: Yeah, the grandiosity of the plan should ring alarm bells instead of giving you comfort. I think the systems book that spoke to me the most, of the ones that I've read, is The Systems Bible by John Gall. 

Alex Komoroske: Oh, yeah. I have not read that one. 

Eric Jorgenson: I find it absolutely hilarious, and it's what showed me this way of thinking a little bit. There's a systems law in there: large systems that work invariably came from small systems that worked. They were not designed and started as large systems. And I find that extremely- you can see it all the way through the history of company building and entrepreneurship, and probably projects within large companies too. 

Alex Komoroske: All over the place. And this is what's so funny to me. At one of the large companies I used to work at a number of years ago, the best predictor of a project that was going to fail was how much open headcount it had, because that demonstrated it was growing at a very quick rate, probably significantly beyond what the underlying traction of the product merited. That was the best predictor of a thing that was going to catastrophically fail within the next year, and it happened almost every time. And so people compare the bigness of their plans and go, my plan is bigger than yours. Okay, that means your plan is worse than mine. But everyone competes on the bigness of their plans, and the biggest one wins. One of my moves, often: if you have a lot of different big plans that are all gunning for the same turf, like in five years they'll all overlap, and one of them's got a ton of political momentum right now and says, I reserve all of this territory, you can go, okay, we'll take this shitty little corner over here that nobody wants. And you know the likelihood that they die, that that project gets defunded or whatever, is significantly higher. If you do a small thing and say, I'm going to set this relatively small goal that I think I can achieve, that we all agree nobody else wants but would be valuable, that gives you something to survive on, to then maybe, potentially, grow into some of the territory the others staked out before.

Eric Jorgenson: Survive to thrive, on a systems level or a project level or a company level. I'm going to send you a copy of this book because it's my very favorite; it's one of my top five books of all time. 

Alex Komoroske: Is it not the Systemantics book? It’s a different one?  

Eric Jorgenson: Yeah, it's a rewrite of Systemantics. 

Alex Komoroske: I have Systemantics on my physical bookshelf but I do not have that one. 

Eric Jorgenson: It’s the third edition of Systemantics. They just renamed it.

Alex Komoroske: Oh, that also explains why you can't find Systemantics except at auction for very old, out-of-print books. Okay, so I should look for... 

Eric Jorgenson: Yeah, look for The Systems Bible. It's incredible. So, I think you've mentioned a few essays in here, plus the slime mold deck you mentioned previously. Just the titles of these set my curiosity wandering, so maybe I can pull your string on a few of them and have you talk us through them. But we should start with the deck, because slime mold is a term I'm not familiar with. 

Alex Komoroske: So slime mold is- I originally learned about it, I believe, from Steven Johnson's book, Emergence. It's a classic example that shows up in a lot of systems thinking and complex adaptive systems stuff. It's a colony of single-celled organisms that operate as a coherent individual. Often they just kind of spread out, and an emergent structure finds the food or whatever, but under stress they can cohere into a coherent entity that works together and creates a stalk. So they are the classic complex adaptive system that has no central planning or authority and yet has coherent outcomes. And by the way, the writing I do is often frames and riffs I've used in conversations that lots of people have responded to. If lots of people respond to a frame, I'm like, okay, I should write that down. One of the rules of thumb for something that will go viral, and I don't mean viral on TikTok, I mean viral as in it will stick in people's brains, is that at the very beginning, if people who are very unlike each other all find it interesting or intriguing, that's a good sign it will spread out to be very large. That's one of the best predictors of- sorry, my kids are walking in. So the intuition is, at the very beginning, if you're looking at a social network and only people in the same clique are resharing it, then the ceiling of people who will find it valuable is relatively small. But if lots of very different people who are not in the same subnetworks all share it, the likelihood is that it circumscribes a much bigger, sort of, potential audience... 

Eric Jorgenson: Addressable audience.

Alex Komoroske: And so, if you find, oh, the salesperson likes it, and the VP likes it, and the junior person who just came out of college likes it, and everybody else too, ooh, that's a good sign that it's onto something. With the slime mold frame, even before I wrote the public version, I knew it attracted people, because slime has this kind of inherently subversive quality to it, I guess. That's one of the hooks: the idea that he's saying organizations are like slime mold. Gross. He's disparaging organizations. And yet I'm talking about how amazing an organization can be once you understand this emergent phenomenon, as opposed to trying to fight it. 
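That early-sharer heuristic can be made concrete with a toy score. Here's a sketch of one way to do it, not anything from Alex's deck: treat the communities of the earliest sharers as a distribution and use its entropy as the signal. The community labels and example numbers are invented.

```python
import math
from collections import Counter

def share_diversity(sharer_communities: list[str]) -> float:
    """Shannon entropy (in bits) of the communities early sharers belong to.
    Near 0 means one clique is resharing; higher means very different people
    are picking it up, the sign the idea may spread far beyond its niche."""
    counts = Counter(sharer_communities)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# One clique resharing vs. a mix of very different people:
clique = ["designers"] * 8
mixed = ["sales", "vp", "new-grad", "legal", "designers", "sales", "vp", "new-grad"]
print(share_diversity(clique))  # 0.0
print(share_diversity(mixed))   # ~2.25
```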

Eric Jorgenson: Okay, let's see. I'm scanning your essay list... how about the Iterative Adjacent Possible? 

Alex Komoroske: So, this is one that gets at the question of why people often accuse systems thinkers of being nihilists. One of the reasons is that the adjacent possible, a design-thinking frame, is the set of actions within your reach that, if you did them, would almost certainly work. In the tech industry in particular, we act like our adjacent possible is massive, like we can explore this whole huge region. And when I say, actually, your adjacent possible is quite small, the set of actions you can know almost certainly will work is much, much smaller, people are like, you're a nihilist. And no one's ever said it that angrily to my face, to be clear. But the reality is, once you recognize that, yeah, it's a bummer that your adjacent possible is relatively small, you also see that you have actual, true agency among the things in it. You can choose, among the things you might do, which ones you actually will do. And after you make your choice, the world reacts and the universe changes, partially in response to what you just did, and a new set of moves becomes available. So if you choose consistently, you can arc to a wildly different outcome than it looked like you could at the beginning. And this comes from embracing the fact that, A, your adjacent possible is relatively small, but B, and most importantly, it's iterative. You have multiple iterations at this, and they are not independent; your actions in one affect the ones later. And this is why I find systems thinking to be a very empowering lens, even if it's often seen, kind of naively, as saying everything's impossible and nothing can be planned. 

Eric Jorgenson: Yeah, I also find it a little relaxing. When you wake up today or this week and say, all right, we're going to make a move, make it hard, and see what happens, it doesn't have that sense of stress and dread that medium-term planning has, of, oh my God, we have to master-plan and orchestrate every action that's going to take place over the course of this year. Some of the friends and entrepreneurs I have who built the most impressive companies just took the most obvious next step, day after day, week after week, and there was no master plan at all. But what they've built turns out beautiful and complex and robust and unique precisely because there was no logical master plan at the beginning of it. 

Alex Komoroske: I do find that you need to have both to some degree. If you just follow the local adjacent possible, you can iterate yourself into a corner. I heard, and I don't know if this is true, someone just told me this, that some of the creators of Agile, which is obviously a brilliant set of tactics, quite commonly used, that focuses on short feedback loops and iterating through adjacent possibles, started a product with a very big potential user base and iterated themselves into an extremely specific subniche that was obviously not valuable, because that was the strongest gradient. So what you want is an approximate North Star: a three-to-five-year vision that's like two pages. It shouldn't list any specific teams or any specific parts. No pixels on it, because it's easy for a bad idea to hide behind pretty pixels. Just something that sketches out a thing such that everybody who reads it, every single person, says, yeah, it's plausible; if that happened, it would not require miracles. You're looking for the legal person, and the person who's tried this at two other companies before, to all look at it and go, I could see how that could work. Not saying it will, but I could see how it could. So everyone agrees it's plausible, and also everyone agrees that if we did it, we would high-five. Because sometimes you can have a plan where, yeah, it's going to be a really long slog, and at the end we'll have basically nothing to show for it; don't do that plan, that's not interesting. You want something that's like, whoa, wow, the world will be different in some meaningful way. So now you have a North Star that you believe to be coherent, and you reduce down to: what's a step in front of me that seems viable? If you do that without the North Star, you'll just follow whatever has the strongest gradient. Instead you ask, what is the thing that has the strongest gradient in the general direction of the North Star? You use UXR and other tools to discover the interesting little steps that are plausible in your adjacent possible, and you use that as a consistent bias to arc towards something. If you don't, you either get stuck in a niche or you random-walk through the problem domain, and you have nothing that adds up to more than the sum of its parts when you look up later. So I think you need both of those, and they inform one another. As you learn more while walking, you'll look at your North Star and go, ooh, wait a second, this part actually is a miracle, but if you tweak it like this, it could work. So you're constantly updating your understanding of the North Star. Maybe it slides across the sky pretty significantly, but there's always some kind of long-term coordination point that you're sighting off of to eventually converge to.
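The interplay between a small adjacent possible and a North Star is easy to see in a toy search. This sketch is illustrative only, with an entirely invented payoff landscape: a walker that can only take adjacent steps either follows the strongest local gradient and gets stuck in a niche, or weighs each step by progress toward a long-term point and arcs somewhere else.

```python
import math

# Toy model: each turn you can only move to adjacent cells (your small
# adjacent possible). Pure greed follows the strongest local payoff
# gradient; the biased walker weighs local payoff *and* progress toward
# a North Star, so repeated choices arc somewhere coherent.

NORTH_STAR = (9.0, 9.0)

def local_payoff(x: float, y: float) -> float:
    # A strong local gradient pulling toward a niche at (2, -3).
    return -math.hypot(x - 2.0, y + 3.0)

def step(pos, bias: float):
    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    def score(m):
        nx, ny = pos[0] + m[0], pos[1] + m[1]
        toward_star = -math.hypot(nx - NORTH_STAR[0], ny - NORTH_STAR[1])
        return local_payoff(nx, ny) + bias * toward_star
    dx, dy = max(moves, key=score)
    return (pos[0] + dx, pos[1] + dy)

# bias=0.0: pure gradient-following ends stuck near the niche at (2, -3).
# bias=2.0: a strong, consistent North Star bias arcs toward (9, 9).
for bias in (0.0, 2.0):
    pos = (0.0, 0.0)
    for _ in range(12):
        pos = step(pos, bias)
    print(f"bias={bias}: ended near {pos}")
```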

Eric Jorgenson: Very interesting. I like that. Okay, let's... You spent quite a while at Google, and this sort of echoes back to your point about only ever having worked one place. Did you see that dynamic in yourself at Google, of feeling like it was death to leave, like this becomes the known universe? 

Alex Komoroske: I did. But as an APM, one of the perks was you got management coaching right out of undergrad, which is just insane. I think the world would be a better place if everybody had a therapist and everybody had a management coach; the world would just be a thousand times more productive. Back in 2011, when a bunch of my friends were leaving to go to other big companies or startups, and Dropbox was really hot at the time, I was asking, am I staying because it's easy and because I'm terrified of the unknown? My coach said, Alex, what you should do is, once a year, allow yourself to be recruited by another company. And stop once you get to the point where you realize you're wasting someone's time, like when you realize that no matter what they could possibly offer, you would not take it. That was really good advice. It helped me stay, but instead of staying passively, you stay actively. You put yourself up on the fence and then say, yeah, you know what, I'm going to stay. So I felt like I stayed at Google very intentionally. I did feel that in every large organization, the internal social logic of that organization fundamentally comes to dominate. The social complexity will absorb all the space it possibly can, so it gets to a kind of efficient frontier where the company is just barely producing net new value, if that makes sense. One of the ways you can tell: I think Andreessen had this quip that if you can't tell external visitors how to get to your office, because the map doesn't make any sense unless you're inside the company with access to the internal map of it, that's a good sign your company's grown to the point where you can't think externally. Every large successful organization has this happen; it's almost like the basal metabolic rate of the organization gets higher and higher as the internal social complexity gets larger and larger. Tainter has this argument for the collapse of societies. I haven't read that book, but I understand it makes a similar kind of argument: a society gets to the point where it can't handle its own complexity, can't do anything more, and ultimately collapses. Every organization has that kind of vibe, and everything within the organization feels like kayfabe, this thing of, this is the thing that's definitely happening. From outside you're like, okay, but definitely not, though, right? What's going on? And you get so hung up in it. So I think it's good and important to spend enough time in organizations for the long-term implications of your actions to catch up with you. It's really easy to come into places, shake up a bunch of stuff, and then move on. It's like, I fixed that place, and the people who followed me, man, a bunch of idiots, they couldn't fly the plane. No: you're the one who took the plane and pointed it at the ground, and then jumped before it smashed. You were the one who messed that up. So you want to make sure that you are absorbing the indirect effects of your actions, not insulating yourself from them, as much as possible. But I do think it's good to get different experiences in different companies. 

Eric Jorgenson: Yeah. Well, I think it's very interesting that, having given yourself this black belt in navigating large organizations, you are now branching out and starting something new. 

Alex Komoroske: Yeah, I'm terrified. That's a totally different thing for me. But it's one of those things where, at a certain point, when you have an idea you can't stop thinking about, one that feels like your destiny, you kind of don't have a choice but to take a stab at it. And I think... 

Eric Jorgenson: How long- describe the seed and how it's grown with you. Where was the threshold where this idea started to feel like your destiny? 

Alex Komoroske: There are three things about AI that I feel pretty strongly about; I put out an obscure public document on this a few months ago. One, I think AI is legit. This is not some flash in the pan; this is not something that will pass. I think LLMs are legitimately a step change in the kinds of stuff we can accomplish as a species, and we will do all kinds of interesting things with them. I think LLMs are like magical duct tape that is principally composed of the crystallized intuition of all of society. That has got to be useful for something, even if we haven't found the use for it yet. The vast majority of things we've attempted to do with it as a broader industry, we'll look back on and be like, we were using it for what? We're just not using this new material in the right way yet. But I do think it's legit. Two, I think that companies that take AI for granted will have an interesting advantage. In a gold rush, the vast majority of gold mines do not make money. So if you want to make money in a gold rush, you do a meta move. One common meta move is you sell pickaxes; you make infrastructure. That means no matter who succeeds, there's a pretty good chance you are also making money. It's a well-known approach, and tons of companies are doing it. A second meta move, less well-known, is to figure out what will be necessary once gold is flowing through the economy. Once gold is prevalent, once you can take it for granted, what will happen? I think you can now safely take the power of large language models for granted. A few months ago, it looked like OpenAI was so far ahead; was anyone else even going to reach GPT-4-level quality? A good friend gave a talk a couple of days ago and said GPT-4-level quality is now a commodity; we have multiple models that meet or exceed it. If you haven't used Claude 3.5 Sonnet, it's extraordinary, extremely good, and they just released it last week. This is amazing because it demonstrates that there isn't going to be one company with a total edge on this. There's going to be lots of competition driving quality up and cost down, which means that if you benefit from having high-quality generalized models, you will be in a good spot. The third assertion about AI is that I think ecosystems will find the new, interesting types of value in this world. The metaphor is: imagine the tech industry is on a continent today. We're in the late stage of the app/web paradigm; consumer hyper-aggregators and vertical SaaS are the two sustainable businesses that are radically, wildly successful. Every square inch of the continent we're on is carved up, spoken for by some entity, to the point where they're factory farming, maybe even strip mining, for value. We're getting less and less value because we're in the late stage; we've extracted most of the value on this continent. Now imagine that the sea level drops by ten meters, by some significant amount. At the very beginning, everyone rushes off into the mudflats and goes, oh, wow, there's a shipwreck with a bunch of gold I can get, or, wow, this would be a great place to plant a rice paddy and get value that way. 
But the real value in that situation is in the new continent that is now above water somewhere, who knows where, on the rest of the earth. And how do you find that? You don't know where it's going to be, and it's going to look totally unlike the continent you're on. So the best way of finding it is a swarm of individual explorers going off in every direction. When they find something, they're going to tell you about it; they're going to say, holy shit, I found a whole bunch of land over here. So if you are adjacent to a swarm of exploration, the likelihood that you find the interesting new continents of value, the radically different ones, is significant. A lot of what we've built in computers and tech over the last 30 years presumes a certain kind of computation, and LLMs are radically different. They're squishy computers. They do not operate like normal computers; they are wrong in a number of cases. People try to use this duct tape to build big factory-farming tractors, and it's like, yo, it's not going to work for that. The joke in product management is always that when a product looks 80% of the way done, it's actually 20% of the way done, because the last 20% takes 80% of the time. With AI, it's more like 95-5. You get a really amazing demo very quickly, and then there's the last 5%: 5% of the time, it punches the user in the face, so we're going to work on fixing that, and you spend 95% of your time trying to cut that down. The other thing people do a lot with AI is use the models as oracles. These things are probabilistic reasoners, vibes-based reasoners; they are doing a facsimile of reasoning, not novel reasoning, which I would argue, by the way, is true for most humans most of the time, too. We engage our System 2, the expensive, rational-thinking one, relatively rarely, when there's a new challenge. We try an idea and implement it in the world. If it works, other people watch, go, huh, and replicate it or do it more. Good ideas tend to stick around, bad ideas evaporate away, and society caches the answers to the good ones, and then we can draw on them intuitively: I don't know, that's what everybody else does, I'll just go do that. LLMs are like technical caches of society's good answers to past questions, with totally comprehensive coverage, which is what allows them to do absolutely bonkers things. But if you use them for something that requires truly novel reasoning, it's a miracle they can do some of those tasks at all. So when you build a product that assumes the LLM is an oracle, and the oracle is wrong in a given use case, it punches the user in the face and you have no recourse; there's nothing you can do to fix it. If instead you use LLMs as magical duct tape, and you use them to build a kind of platform that lifts up the floor of what humans can do with their extra bit of judgment, you can have something much more resilient that creates value, even given that LLMs will never be perfect in every use case, even when they appear to work nearly perfectly in a given domain. 
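One concrete reading of oracle-versus-duct-tape is a difference in control flow, which a toy sketch can show. Nothing here is a real product's code; `ask_llm` is a hypothetical stub that is deliberately wrong some of the time, the way a real model is.

```python
import random

def ask_llm(prompt: str) -> str:
    """Stand-in for a real model call (hypothetical); occasionally 'wrong'
    to mimic a model that is excellent but not perfect."""
    return random.choice(["good answer", "good answer", "good answer", "bad answer"])

def oracle_style(task: str) -> str:
    # Oracle shape: commit the model's answer directly. When it's wrong,
    # the user gets punched in the face with no recourse.
    return ask_llm(task)

def duct_tape_style(task: str, choose) -> str:
    # Duct-tape shape: the model proposes several candidates and the
    # human's judgment is the floor. A wrong candidate costs a glance.
    candidates = [ask_llm(task) for _ in range(5)]
    return choose(candidates)

prefer_good = lambda opts: next((o for o in opts if o == "good answer"), opts[0])
print(oracle_style("summarize"))                  # sometimes "bad answer"
print(duct_tape_style("summarize", prefer_good))  # almost always "good answer"
```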

Eric Jorgenson: That's very fascinating. So, without prying about the nature of the startup in particular, because I know you want to keep a lid on that for now, I want to just continue extrapolating the vision 10 or 20 years into the future, sort of writ large, for humanity. It doesn't have to be constrained to AI, though I know that's one of the things you've thought about a lot. I just think this is a really fun way to cap off conversations with people who spend a lot of time thinking about the future. 

Alex Komoroske: Let me sketch out two different worlds. If you stop somebody on the street today and ask them to imagine the canonical piece of software, the answer you'll get, no matter who you stop, is most likely an app on their mobile phone, something like Instagram. And I think that's kind of a bummer. One, apps are monolithic; they are one-size-fits-all. Two, they're non-composable; they don't meaningfully interact with any other applications on your device. And three, they're only allowed to exist if one of the big OS platforms allows them to exist, which is kind of bonkers to me, that we as a society allow that to be the situation. To me, software is alchemy. It's human agency extended beyond ourselves. More than that, it allows us to build things that can interact with things built by people we've never met, and work in ways none of us had foreseen. It has this combinatorial possibility of human agency. It's a magical, amazing thing. And we somehow satisfied ourselves that the platonic form of this was going to be a couple of dozen little boxes that we pour some of this magic into, and that was it. Now, with AI coming along, this magical duct tape of society-scale intuition, everyone just default-assumes that the outcome will be that we're all locked inside of a box with a super-god AI Clippy. People aren't sure whose Clippy it's going to be, but that's the thing people resign themselves to. It's like, what? That sounds terrible to me. What if instead we use this magical duct tape to escape the box? What if we allow software to be fluid and malleable and bespoke and personal, not just stuck into one-size-fits-all, ready-made boxes? I think we have the potential to do that. A lot of technology today leans into a passive experience for consumers: sit back, get something curated, infinite scroll. I think technology should be active. It should help people create and use hand-tuned tools to extend their agency, in collaborative ways with the people around them, to create things greater than any of them could have done alone. So I have, hopefully, a more optimistic take: AI will help encourage creativity and collaboration in ways that weren't possible before. And unleashing software in this way requires, somewhat surprisingly, tackling the privacy and security models at the core. I've talked to a lot of people who are doing amazing, brilliant stuff, experimenting with all these things, and they'll say, oh, I've got this AI agent framework that allows third-party agents to collaborate on tasks. And I go, oh, what's your security model that allows you to safely compose untrusted third-party agents? And they go, oh, I'll figure it out later. No, you won't. That is not how that works; it's a fundamental constraint, like the laws of physics. What you need is something that allows untrusted third-party code written by others to collaborate seamlessly in a way that's safe. It's very challenging to accomplish; possible, but very challenging. And the laws of physics we currently use for everything on the web and in the app model, the same-origin paradigm, have fundamental limitations that make this very difficult and lead to software that is too big, too chunky, too few, and too centralized. 
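The security constraint here is, roughly, capability security: untrusted code should receive only the authority you explicitly hand it, never ambient access to the network or filesystem. Below is a toy, hypothetical sketch of that shape, not anyone's actual framework; in-process Python can't truly enforce it, so real systems need hard isolation underneath.

```python
# Toy capability-style mediator: an untrusted agent never gets ambient
# authority (network, filesystem, contacts); it only receives the specific
# capabilities the user explicitly granted. Hypothetical names throughout.

class Capability:
    def __init__(self, name, fn):
        self.name, self._fn = name, fn
    def __call__(self, *args):
        return self._fn(*args)

def run_untrusted_agent(agent, granted: dict[str, Capability]):
    # The agent sees only `granted`; composing two untrusted agents means
    # deciding which capabilities, if any, flow between them.
    return agent(granted)

def summarize_agent(caps):
    if "read_doc" not in caps:
        raise PermissionError("not granted: read_doc")
    text = caps["read_doc"]()
    return text[:20] + "..."  # no way to exfiltrate: no network capability

doc = "A long private document about travel plans."
caps = {"read_doc": Capability("read_doc", lambda: doc)}  # note: no "network"
print(run_untrusted_agent(summarize_agent, caps))
```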

Eric Jorgenson: I like just the phrase fluid software; it starts cuing the imagination. You see people generating new programs really quickly through AI, and you can imagine that happening much faster, with layers of it and massive personalization in real time. Is that sort of the gist of the direction? 

Alex Komoroske: Yeah. It's about how we can allow AI to help us become more human, where the default assumption is that it takes away some of our agency. When people say agent, by the way, agent implies agency, and agency implies a thing that can do something behind your back, including stab you in the back if you aren't careful. You have to have a very high-trust relationship with it. I kind of joke that LLMs, which are magical and amazing, are like having a trained circus bear in your kitchen that makes you porridge. It's a miracle that it works at all. But also, it's a wild animal in your kitchen that could ransack the place or kill you, and the porridge isn't even that good. What are you talking about? So how do you use LLMs not as something with agency, but as magical duct tape? If someone says, I'm going to give you your own butler, you're like, oh, cool, sweet. But who's paying for it? If your nosy mom is paying for your butler, I wouldn't trust that guy; he's going to report back. So you want to make sure that you pay for it, that you have control and agency over this thing, as opposed to it being something with agency on behalf of some other entity; otherwise it can be a conflict of interest. So I think assistants should be augmentations of your desires, as opposed to some agentic thing that you happen to partner with.

Eric Jorgenson: And given the forces and power that AI is going to bring to bear, and massive new compute, the amount of augmentation that could come from that in 10 or 20 years is kind of mind-boggling. 

Alex Komoroske: Yeah, this stuff is just so powerful. And by the way, when people say no one's using AI for anything, that is absolutely not true. AI is very useful to individuals, currently more useful to individuals than to organizations, because it's hard to use in a structured way with planning and official resources, but extremely useful to individuals in hacky, jury-rigged, informal ways, constantly. I use AI every single day, multiple times a day, to help me think of a better word for something. And when I publish my Bits and Bobs, to suggest which little phrases to pull out of it, I ask Claude: tell me the distinctive, catchy phrases in this set of bits and bobs. Out of the 28 suggested, I pick the five I like best. So people can use AI individually. And if you're using AI in a large organization, you're probably not going to tell your manager; it's all downside. They'll say, oh, did you get approval from so-and-so? As long as you're not sharing private information or information that should not leak outside the company, obviously. So I think we'll under-report the influence of it. Because most people... 
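That phrase-pulling workflow is only a few lines against the Anthropic API. A minimal sketch, assuming the `anthropic` Python package is installed, an `ANTHROPIC_API_KEY` is set in the environment, and a model name current at the time of the conversation:

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the env

client = anthropic.Anthropic()

def catchy_phrases(bits_and_bobs: str) -> str:
    """Ask Claude for distinctive, catchy phrases to pull out of a draft."""
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumption: the mid-2024 model name
        max_tokens=1000,
        messages=[{
            "role": "user",
            "content": "Tell me the distinctive, catchy phrases in this set "
                       f"of bits and bobs, one per line:\n\n{bits_and_bobs}",
        }],
    )
    return message.content[0].text

# A human still picks the handful they like best from the suggestions.
```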

Eric Jorgenson: A snapshot of AI in 2024 is college professors saying nobody's using AI while AI writes every kid's college paper in America. 

Alex Komoroske: Let's just acknowledge the fact that it is definitely being used. If I could follow a single person, a single blogger, on AI, I would follow Ethan Mollick, because he's an absolutely fascinating thinker who comes at this as a deep expert but from a totally different angle. He's a Wharton professor, he has an amazing Substack, and I consider him a good friend. A year or so ago, and I don't know if he's changed this since, he required his students to use ChatGPT to write their essays. Like: I know you're going to do it anyway, so here's how it's going to work. You're going to write an appendix where you describe how you used ChatGPT, exactly the queries you used and the things you did, and any factual inaccuracy in the essay is your responsibility. You can't say, oh, ChatGPT said it. No, you were the one submitting the essay, and you're responsible for the factual accuracy of every claim you make. I thought that was a great way of acknowledging how this is going to work. One of the other things, by the way: a lot of our teaching and assessment tools existed to check whether someone did the reading. It used to be very hard to fake an essay. You could pay someone to write it, which apparently happened surprisingly frequently, but it was hard to fake automatically. Now it's really easy to fake. So the best test is to have someone sit down and discuss the paper with the person who supposedly wrote it; if they can't answer questions about the core thesis or the choices they made, they probably didn't write it. One of the consistent themes I see for AI is the amount of slop all over the place, which will grow and grow and become a background cacophony. So I think we as humans will place more and more emphasis on in-person, face-to-face, live interactions as a grounding for what to trust, as we increasingly tune out the stuff where we have no idea whether it's a deepfake or slop or what. So I hope there will be tools to help people connect in person and also lever that trust into virtual environments. 

Eric Jorgenson: Fantastic. I know we're running short on time, but I adored every minute of this and can't wait to listen back to it. I feel like I learned so much. I want to ask you for one more thing, which is to give me homework, basically. Ethan's a great follow. What book should I read on AI or systems thinking? Who else should we be following? 

Alex Komoroske: One of my favorite books is The Origin of Wealth by Eric Beinhocker; I was just catching up with him, he's now at Oxford in the economics department. It's a great book on complex adaptive systems and their application to economies and innovation. It's well-written and easy to understand. It goes through the fundamental challenges and gaps in the classical economics model, and it has a really novel way of seeing all of business and innovation as an evolutionary search process over a fitness landscape of business plans. When the little highlights from it pop up on Readwise, every time I'm like, I should read that again. It's one of those books I wish were more influential; I wish more people had read it, because it's a really great encapsulation. A lot of the slime mold insights are similar. I read his book after I wrote that, I think, but he has a similar game-theoretic analysis of part of the reason these large organizations become kind of paralyzed to the one in the slime mold deck. 

Eric Jorgenson: Thank you so much for taking the time, sharing, teaching, writing. Yeah, I feel like I can't spend enough time in your brain. So I appreciate you sharing with us. 

Alex Komoroske: Well, thanks for coming with me on the journey. And thanks for having me. I appreciate it. 

Eric Jorgenson: I appreciate you hanging out with us today. Thank you for listening. If you liked this episode, you will definitely love my episode with Sam Arbesman, number 63. You can also search Max Olson or get into any of the Rolling Fun episodes. You can sign up for the Smart Friends email newsletter at ejorgenson.com to get show notes and some occasional essays that I write. You can invest alongside me and my partners at rolling.fun. Both are linked in the show notes below. To support the show, please text this episode to a friend or co-worker you think would enjoy it. Share it on Twitter, LinkedIn, whatever. Remember that every episode you listen to, every good idea you learn brings us one step closer to utopia. So, thank you for listening, and I'll see you next time.