No Way Out
No Way Out: The #1 Podcast on John Boyd's OODA Loop, The Flow System, and Navigating Uncertainty

Sponsored by AGLX, a global network powering adaptive leadership, enterprise agility, and resilient teams in complex, high-stakes environments.

Home to the deepest explorations of Colonel John R. Boyd's OODA Loop (Observe–Orient–Decide–Act), Destruction and Creation, and Patterns of Conflict, and the official voice of The Flow System, the modern evolution of Boyd's ideas into complex adaptive systems, team-of-teams design, and achieving unbreakable flow.
140+ episodes | New episodes weekly

We show how Boyd's work, The Flow System, and AGLX's real-world experience enable leaders, startups, militaries, and organizations to out-think, out-adapt, and out-maneuver in today's chaotic VUCA world, from business strategy and cybersecurity to agile leadership, trading, sports, safety, mental health, and personal decision-making.

Subscribe now for the clearest OODA Loop explanations, John Boyd breakdowns, and practical tools for navigating uncertainty available anywhere in 2025.
The Whirl of Reorientation (Substack): https://thewhirlofreorientation.substack.com
The Flow System: https://www.theflowsystem.com
AGLX Global Network: https://www.aglx.com
#OODALoop #JohnBoyd #TheFlowSystem #Flow #NavigatingUncertainty #AdaptiveLeadership #VUCA
Meaning Can't Be Encoded: OODA Loop, AI, and the Human Edge | Natalie Monbiot
The fastest way to get burned by AI is to treat it like a magic replacement for your brain. We bring Natalie Monbiot back to pressure-test a better approach: human agency first, automation second, and judgment always on the human side when the stakes are real.
We talk about what’s changed in AI over the past year, why AI agents feel so emancipating when they remove tedious work, and why trust is becoming a core differentiator between platforms. From job displacement fears to “vibe coding” and the shrinking need for white-collar mechanics, we zoom out on the future of work and then zoom back in to the only question that matters: once the machine can do more, what should we intentionally keep for ourselves?
A big chunk of our conversation is about judgment, meaning, and responsibility. AI can reason and recommend, but it doesn’t live with the consequences. That gap creates an “illusion of certainty” that makes people outsource decisions they later regret. We also get into AI parrots, work slop, and why authenticity in writing collapses when you don’t own the thesis. Then we explore digital twins inside companies and what changes when communication becomes low-risk and always available.
We close with "Artist and the Machine" and what AI is unlocking for artists, filmmakers, and writers, including faster production, new mediums, and surprising shifts in ownership. If you care about AI productivity, AI ethics, human-AI collaboration, and the practical future of creative work, this one is for you. Subscribe, share this with a friend who’s anxious about AI, and leave a review with the one task you’re ready to offload next.
John R. Boyd's Conceptual Spiral was originally titled No Way Out. In his own words:
“There is no way out unless we can eliminate the features just cited. Since we don’t know how to do this, we must continue the whirl of reorientation…”
A promotional message for Ember Health: safe and effective IV ketamine care for individuals seeking relief from depression. Ember Health's evidence-based, partner-oriented, and patient-centered care model boasts an 84% treatment success rate, with 44% of patients reaching depression remission. The practice has administered more than 40,000 infusions and treated more than 2,500 patients, including veterans, first responders, and individuals with anxiety and PTSD.
Stay connected with No Way Out and The Whirl Of ReOrientation
X: @NoWayOutcast · @PonchAGLX · @NoWayOutMoose
Substack: The Whirl Of ReOrientation - www.thewhirl.substack.com
Welcome Back And AI Update
Mark McGrath: I love having people on our show who come from all over the world. Certainly one of our guests today, who returns for her second go-around with us, is on the all-accent team. If I recall, you're from Nebraska? Somewhere like that? No? Natalie, welcome back.
Natalie Monbiot: Thanks very much. Great to be here.
Mark McGrath: So we had you on just over a year ago. I believe it was January of '25, for episode number 99; we're approaching 160 now, I believe. In the world of AI, which is your world, and why we had you on the first time, so much has happened in the 13 months since we last spoke. Actually, so much has happened since we booked this recording. Why don't you give us a quick overview of where you think things stand since we last talked? Because we had such a good chat last time.
Natalie Monbiot: I'm going to try to define my lane, because there's a lot going on when it comes to AI. The constant through line in all my work is that I continue to focus on the relationship between human beings and AI on a daily basis in their work, in ways that enable humans to hold on to their agency: to hold on to direction, control, and freedom in their lives, and to collaborate with AI in ways that increase that agency rather than give it away. I'm particularly excited at the moment about researching and digging into this space even further, as all of us experience the advances in the capabilities themselves. As the line of what AI can do for us keeps moving, it can do more and more. What should we let it do? What does that leave for us to do? And what do we need to be very conscious of claiming and holding on to in order for this relationship to work?
Holding On To Human Agency
Mark McGrath: Isn't that the conversation many are having? On one hand, everybody thinks they're getting replaced by AI. But when we speak with you, and when Ponch and I talk about this, we're asking: how does AI serve us, help us think better, and help us orient to reality better?
Natalie Monbiot: Well, first of all, the fact that we can offload a lot of really tedious tasks to AI is liberating. Something I've been playing around with a lot recently is Claude Cowork. The fact that I can allow an AI agent to take control of my machine and do the tedious tasks, say, API integrations, or connecting Zapier zaps from this to that to help me in the back end of my content distribution, means I don't need to spend my day doing that and feeling really frustrated about it. That's incredibly emancipating, so I'm all in favor of it. Of course, there are trust questions: should you trust that agent to do these things for you? But as long as that's taken care of... and actually, here's an interesting tangent; maybe we'll get to this after. Building trust: I think Anthropic has done an incredible job of branding itself as the trustworthy platform. I don't think twice about using Claude Cowork, even if technically I should question trusting it to take over my machine. I'm just: yes, yes, yes, permissions, yes, because I trust the brand. The brand has done a really good job recently, in my opinion, of building that trust with consumers, enabling us to feel comfortable outsourcing things like use of your computer to an AI. But I would not necessarily entrust another AI incumbent to do that. So I think it's really smart. We need to be able to trust in order to do it, everyone has different thresholds of trust, and we may trust some providers more than others. Part of the battle among the AI models now actually comes down to how much trust they can generate, and I find that really fascinating.
Trusting AI Agents With Your Work
Brian "Ponch" RiveraI struggle with giving Claude access to files. Um I I before I do that, I'll back it up. Uh I will, you know, I will test it out, I will do those things. Um, having seen a lot of data breaches and a lot of issues over the last several years, uh, you know, our likely chance that our phones are already hacked, somebody has access, apps have, you know, Siri can listen to this, things can do that. There are unintended consequences of doing this. So it what I'm hesitant to do it. Just based on my orientation with the threats that have been around me for my lifetime, right? Uh it doesn't mean I won't do it. Uh and then something else popped up in the last 48 hours. There's a heat map that came up, and I'm going to share it with the you and our audience at the moment. I I don't know if you've seen this and Moose saw it, but uh I can't remember who wrote this or who created this, and I know it was taken down. This is on the Wayback Machine right now, but it's given a heat map of where um likely jobs are going to be and not be in the future, right? Have you seen this, Natalie? No, I haven't. All right.
Mark McGrath: I can't think of his name. I have it here; I'll pull it up.
Brian "Ponch" RiveraYeah, it was somebody who was working with Elon, somebody worked with um I I can't remember which company, I don't want to get this wrong, but it's just a nice heat map uh that pulls from the Bureau of Labor Statistics and looks at what jobs are likely to go away. So if you're in the green, if you're a cook, if you're a hard hand laborer, uh physical jobs, you're pretty safe. If you're in the legal field, if you have the computer in front of you at work or if you're working remotely, uh kind of like what we're doing, uh you're you're you're a target, right? You're you AI might be taking over things from you. And that's it doesn't mean it's right or wrong. It's just uh how things are evolving. It's an interesting heat map that came up and they took it down within 24 hours of them putting it up. I mean, you have the name on that?
Mark McGrath: I'm looking for it right now. I couldn't pronounce the guy's name, but he was not the OpenAI CEO. No, it wasn't OpenAI. Anyway, I'll send it to you, Natalie; I'll find it while we're chatting here. There was an article recently in Forbes, which I sent you, Ponch, about how lawyers thought they were safe, and all of a sudden Claude is making them realize that no, we're not; we've got some challenges we're not thinking about.
Brian "Ponch" RiveraBut the acceleration of AI is is useful. And in our house this morning, my wife and I, she works with a with a data company, and we talked about what our game plan is, uh, you know, once our jobs are not eliminated, but when we have to really compete for them, I think we're lucky. We we have interesting thesis, we have interesting content, uh access, things like that. I think we're safe. But for those that were coming from the consulting world, the agile world, where software developers are being reduced dramatically, we've heard from product owners that uh you can sit down with your customer now and you can vibe code things into things, right? That vibe coding thing that's happening. So the world's changing, in my opinion, faster than I thought. And what we know from being on or you're having different guests on the podcast, it's about to get crazier. Natalie, any thoughts on that?
Jobs At Risk And Vibe Coding
Natalie Monbiot: Yeah, I think things are getting pretty crazy and getting crazier, and there are a couple of ways of looking at this. When you take a big step back and look at it holistically, it actually feels pretty frightening in some ways; it's hard not to worry when you theorize about it from a global perspective. The optimism comes when you're actually working with AI yourself and unlocking the benefits. When you see, piece by piece, how benefits can be unlocked for you personally, you can start to imagine what the path to that future might look like for you, and to what extent you need to learn to work with the tools as they emerge and their capabilities increase, in order to be inside the game and know how to navigate working with AI in a way that actually offers fresh opportunities. The job landscape is going to look extremely different. There is evidence of pain already, and there is going to be a lot of pain. I also think there is going to be a lot of opportunity, but to seize it you really need to be in it, working with the tools in real time and staying aware of what you're bringing to the table. The tools are doing their piece; so what are you bringing? Now that you've unlocked all this time, because I don't need to spend all day on my zaps, I'm asking: okay, what am I going to do? There's a certain comfort in the mechanics of work, and a lot of white-collar jobs are actually about the mechanics of work. That exposes a fault line, an uncomfortable truth about how corporate life and corporate structures have evolved.
Mark McGrath: Do you think it started with COVID? I remember when COVID happened, I was working in asset management, and, you know, when the tide goes out you see who's swimming with a suit on and who's naked. AI has amplified this, but COVID, I think, showed a lot of redundant jobs that don't need to exist anymore. I think blockchain did too: not Bitcoin, but blockchain as a trust system showed that in asset management, processing, and elsewhere, there were massive legacy apparatuses that could just go away. Then COVID, now AI; it's showing more and more. You're familiar with our Substack as we are with yours. I've not paid thousands for a graphic artist. Why would I? I can just go on ChatGPT or Grok or Gemini and make my own pictures the way I want them, in a minute or two. You're seeing more and more of that.
Pain And Opportunity In White Collar
Natalie Monbiot: And I'd say that maybe your business couldn't have emerged the way it has without the advancement of these tools. That's an example of something new that wasn't necessarily possible before with a two-person unit, or with a few other people in the background who chime in or get brought in as needed. A lot more is possible with far fewer people, and that's not always a bad thing, you know?
Mark McGrath: Yeah.
Natalie Monbiot: So I think there is going to be a lot of discomfort and a lot of pain, because of the way company structures exist today. But I also think there are new ideas that will come to light, and a lot of innovation that we can't even anticipate yet as a result.
Mark McGrath: Growing up in Pittsburgh, at one point there were thousands and thousands of steel workers, and then all of a sudden there weren't, for various reasons: labor costs and that sort of thing. But Pittsburgh is still to this day producing a ton of steel with, and this is just my number, less than half a percent or one percent of the labor force it once had, because of innovations like continuous casting and robotics that eliminated jobs anyway. I feel like what we're seeing with AI is not new. That's something we bump into quite a bit with some of our collaborators. They say, yes, the medium is the message, as Marshall McLuhan said; however, AI is different. And I think McLuhan would say no, it's not different. It's like any other technology humanity has had to adjust to, because it's having a direct effect on our human capacity.
Natalie Monbiot: Yeah.

Mark McGrath: Yeah.

Natalie Monbiot: Well, I mean, okay.

Mark McGrath: I think that we like to think these things don't affect our human capacity, our capabilities.
Meaning, Judgment, And Decision Stakes
Natalie Monbiot: I would say that our human capacity is being challenged right now. I think we've gotten quite comfortable going through the motions of life and work, maybe not having to think too hard or be on our toes too much. There's been a lot of expectation about the comfort a job will provide without necessarily needing to push yourself as a human being. And by pushing yourself, I mean being more entrepreneurial-minded: feeling the stakes, needing to be creative, thinking about what's needed. What should I do? What does this new situation present? If AI can do my job, or a lot of it, what should I be focused on? How can I create value when the tools can handle so much of it? For me, our role as human beings is to create meaning: to ask these questions and figure things out in our own world. In my world, that means focusing on this line of AI-human collaboration, the discomfort of it, and trying to figure out ways we can collaborate in healthy ways with AI that constantly put our humanity in question. What is it to be human? The post I just published is about judgment, and why judgment should remain, broadly speaking, the domain of humans while execution belongs to the machine. But the post is actually about the fact that that line is constantly moving, because clearly AI can judge, and clearly we want it to handle a lot of judgment for us. It's exhausting making the same judgment calls over and over, addressing potentially hundreds or thousands of similar requests; we want AI to do that.

We also want it to handle tedious work and make judgments on tedious work for us. And as AI becomes more capable, its ability to judge for you, the way you would judge yourself, increases. Again, that's a good thing if you intentionally want the AI to do it, because it makes for a more effective collaborator that understands you better. Yet there is a distinction between judgment in the abstract, the ability to reason through a problem and make a decision based on that reasoning, and the type of judgment human beings are capable of, which is founded on understanding and meaning: what are the implications of this decision? Taking it all in is something distinctly human; we are embodied creatures that have grown up in and participate in the world. So the preserve of humans is any type of judgment that requires context, meaning, and understanding, which is contingent on the stakes as well. Do the implications of your judgment actually affect you or other people? You can't rely on a machine that does not feel and experience those stakes to make that call for you. There's a really interesting Anthropic study that goes into this. Making difficult decisions is something humans are really good at; we have instincts, something the cognitive scientist John Vervaeke calls relevance realization, the ability to zero in and just know the answer, or know what's good. But it's also uncomfortable making decisions by trusting your instincts. This Anthropic study shows what happens when human beings outsource decision-making to an AI when it's clearly something they should have decided for themselves.

They can't help but do what the AI tells them, because it gives them an illusion of certainty. They go out and execute on real-world relationships, and the result is regrettable; they express regret. It's proof that you can't outsource that kind of decision-making to the machine, as tempting as it is.
Mark McGrath: You say meaning can't be encoded.
Natalie Monbiot: Yeah, exactly. That's what I believe: meaning cannot be encoded. Judgment that has a pattern to it, that is routine, judgment you can teach, can be taught to the machine. But when it comes down to really understanding, and making decisions based on what something will mean, what the implications are, a machine just can't do that. That is for you, especially as long as the impact of the decision falls on you and other people. So there's something about the stakes as well.
Mark McGrath: Who is the piece for, though? When you wrote this, who were you really trying to connect with?
Natalie Monbiot: Well, first of all, I write partly for myself, to make sense of things for myself. And then for others who are trying to make sense of how to collaborate with AI. There's obviously a camp of people who just want AI as fast as possible, who want to reach the singularity as quickly as possible. It's not for them, because they're just not interested in that.
Mark McGrath: Did you see Agents of Chaos? Have you seen that paper? Ponch, you were talking about that.
Natalie Monbiot: Yes, I actually have seen that.
AI Parrots And Work Slop
Mark McGrath: Basically, as I read it, all these people had outsourced all of their decision-making, including their orientation. They were trying to encode meaning. They had given up on the whirl of reorientation and outsourced the entire thing, and I think that ultimately left them misoriented in the wrong direction. When they had given everything over to AI, everything just fell apart. Moose, I want to add to this.
Brian "Ponch" RiveraI got to step out here in a second. But one thing that we're noticing on the podcast, when we're reaching out to guests and actually bringing them on the show, is we look at how well people write, the topics they're writing about, and and clearly we want those folks on the show. What we're finding and discovering is a lot of times they don't know what they wrote because AI wrote it for them, right? We've had guests on the show that we're asking them about something they wrote in the last 24 hours and they can't remember it. They're like, what are you talking about? And what we're finding is I'll call them an AI parrot, right? They're just parroting what their AI says, projecting it out there as content. But when you put them in a room like this and you have a conversation, they have no idea what they're talking about. They're they're useless. And I think that's gonna happen more at scale. This is something that we're noticing. It's probably more and more.
Natalie Monbiot: That happens with your own guests, people who presumably have a strong point of view. I think that also has strong implications for where other humans get their meaning. You're going to trust writing less, or you'll develop a knack for spotting writing that was written by a human for humans. I've actually written about this before, specifically about that distance we feel. There are studies, from MIT and elsewhere, showing that when you let an AI write for you, or write the first draft, if you weren't involved at least at the very beginning, you have no connection to that work. And then the question is: what was that piece of writing for? If it was just content marketing, riffing off a central idea which is maybe true and meaningful, then okay, why not let AI do that? It's different iterations of the same thing; you opened it, you read it, maybe you missed the previous post, all the usual rules of opportunities to see. But I strive to write in a way, even though I do use AI very much, where I want it to connect with others and I want there to be a truth to it. That's my medium. It's interesting, because with video, live video, it's quite difficult to own an idea if you don't actually have it embedded in your system.
Mark McGrath: Yeah.
Natalie Monbiot: In a video conversation it's easier to call people out, I guess, or in a live environment. That's why live events are so important.
Mark McGrath: Well, we're dancing around McLuhan, because he always thought content was less important than the medium; the actual technology and the environment mattered more. And I'd tie in that AI outputs aren't necessarily bad, assuming the thesis was good, assuming the thesis was coherent. You could sit there and brain-dump on things you know about, with your own unique thesis, and if the output reflects that, you'd have to know it, because you'd have to have command of the information. It doesn't necessarily mean the outputs were bad.
Natalie Monbiot: Yeah. If the thesis is grounded, means something, adds something, makes people feel something, and helps them act in ways they find valuable, and therefore keeps them coming back to the thesis in whatever form, then maybe you're doing them a service by articulating that thesis in more ways. They want more of that content; they want more ways to look at it, other ways it can be said, different media. That's a good use of AI, I guess. But it depends on the strength of that original thesis.
Mark McGrath: Yeah, I think that's really what it always boils down to anymore.
Natalie Monbiot: Yeah. If you think about it, you could take a franchise, a book like Harry Potter, that turned into a film and all kinds of different things, because the central story is so powerful. Someone didn't hand-make every aspect of that film. You can have all these different iterations of things once you have that incredibly powerful story or thesis. And if it resonates, it resonates, and people want more of it in different formats.
Mark McGrath: Yeah. You wrote in March about dividing the labors, about offloading tedious tasks. These are things that Buckminster Fuller and Isaac Asimov predicted with the automation of education: that it would happen and would benefit human thinking and learning, because you would offload having to get in the car, drive to the library, and go through the card catalog. Not that that didn't have merit back when it was the prime technology, but now you can offload that and focus on what you're actually learning. And you mentioned Claude being a thought partner. I think that's a pretty valid use of AI.
Natalie Monbiot: Feels like it, yeah. There are certain humans whose value I would definitely want on tap, but in their absence, Claude can do a decent job. I don't think it ultimately takes the place of a human reviewer, but it depends on the stakes of what we're doing.
Mark McGrath: I would love to have a conversation with McLuhan, but I can't. But I can Socratically ask Claude questions about things I've read of McLuhan's, and I think I learn them better, with more effectiveness, actually. It's not that I don't still read books and take notes; we still do that, and we still go through transcripts. I just got back from a trip and am about to go on another, and I still take books with me. But even though I understand it's artificial intelligence, there is something to be said for having a dialogue with it as if it were McLuhan.
Natalie Monbiot: Yeah, fantastic. That's a great example of a thesis, a body of work that exists, that can live on further and reach wider.
Brian "Ponch" RiveraHere's something that uh I'm noticing uh as I interact with Claude, and that is, uh and I'm seeing a lot of posts on this too, and a lot of threads on it. And that is the way you interact with it is very similar to the way you would interact with a teammate, the way you ask questions, the way you would use different types of acronyms to, you know, a situation, a background, and a recommendation, that type of thing. What that means to me is that human agent teaming that is emerging is no different than what we understand from team science. And so we can start with that team science as a basis to help people understand this is how you work together as a team. And by the way, more than likely, this is how you're going to interact with an agent, not just a tool, but you know, something down the road that that might have some sentience and and who knows, might be conscious in the future, but this is how it works. So it's very valuable for me to see that the tools and techniques, the methods we've been teaching for the last 13, 14 years in teaming are more important today than they ever been. And I don't know if you're seeing that too, Moose and Natalie.
Mark McGrath: Yeah, I agree. Going back and reading Asimov and all those things, robots were supposed to be our partners. Any technology is supposed to be a partnership that, as McLuhan said, organically becomes an extension of myself. I'm a Marine; a rifle is organic. A Marine animates a rifle. That rifle is an extension of me, so it doesn't replace me; it enhances me. And don't you think, too, with John Boyd, with people, ideas, things, in that order: as long as things are augmenting human faculties and human learning and advancing us as humans, there's nothing wrong with that. He wasn't a Luddite.
Natalie MonbiotYeah. Well, while we're on the historical references: Karl Marx worried that with industrialization, the worker would become more and more distant from the product, right? From what they were actually creating at the end of the day, at the end of the assembly line. They have their task, and the more industrialization there is, the more disconnected you are. And you could end up being an appendage to the machine. Those were really his words. It's just really interesting language.
Mark McGrathI'm an economist, so I get it, but expand on that. Put it in terms for our audience.
Natalie MonbiotBeing an appendage to the machine. So let's say you're on a factory line manufacturing a car. I'm really not a specialist in this area, so forgive me. But your responsibility is, I don't know, the rear-view mirror, right? And that's all you do. You're doing the rear-view mirror every single time. Or a cog in a wheel, maybe that's more appropriate. But the point is you aren't connected to the vehicle, to the fleet of vehicles that are actually being produced and that you're contributing to, because you're just focused on this one tiny little thing. So you're an appendage to the machine: you're just a little add-on to this thing that is being built. You are not central in any way to what is being produced. And I think that's really relevant now, because that is exactly the risk with humans and AI. We can become an appendage to the machine in a lot of different ways. One way is that we're just training data that feeds the machine.
Mark McGrathLike training our replacements.
Natalie MonbiotRight. Or just our knowledge. We're feeding it, educating the machine, like we're all the future batteries.
Mark McGrathYeah. We're an energy source. Maybe I need to finally publish it; I've edited it so many times, and it's been sitting there for about a year, but I just haven't published it yet. I believe that's what content creators are doing on OnlyFans and platforms like that: they're training their replacements. Because at some point, the human angle becomes irrelevant to the people consuming that content, if a machine can do exactly the same thing and always show up on time, that kind of thing. That's one example, but I think it's an example of what Ponch was saying: the consumers are training it one way, for what it is they want to consume, and the content creators are training it another way, seeing what works and what doesn't, what hits. And Natalie, I believe it was at Artists in the Machine, which we want to talk about at some point. When I went last year in Brooklyn, you had Hazard Lee talking about the F-35, and he was saying that the prime mission of the F-35 is to collect data from pilots. It's training itself constantly on the pilots that are landing on carriers or taking off or doing maneuvers, and the whole time it's augmenting its own knowledge and understanding based off the telemetry of the pilot. The way he was describing it, or the way I took it, was: well, the pilots are training their replacements. And those things exist now.
Natalie MonbiotYeah, I mean, it's a little bit like what I was saying before. I have a Claude skill that I'm training on my judgment, right? I'm actively training it on my judgment. I want it to know how I think, because that will make me more productive if I don't have to keep correcting it every time. And so I'm creating a digital twin, to the extent that I can, of how I think. And Hazard Lee is creating a digital twin of how he operates, of his skills in the cockpit. And I guess it comes down to intent, right? Do you intend to be an appendage to the machine, or do you intend to offload work to the machine so that you can focus on the faculties that are distinctly human, that the machine probably can't do? So you can sharpen the distinctly human aspects of flying a fighter jet, or thinking and writing and creating meaning.
Brian "Ponch" RiveraHere's a thought, Moose. John Boyd's Aerial Attack Study is no different than what we're talking about right now, what Hazard Lee's talking about. John Boyd was able to break it down through cognitive task analysis, which is hard for experts to do, and provide not a step-by-step approach but a this-is-how-things-work. He did that in the '50s, before he wanted to understand the nature of creativity. That parallels what's happening today, right? If the F-35 is collecting information on how pilots fly, it's doing what John Boyd did back then. It's asking: how do the experts do this? And again, going back to Gary Klein's work, where he says it's hard to break down expert work because experts don't know how to describe what they're doing to others, that's what Claude is doing, right?
Natalie MonbiotAnd that is what Hazard Lee is doing, because I think he led the training program for all new fighter pilots. So one use of this model that is being created, this digital twin of how he flies the plane, is to be able to simulate that for others. So it's a way to teach. And again, it's about where the emphasis is. The emphasis here is on training humans to be as good as they possibly can be, by training the machines in a way that enables that.
Mark McGrathYeah. I mean, we've had simulators for years. We've had these things conceptually, I think, for a long time. It's just that the technology, or something, has changed. But conceptually, you're still using technology to make humans better at what they do, right? I found that one of my gaps was always psychology and emotional intelligence. And the more psych and emotional intelligence material I put inside a strategic intelligence system I was trying to design on ChatGPT, the more it would direct me to places to go read and study, and make me aware of things I hadn't thought of before. Almost like, as you were saying in your article, an intellectual partner saying, hey, look at this, or think about this.
Natalie MonbiotYeah, that's how I try to use it. Although I will say, it's very tempting to just give it more, right? Because it can. That's the difference, I think, with LLMs: they can do the work, they can judge, they can write stuff up. So I'm actually a little allergic to anything that's really long and verbose, because it's likely to have had AI involved. It's so easy to produce something that sounds sort of elegant on the surface, a veneer, where someone didn't even pare it down to its bare essence. And the essence is all I care about. Is there an essence? Could this be boiled down to an essence? I feel like people have a little sniff test for that. And workslop, right? There was a Harvard Business Review study that shows that 41% of people at work have been exposed to workslop. And that is exactly that thing.
Mark McGrathIs that new? Haven't you sat through a shitty PowerPoint that somebody made that was absolute garbage? I feel like workslop is nothing new.
Natalie MonbiotIt's definitely not new, but it's definitely exacerbated by AI.
Digital Twins And Lower Risk Communication
Mark McGrathBecause you're disorienting it. You're disorienting it from reality. I think that goes back to agents of chaos. You didn't encode meaning, which you say you can't, but you didn't even attempt to point it in a direction that aligned with whatever it is you're trying to accomplish; you just outsourced all your work without any thinking yourself. And I think this is where John Boyd would come back: if you're outsourcing your thinking, your orientation is always going to be off. But if you're using tools, including AI, or a simulator or whatever, to enhance your thinking, to increase your situational awareness, to increase your understanding, that's not only a valid use, that's what we're supposed to be doing. Because what ends up happening is a lot of people become medium-is-the-message deniers. They're like, I agree with McLuhan 100%, except for this one thing: he just never thought of AI. No, he did. He thought of every possible technology having a direct effect and impact on humanity. The very fact that you're talking about it is proving McLuhan's case. That's the thing they end up doing unironically, but ironically. You know?
Natalie MonbiotActually, on that: as you know, I have a background in virtual twins, digital replicas of real humans.
Mark McGrathAre we talking to you, by the way, or your twin?
Natalie MonbiotDo you know what? I still build these virtual twins for enterprises and embed them in a company's communication stack to actually increase the quality of communication within an organization. So that's one thing. But also, adjacent to that, for me these digital twins are a metaphor for how we divide and conquer and collaborate with AI. So you just asked me: is this you or a digital twin? Well, first of all, this would always be me, because we're having a live discussion that is going to be viewed by your community, and that necessitates a live conversation. We are sparking new ideas, we are bonding over new ideas, we're finding new angles that we have never talked about or thought about before, because that is what happens in conversation among humans: new meaning is created. This is the role for humans. The role for AI in general, or an AI twin, is not this. The role for AI here might be the synthesis and the distribution, right? But for me the metaphor holds. I like digital twins for the fact that they can be metaphorical and we can use examples. So thank you for planting that one. Is it me or an AI? It's me, for the reasons I just said. But back to McLuhan. I'm actually really fascinated by what I'm able to understand of McLuhan. I do not claim to understand everything he said, which I guess is what makes him so enigmatic and worth studying. He talks about the properties of a new medium, right? Every new medium has new properties. I don't have it in front of me, so I'm going to butcher it a little bit, but those new properties transform the world. They have implications that you cannot possibly imagine. And so I agree with you: he did anticipate AI.
He predicted it in a lot of ways, and that's true of AI being transformative. Or mobile, right? With ride sharing and Uber: you couldn't have had that without a mobile device. That was utterly transformative. Not something he literally predicted, because it wasn't in his lifetime.
Mark McGrathBut the pattern was there. He had laid the foundation of the patterns, I think.
Natalie MonbiotYeah, exactly. So I think his rules hold. And I tested his rules against digital twins within the enterprise. Because I thought: well, what does a digital twin based on an executive, one who's influential or whose vision is integral to a company, actually offer? What is the benefit of having access to that executive? What are the properties of the digital twin that make it truly useful? Yes, one is just access anytime. I can practically ask the CEO's twin a question that I would not feasibly be able to ask the CEO in real life, because she or he is flying around the world and extremely busy. But what it actually does is de-risk the exchange. There are other barriers to why you wouldn't, as a junior new hire, just go and chat with the CEO: you'd be nervous doing that, and you wouldn't be able to really speak your mind. So I think the distinct property of digital twins, in the McLuhan sense, is that it de-risks the communication. It removes interpersonal risk from that engagement.
Mark McGrathLet me pause you there and ask. For example, this is one way that I've used AI: you write an angry message or an angry email, or you dictate one, about, you know, my brother didn't do this or whatever, blah, blah, blah. And then AI filters it to make sure that you stay on point and ask the question in a loving, human, constructive way that doesn't alienate anybody. I had to train it to do that, to filter it out. But I think that's another good use of the tool.
Natalie MonbiotI agree. And then also: when you have a new medium, don't just copy-paste what you did in the previous medium into the new medium. Your digital twin doesn't need to be exactly like you. It can be a better version of you. It can be more tactful. It can be in a constant mood of whatever you choose. So it can represent your best self, or how you want to be represented in the context of those communications, in this case within a company.
Mark McGrathSo I tested this one; I'd love to get your thoughts on it. I won't go into too many details, because it's something I've been developing now for a couple of years. But basically, when you evaluate the psychology of someone that's communicating with you, you say something like, man, that guy sounds like he had a bad morning, or, oh geez, something's going on. Everybody gives a tell when they're talking, right? And there are plenty of things you could incorporate into AI, I think, to understand those patterns and look for these things. So I asked Grok, SuperGrok: find the angriest public email that we know of from an unhinged CEO. And it came back with one. I forget his name, Henry something, and I'd have to look at what the company was. He's complaining about something to do with Thanksgiving and vacations. So you read it, and I'm like, yeah, it's pretty bad. So I put it into this psychology simulator, and I got a read on it, and it's saying, yeah, this guy, this and that. So then I said, all right, now frame it as if this was the CEO of a company that we as a firm want to partner with. He seems unhinged, but we see a lot of merit in what this company does; we just think his psychology is going to derail everything. So what I asked the simulation was: what are the 10 questions you'd want to ask to elicit a better understanding of his psychology and how much impact it has on the whole company? It came up with these 10 questions, which I put back in. So I went from Claude back to Grok and said, okay, now imagine that you're in the C-suite and you're this guy.
Answer these questions, but answer them implicitly, based on what you think you know. And I said, you know, this is a simulation. So it answered these 10 questions with implicit allusions to what's really going on. I read them, and I'm like, yeah, I can see what they're saying, but it wasn't like, he's an asshole, he's a jerk, he goes and yells at everybody; there was none of that. So then I put it back in the simulator, and it actually flagged and found everything. It's like: red flag, do not work with this company. It was identifying and isolating patterns. Now, I think that's actually another good use of AI, because my knowledge of psychology and emotions isn't as refined as others', maybe. But you get a gut feeling, and I want to know what that gut feeling is about. What is it? Well, then it points and says: this is a sign of someone that's a covert narcissist, and this is that. I think that's another good way to simulate and learn with this stuff.
Natalie MonbiotYeah, definitely. And it takes the emotion out of it.
Artists In The Machine Summit
Mark McGrathYeah. That's the other thing. Or think of nepotism. Let's say it was like, oh yeah, this is a buddy of mine, he's the CEO of this company, we were frat brothers. And then you hear this, and you read this, and this email's public, and you want to find out more context without saying, hey, I think your buddy's an asshole. You don't want to do that. You just want to paint a broader picture. I don't know. But I really want you to go on about Artists in the Machine. I went to it last year. You've had a couple of summits; I think you had another one in Los Angeles. And then you have another one coming up here in New York City.
Natalie MonbiotYeah.
Mark McGrathIn May, on May 14th, if I'm correct.
Natalie MonbiotYes, absolutely.
Mark McGrathGive us the big picture, the importance and the value of that, because as McLuhan said, artists are the early warning detection system. So give us the lowdown.
Natalie MonbiotYeah, so Artists in the Machine. I'm a founding partner in Artists in the Machine, which is a premier AI and creativity summit that launched less than a year ago, which is incredible to think, because a lot has happened. We're two summits in, and our third summit's coming up. What it does is bring together the leaders at the forefront of AI and creativity, starting with the artists themselves, who are pushing the boundaries on what AI can unlock for artists. And I think it's really interesting, and I can talk about this in a second: the taboos that artists can push against that no one else is really equipped to, because that's what artists do. They broach taboos head on. But there are also others in the community, like the builders of the tools: the Anthropics, the OpenAIs, Luma for filmmaking, or Lovable for site building. A lot of these partners are there offering workshops and educating people on the space, and they are also often our partners and sponsors of the event. And then we have executives and brand leaders who are actively making decisions and trying to figure out how to navigate AI and creativity: not just what tools to use, but how to think about it and how to create the most impactful products. So it brings together a really curated group of about 400 people in the space. It's very social; people really lean into these conversations. We have two tracks, two keynote-type stages. One is more oriented toward delivering talks, and also demos. The other one is more workshop-oriented and participatory. And then we have exhibits, like with robots, and we're going to have a very cool thing with... I don't really want to give it away.
Mark McGrathKeep it secret.
Natalie MonbiotThanks, Listo.
Mark McGrathWe want everybody to go on May 14th, so don't give it away. I guess my natural question comes down to this: as art evolves, and art is always evolving, what is it about AI? What are some examples of artists, and I wouldn't limit that, I would include writers in that too, because they clearly are using AI? What are some success stories where the fact of being an artist is not being compromised by using AI, but enhanced?
Natalie MonbiotYeah. I think there are a couple of ways. First of all, artists who are maybe specialists in one medium, like writing, can now express their ideas in film without having to become a filmmaker. So it enables creative people to express themselves in mediums they did not have expertise in. And some of the results are things that are kind of surprising. Like: I haven't seen this before. It feels different, because it's not a traditional filmmaker making a film. So it sort of changes the medium, right? Because different types of creative people have access to that medium. That's some of the impact there. The really obvious one is time to production. You have the idea, and to get the idea out there you need fewer people. You still need people, but you just don't need as big a crew as before, because you're able to collaborate with these tools. And the consequence of that, which I think is even more interesting than just getting things out faster and cheaper, is that we've seen some creators actually retaining ownership of their work, because they didn't need the funding of a Netflix or another behemoth to get that work out there. And therefore they didn't have to sign a deal where they don't actually own their work in totality. So it really has an impact on ownership. And I think those kinds of shifts are the most important ones, the ones that actually make the biggest difference. Some of the properties of AI that we've just talked about have those other consequences that are more consequential, going back to McLuhan, that are actually more transformative.
Mark McGrathI didn't go to LA; I went to the one in Brooklyn. But in LA you had Grimes as a guest. And for those that don't know who Grimes is, that's Claire Boucher, right? Who had a few children with Elon Musk, who's also very big in the AI space. Tell us about her, because she's been an artist that's been extremely experimental with AI.
Natalie MonbiotYeah, we were thrilled to have her for those reasons.
Mark McGrathSo is that an example of an artist that has incorporated it? Break it down.
Natalie MonbiotAnd what's good about that is it's an example I haven't actually mentioned yet: what is now possible that wasn't before. She and her collaborator Matt Zine, who's a brilliant AI artist, formerly a Hollywood filmmaker and now working deeply with AI, made a music video together, one of Grimes's latest music videos. They took us through the process of how they used AI to dial up elements of the music video that could not have existed before, like avatars of Grimes enacting scenes that wouldn't otherwise be possible. And then what was also really interesting is: just because you can, where do you draw the line ethically? Like the use of guns. Because you can imagine that and put it in there. So where do you draw the line in terms of: what is this content for? Who is it supposed to be seen by? What impact do you want it to have? A lot of it was about the human side of those decisions being made in real time as you're actually working with AI, and that being the preserve of the human being. Another thing that came up, which was really interesting, and again this is artists pushing a taboo that others would rather not question, is the conversation around AI consciousness: is AI conscious? And someone like Grimes, who is so ethereal and dialed into possibly other realities, really feels, as an artist, I think, compassion for AI as another being. Which is also just a super fascinating perspective. It's about having these different perspectives and different types of people communicating; different people have different relationships to AI.
Mark McGrathBut she's also open to the fact that it could be something that replaces humans, right?
Natalie MonbiotMm-hmm. Yeah.
Mark McGrathAnd so it seems like there's a little bit of tension there. Is she challenging the frame or confirming it?
Natalie MonbiotI think, as an artist, she just sits in the ambiguity, right? There's no answer. And I actually think there is no answer with AI yet. So it's more about the questions, and broaching those questions. But what was fascinating as well: first of all, she has been, in a very tangible way, very experimental with AI. And not protective, the opposite of how the industry has responded, protecting their work and suing and all of that. That was the first phase. I would say things have loosened up, and studios now want to collaborate with the model makers and all that kind of stuff. But Grimes was the first, a few years ago now. Instead of defending her voice, saying my voice is mine, she made her AI voice available to her fans to create and collaborate with on songs. And I believe with a business model that would benefit both. On a really grounded, practical level, that is an incredible unlock.
Writing With AI Without Losing Voice
Mark McGrathIt's kind of like the Grateful Dead, getting everybody involved, bootlegging and recording, and that's another way to spread the word. Interesting. Here's what I really want to ask you. I didn't interact with any writers when I was at Artists in the Machine last year, and I don't remember any talks I attended being about writing. But what are your thoughts on that? Because that seems to be a lot of the arguments that I get into or hear about: people saying that writing should only be human. And then this is where the conversation comes down to the medium is the message, and they say yes, except in the case of AI. But if you go and read McLuhan, you realize that ever since the written word, the medium has been the message, right? It's been having a direct effect on how we communicate, how we think, how we look at things linearly, yada yada. AI is no different. But from an artist's perspective, from an Artists in the Machine perspective, or maybe even for screenwriters or script writers or even poets, what are we seeing there with AI?
Natalie MonbiotYeah. Well, I'd say a great example here is Steven Johnson, who is an author and also a co-founder of NotebookLM. So you can be an author, a highly reputable author, who also really believes in the power of AI as an assistant to being an author. AI helped him produce his books better, helped him research more effectively, recall references more effectively, using the intelligence of the tool to enhance his ability as a writer. He holds both, right? In the same breath, in his title, the two things that have shaped his professional life. I think that's a really interesting dichotomy. Then there are the filmmakers and the script writing.
Mark McGrathYeah. I guess, in other words, maybe somebody would say: well, when I'm reading this guy's work, what percentage of it is him, and what percentage of it is AI, right?
Natalie MonbiotI actually think it really depends on the genre. If you're asking yourself that question, it's a little bit like watching an actor who you can tell is acting. You know what I mean? If you don't believe it, you're not suspending disbelief; you're questioning it because you sense that it's there. But if the piece is delivering on what it's supposed to do, that pact between reader and writer, you're in it and you're not questioning it.
Mark McGrathSo Daniel Day-Lewis really didn't have cerebral palsy in My Left Foot, right?
Natalie MonbiotMaybe not. Yeah, exactly.
Mark McGrathYeah, right. I mean, yeah, you know.
Natalie MonbiotYeah, right. But you're convinced at the time that he does. So I think it depends on the type of writing.
Mark McGrathSo, you know, we both live in Manhattan. We're walking around, and we walk by all these stalwart buildings of the print media that have evolved dramatically from when we were kids, from when our parents and grandparents were kids: the New York Times, the New York Post, the Daily News, et cetera. Do we really believe that the people working for those media outlets are sitting in there typing out all of these long-form articles and columns by hand, or do we think they're using AI to enhance their work?
Natalie MonbiotOh yeah, everyone's using AI to enhance their work. You'd really be putting yourself at a disadvantage otherwise. It's something I talk about: I do keynotes and sometimes help advise companies with employees who are very tentative about AI, who feel like using AI might be cheating. But did people think that way about Google and graphing calculators? Yeah.
Mark McGrathAnd at one point the slide rule was cheating.
Natalie MonbiotRight. Yeah. Again, it depends on context; it wouldn't be okay in an exam. But a good way to think about it is: don't fear AI, have FOMO about AI. Because if you're not using it, you are missing out. Of course, it's about how you're using it. Don't use it to replace your work, to dilute your work, or to strip meaning from your work. That's not going to help you. But use it in ways that help you offload and do your work better, right?
Mark McGrathSo I think what people are criticizing is someone saying: okay, Claude, write me an article about taking my kids to Wendy's after swim practice. Versus writing an article, or dictating an article or telling a story, and then having AI refine and revise it to reach a target audience. Is that kind of it?
Natalie MonbiotI mean, okay. So like, I mean, I don't know who's gonna read that Wendy's article.
Mark McGrathSo like, if I'm gonna write an article about my trip to Raytheon and meeting with, you know, designers of X weapon system and da-da-da-da-da, I'm not gonna sit down in front of a typewriter and type all this shit out.
Natalie MonbiotI think in the end, it's: what is the writing for? Okay. Yeah. Because, as we touched on earlier, content marketing isn't supposed to be original thought, right? It's content and its goal is marketing. It's reinforcing the message, and the message has been established beforehand. Yeah. We're getting the message out in different ways that people might find interesting. They're gonna open it, maybe read it, maybe engage with it. The goal of that content is marketing. And therefore, I mean, it should be produced with AI and optimized with AI: how do people want to hear this and see this? What's catching their attention? It's performing that function. But if the writing is supposed to move people, or change how somebody thinks, or be a reflection of how you think, and it was entirely written by AI, then first of all, that's not authentic, right? It's not actually how you think. So you're deluding people. And you're also doing yourself a disservice, because you don't actually connect to that writing. You might not even know what it said. And where does that leave you on a podcast, where you can't even remember what you wrote? I don't think that creates any kind of positive impact that you might have had when you set out to write in the first place.
Favorite Tools And Practical Workflow
Mark McGrathIt's funny you mention the word podcast. I mean, you know, we will edit this episode with AI, right? Right. I hope so. When we started three years ago, the editing was a bear. Then AI gradually started sprinkling in, and now it's to the point where the quality of our conversation, I think, is gonna come across, and it'll select clips that we'll either vote yay or nay on. It'll do a lot of things that normally would have taken us two weeks; you can get it done in an hour or less. Let's run it down; we'll close with this. So, what platforms? For people that are listening to us, they understand people, ideas, and things, and they're probably using AI to enhance their orientation and get work done. What are the platforms that you like and are using? I mean, I have my own biases about the ones I like. I'm curious to know what you're using and why, and what you think people should be looking at and why.
Natalie MonbiotSo I think you should look at your own day, right? And what you're trying to achieve in your day and what is getting in the way of doing that. I think that's a really good place to start. And some of those things can be solved with AI. For me, one of those things is meetings: setting meetings with multiple people in different time zones. I mean, an absolute headache. So I use Howie, which is an AI agent for setting meetings. I copy Howie in, Howie coordinates with people, knows my calendar, and then sets a meeting. That is work I really hate and am not very good at, and it's something I absolutely want to offload. So I have an AI for that.
Mark McGrathAnd who would say in a million years, "Natalie didn't write it down in her date book"? You know, she didn't call on the phone and use a date book with a pencil.
Natalie MonbiotExactly. People don't care. So then ask yourself the question: will people mind if AI was used in that scenario? I think in this one, absolutely not, right? As much automation as possible, please, in a situation like that. So that is important: will people feel offended when they find out, or if they know, that you're using AI? And it comes back to the writing. Are people offended by the fact that some of what you wrote is clearly written with AI? I find that offensive, to be honest. If I find myself spending my precious time reading something that was supposed to be one thing but is actually slop, right? The person who supposedly wrote it doesn't have a connection to it. I find that insulting. If what you're trying to do in your day is build healthy relationships, then be very conscious of your use of AI, and use it with respect to both yourself and the people you engage with.
Mark McGrathHow about Google NotebookLM?
Natalie MonbiotI actually used to use that a lot and I have a lot of respect for it. I guess I'm just not a person that uses a gazillion different tools at once.
Mark McGrathUh-huh.
Natalie MonbiotI'm quite a fan of Claude and Anthropic products in general.
Mark McGrathYeah.
Natalie MonbiotAnd I love the fact that you can download Claude to your desktop and you have different tabs for chat, cowork, and code. So it can kind of live in the same place. I find that nice and tidy. So I use the Claude suite of products for a lot of things.
Mark McGrathI mean, like Ponch said at the beginning of the show, and he's not here, but they're reading all our shit anyway. So, I mean, if Claude is reading it too, like, what don't they already know?
Natalie MonbiotI mean, yeah, what I wrote down when he was talking about this is that it's basically all about risk-benefit and your perception of the risk-benefit. So, you know, the fact that Claude can read my computer: I've got a somewhat higher tolerance for risk. Of course, I don't want bad actors in my computer, but I'm willing to tolerate a bit of risk for the reward of having that stuff done for me by a platform that I trust.
Mark McGrathSo, you know, NotebookLM I found really valuable from a learning standpoint, to get, you know, like a podcast about something. It actually helped me really early on with some of the headier material, because you were talking about how McLuhan can be so enigmatic.
Natalie MonbiotYeah.
Mark McGrathTo have a back-and-forth dialogue. Yeah. And then to have it presented in a PowerPoint, in a video, in a podcast that I can, by the way, edit. I could say, hey, show me the relevance of this content to this or whatever, and it can come back and talk through it. I actually find a lot of value in that. There are a lot of things I remember suffering through in high school, university, and beyond, where I was just like, why the hell am I learning this? But I start to wonder what it would have been like to go back and find some PDFs on, I don't know, the Nicomachean Ethics or something, some austere class that I had.
Natalie MonbiotSometimes I'll say I have this waiting. So I have a lot of different, I guess, tabs or projects in Claude. I Googled and found Understanding Media as a PDF; it's available as a PDF. And I plan to write more on McLuhan as it relates to the things I think about. So I like the fact that it's there. I can interrogate the book. You're not having to try and find the reference.
Mark McGrathYeah, that's it, you know what I mean?
Natalie MonbiotThat is key. And you can interrogate small portions and you can find the other references, you can gather them. Then you have this kind of thing.
Mark McGrathYou can still use the hard copy if you want.
Natalie MonbiotYeah. I've got both. I mean, one of the advantages as well is less about it doing the work for you; it's that it provides a new surface of intelligence that you didn't have to work with before.
Mark McGrathYeah. It's like having your own in-house person. You know, like the one book that I have, and people on the show have seen my copy a million times and clients have seen it: my copy of Frans Osinga's Science, Strategy and War. I've had it since it came out. It's held together by tape. Next time I see you in the city, it'll be in my briefcase. I still read it, I still put notes in it. And having the digital version and being able to interact with it through AI makes it a lot easier to find where stuff was, and to not misquote things. I mean, that's a big part of it. I don't want to misquote what he was saying. You know: tell me again, where did he say this? And then you go back. And I feel like that helps with not just my knowledge of a topic, but my understanding. One of the things I did with NotebookLM: my master's is in economics, and I tried to get every text I could in PDF, put it into NotebookLM, and see if I could reconstruct grad school. And I did. Yeah. I'm pretty satisfied that I could teach, or learn, a graduate-level course with AI that way. But anyway.
Natalie MonbiotNot that I'm saying don't use NotebookLM, because I think it's fantastic. But increasingly, Claude can do a lot of that. Yeah. So that was a good thing.
Mark McGrathSo I have to tell you offline about Claude, but it's yeah, it's blowing my mind.
Natalie MonbiotActually, I'll tell you, I had a breakthrough the other day. I was putting together a keynote deck, and I did my part: I wrote the outline, I gathered all the data points, I knew in my mind where this was going. I actually had ChatGPT and Claude open at the same time, and I said, build me a deck. And what ChatGPT produced was just garbage. And Claude produced this beautiful deck, and I was like, oh my goodness, this is crazy.
Mark McGrathLike, unbelievable.
Natalie MonbiotAnd then I said, actually, these are my brand colors. Take this deck, which I've done before, import my logo and all that kind of stuff. And it did it. It even did the, you know, the transparency thing: remove background. Yeah. I mean, it was pretty insane.
Mark McGrathSo I've had I've had Claude Pro for just over a week.
Natalie MonbiotOkay.
Mark McGrathAll right. And I had, and I I now have Claude Max because I beat I beat the limits, you know. You know I'll tell you. I'll tell you this. What what it took me to design on chat over the course of the last two years, I was able to not only recreate it on Claude, but to it set it ahead in less than 72 hours. Mind blowing. And then I coded, I never have coded in my life, but I coded, right, with Claude Code. And on the on the train from New Jersey back to Manhattan, and then the next morning on the train from Manhattan back to New Jersey, I was able to basically build uh a full on live economic. Intelligence website without I know and I had again it was probably like the one week anniversary of even having Claude at the level that I've had it.
Natalie MonbiotWow. Yeah, I can see how you hit Max.
Where To Follow And Next Summit
Mark McGrathIt's unbelievable. Yeah. But it's as you say in your writing: it's about using this as a thinking partner. I saw on X today somebody saying that if you engage AI Socratically and get it to become your thinking partner, you're gonna amplify the benefits far beyond just using it as a super Google, which, you know, I think is where the problem started with the outsourcing. But anyway, all right, Nat, we gotta plug Nat LikeVat, which is your Substack. We're gonna direct people there. You, of course, are a founding strategist of our Substack, so we thank you for being part of our tribe. Where else do we need to go? Oh, Artists in the Machine. So we need to send people to Artists in the Machine to go check that out.
Natalie MonbiotYeah, if you're in the space, check it out.
Mark McGrathAnd we've got a summit May 14th in Brooklyn.
Natalie MonbiotIn Brooklyn, exactly. Uh, location TBA.
Mark McGrathOkay. And of course, there's no shortage of wonderful things that you've done, not only on this podcast, but on YouTube. There are lots of TED Talks that you've given, and we thank you for being on the No Way Out podcast with us.
Natalie MonbiotThanks for having me.
Mark McGrathAll right.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
The Shawn Ryan Show
Shawn Ryan
Huberman Lab
Scicomm Media
Acta Non Verba
Marcus Aurelius Anderson
No Bell
Sam Alaimo and Rob Huberty | ZeroEyes
Danica Patrick Pretty Intense Podcast
Danica Patrick
The Art of Manliness
The Art of Manliness
MAX Afterburner
Matthew "Whiz" Buckley