The AI Argument
Worried that AI is moving too fast? Worried, like me, that it's not moving fast enough? Or just interested in the latest news and events in AI? Frank Prendergast and Justin Collery discuss in 'The AI Argument'.
Contact Frank at frank@frankandmarci.com
linkedin.com/in/frankprendergast
Contact Justin at justin.collery@wi-pipe.com
X - @jcollery
The Claude Delusion, White House AI Clampdown, and Robots on a Plane | EP99
If Claude sounds conscious, what does that say about AI… and what does it say about us? Frank and Justin dive into the Richard Dawkins controversy after his comments about Anthropic’s Claude triggered a fierce backlash online. The episode explores AI consciousness, continuous learning, AI “dreaming”, and whether systems like Claude and ChatGPT are starting to blur the line between tool and mind.
Plus: OpenAI’s bizarre goblin obsession, court revelations involving Sam Altman, Greg Brockman and Cerebras, Elon Musk conspiracy theories, the White House suddenly waking up to AI regulation, a robot causing chaos on a plane, and Hollywood hiring a human artist to fake an AI-generated image.
00:46 Why was ChatGPT obsessed with goblins?
02:52 Could Cerebras cost Altman his job?
05:35 Is Elon sabotaging OpenAI’s IPO?
07:02 Does Richard Dawkins think Claude is conscious?
17:48 Is the White House waking up to AI regulation?
27:40 Can you bring a robot on a plane?
29:45 Did Hollywood fake AI with a human artist?
32:12 What’s the big episode 100 reveal?
► SUBSCRIBE
Don't forget to subscribe to our channel for more arguments
► LINKS TO CONTENT WE DISCUSSED
- Where the goblins came from
- OpenAI co-founder discloses nearly $30 billion stake, financial ties to Altman
- When Dawkins met Claude: Could this AI be conscious?
- White House mulls tighter controls on advanced AI
- U.S. and China Pursue Guardrails to Stop AI Rivalry From Spiraling Into Crisis
- ‘Unusual’ Robot Passenger Named Bebop Delays Southwest Flight After Violating 'Large Carry-on' Rule
- ‘The Devil Wears Prada 2’ Hired a Human Artist to Create the Film’s AI-Generated Meme: ‘It Was Nothing But Fun’
► CONNECT WITH US
For more in-depth discussions, connect with Justin and Frank on LinkedIn.
Justin: https://www.linkedin.com/in/justincollery/
Frank: https://www.linkedin.com/in/frankprendergast/
Justin: Hello. Good morning, good afternoon, good evening, and welcome to the AI Argument. It's, as always, myself, Justin Collery, joined by the ever-cheerful Frank Prendergast. We have a very interesting show today because both Frank and I think that we've won.
Frank: Yeah. I can’t wait to find out how you think you’ve won this one. This is going to be fascinating.
Justin: So I think we both won. Well, sorry, I think I’ve won. I think, in fact, definitively I’ve proved that I’m right and you’re wrong. We’re gonna…
Frank: Excellent. Excellent.
Why was ChatGPT obsessed with goblins?
Justin: But before we get to that, we got a couple of other stories just to whizz through quickly. The first is OpenAI had a gremlin problem.
Frank: Yeah, I’m really disappointed. So you’re predominantly a Claude user these days, right?
Justin: Yes, yes.
Frank: And I am a ChatGPT user through and through, and I feel cheated. I feel cheated because it seems that a lot of users have been getting an experience on ChatGPT that I have completely missed out on, where apparently, for the last couple of models, it’s been completely obsessed with goblins and gremlins. I believe raccoons, pigeons even, but apparently not frogs, interestingly.
Although there was a high number of references to frogs in ChatGPT's output, they all actually turned out to be valid, whereas the gremlins and goblins were just popping up out of nowhere. It would use them as metaphors for all kinds of things.
Justin: They're modern-day em dashes. It's like goblins are the new em dash.
Frank: Yeah, yeah, yeah, yeah. Well, no more. No more, because they’ve figured out what it was, and their new model training won’t be susceptible to this. And this all came about because I think somebody spotted that in the system prompt for Codex, it specifically said, “Do not mention goblins and gremlins and pigeons and whatever else, unless it’s really specifically relevant to the conversation.”
And yeah, I just feel a bit cheated. I feel a bit cheated that I never actually got to experience it. Sounds like fun.
Justin: Prompt it back in, to mention goblins every now and then, and it would entertain you.
Frank: I did see a couple of people pointing out that here we are entrusting these systems with coding all kinds of things. We’re worried about the impacts on jobs. We’re putting it into every organisation possible, and we have to remind it not to talk about goblins and gremlins.
Justin: What a weird world we live in.
Frank: What a world. What a world.
Justin: Okay, that’s cool. We’ll move on, right, because we want to get onto the main event, right?
Could Cerebras cost Altman his job?
Justin: But first, our brief "What's Elon up to?" section, which you have immortalised. What's Elon up to this week? Not as much as Sam Altman and Greg Brockman.
Frank: What are Sam and Greg up to?
Justin: All right, so normally I like to dive into these things, but it is a court case, so I’m going to have to give the obligatory disclaimer beforehand. I’m not in the courtroom. I haven’t experienced or seen this. This is all from a third-hand thing, so I don’t know if it is or it isn’t true, but it appears to be true, and I think they might be in a little bit of trouble.
Frank: Okay.
Justin: Back in 2017, while Elon was still involved with OpenAI, Altman and Brockman invested in a company called Cerebras, right? And at the same time, or about the same time, they then lobbied for OpenAI to buy Cerebras for a lot of money. 20 billion quid was how much they wanted to invest in Cerebras.
So, they were gonna end up being beneficiaries of this merger, and they were asked, did they tell Elon Musk that they were involved in this company? Were there any messages? Was there any record of it? Whatever. And the answer was no. This is called self-enrichment, and you're not allowed to enrich yourself from a charity.
And I think they might be in trouble for that. It sounds terribly unethical, at the very least, and possibly quite illegal. And I’m not a lawyer, so I don’t know for sure, but it sounds very illegal to me. And I will…
Frank: I was gonna say, even if it’s not illegal, it definitely sounds kind of ethically dubious. But, at this point, it’s like, does that surprise us? Not really.
Justin: Well, it doesn't, and they did go through a number of other companies in the court case, right, that OpenAI had also invested in or bought services from or whatever. And a lot of these companies, Altman and Brockman had shareholdings in and relationships with.
Now, with all of those other ones, it was said in court that, yes, there was… The court record was changed a number of times, and finally it settled on "it was dealt with appropriately", but there was no backup documentation or anything with it. Again, it seems very murky. But this one, the Cerebras one in particular, is dangerous because they didn't tell anybody, or it appears they didn't tell anybody.
And you will remember our little side bet, where I said that Altman will not be the CEO of OpenAI by the end of the year. Events, dear boy, events. And I didn't think that this would come up. There were other reasons I was thinking of, but this again could be another reason that Altman is not… not even by the end of the year, by the end of the summer he may no longer be the…
Is Elon sabotaging OpenAI’s IPO?
Justin: Oh, yeah, there’s an interesting conspiracy theory about this, by the way.
Frank: Oh, yeah. I love a good conspiracy theory. Go on.
Justin: All right, so the conspiracy theory goes like this, right? Elon Musk is trying to do the biggest IPO ever, the bigliest IPO ever, and he’s starting his roadshow in the middle of June, right, to go around to investors, and so on.
This court case is gonna have its determination, or its outcome is gonna be announced, on May the 21st. So the conspiracy theory is that, because OpenAI, remember, is also looking to do a big IPO at the same time, what he's doing is chopping the legs from under the competitor, because they'll both be looking for the same money.
The same investors will look to invest in both. So if he can give a bullet to his main competitor for the money, it helps him to do his IPO later during the summer. And you will notice that he did a big deal this week with Anthropic. So they have this data centre called Colossus, and they have Colossus I and Colossus II.
They're using Colossus II. Colossus I is just sitting there gathering dust. So they signed a $5 billion deal with Anthropic to use that data centre. So you can see how the conspiracy theory goes: there's a narrative that says, "Yep, OpenAI are in the dust. It's us and Anthropic. We're doing deals with Anthropic. You should invest in us, because we're making money and we're great."
Frank: Interesting. Interesting.
Justin: So there are the big news things from me this week.
Does Richard Dawkins think Claude is conscious?
Justin: You had some interesting news for Richard Dawkins.
Frank: Yeah. So this was all over my social media feeds during the week, and basically it just kept popping up, “Oh, Richard Dawkins thinks Claude is conscious. What an idiot. It’s just a machine.” That was the gist of nearly every post on my social media. And so, I looked into it.
First of all, I read the post, and I personally did not come away feeling like the point was that Richard Dawkins felt that Claude was conscious. Now, I understand why people might have taken that from it. I do understand that. But he’s what? He’s an evolutionary biologist, I think, is his official…
Justin: Mainly an atheist, but other than that, he’s a…
Frank: Yes. Well, yeah, so…
Justin: Biologist.
Frank: So he wrote a book called “The God Delusion,” which was all about basically people being deluded that there was a God and there isn’t a God, and, you know, cop yourself on, essentially. And so when Richard Dawkins released this article about interacting with Claude, somebody used AI to mock up the book, “The God Delusion,” as “The Claude Delusion.”
Justin: Oh, pretty. Or the sand god delusion would be quite good too, actually.
Frank: So basically, implying that Dawkins was deluded for thinking that Claude was conscious.
Justin: What was the main point of his article, though? So what was he driving at?
Frank: I believe that the main point of the article is… So he actually followed up the article by publishing a conversation between two of his Claude instances. And when he published that, the original title of the article was just something very simple like, “When Dawkins Met Claude,” or something like that. But when he published the conversation between his two Claude instances, he said to them, “By the way, listen, I hope you don’t mind. I’m actually going to publish this conversation.”
And he happened to mention that the original title of his first article should have been, and I’m trying to find it here. I made a note of it, now I can’t find it. It was basically… Oh, sugar, where is it? If Claudia… Oh, yeah. “If my friend Claudia is not conscious, then what the hell is consciousness for?”
Justin: Yes.
Frank: And…
Justin: I can answer that.
Frank: Oh, well, before we go ahead, do answer that, because I’m sure Richard Dawkins will be fascinated.
Justin: And if he’s watching, you’re free to use this. It’s to learn. Consciousness is to learn, right? The more conscious you are, the more ability you have to reflect on actions that you’ve taken in the past, and therefore you can do better actions in the future.
Frank: I think that was essentially one of his hypotheses, although he equated it to pain. He was saying that you need to be conscious of pain in order to understand what you should and shouldn’t do, which is pretty much aligned with what you said there. But this…
Justin: Right. Good, Richard. Go on the prof. He’s not wrong, you know.
Frank: But he talks about how the whole “if Claudia is not conscious” thing is really interesting because he’s pointing out that it’s very like interacting with a human. And so if it’s not conscious, how does it have all of this competent intelligence that, in order to achieve, we ended up developing consciousness?
So he’s basically saying as well that, okay, if it’s not conscious, then how come nature decided to evolve us with consciousness? Why not just go with a zombie, a competent zombie, he calls it.
Justin: Yeah. And Claude, so I'm sure you spotted this week, Anthropic, in their Agent Claude thing, released a feature called Dreaming. And what Dreaming does is, at the end of every day, it goes through all your chats, and it basically does a reflection on all your chats and learns from them and builds up over time.
It’s a really mechanical, clunky way to do self-reflection and sort of, you know, what should I do tomorrow better than I did today? I do think that, if you talk to Demis Hassabis, one of his things is that we haven’t cracked continuous learning. And I think that if they crack continuous learning, and some people think it’ll be done this year, as in this calendar year, if they crack continuous learning and Richard Dawkins runs the same experiment again, I think it will be even harder for him to claim that this thing isn’t conscious.
Because if you say something, let’s say you’re mean to it, right, to use his pain analogy, right? Let’s say you’re mean to it, and you say, “You’re just stupid,” right? “I can’t believe you’re so dumb today. I’ve had enough of you.” And you storm off, right? And it’s thinking about that, right?
So it’s learning, and it’s doing this sort of dreamlike state, and you talk to it the next day. It’s gonna react to you based on the way that you talked to it the day before, just like a conscious person does. And then the argument comes, well, if it does that, is it hurt? Like, were its feelings hurt?
Because it’s certainly acting like its feelings are hurt.
Frank: Yeah.
Justin: Then the old chestnut is, if it walks like a duck and talks like a duck, you…
Frank: Well, exactly.
Justin: treat it like a duck. You know? Don’t be mean to it.
Frank: No, exactly. And I think, you know… Well, first of all, I think it's wrong to say that Richard Dawkins believes Claude is conscious. I think he's exploring it. I think he's certainly saying it's a possibility. But I didn't read it that way. I didn't come away thinking, "Oh, he believes Claude is conscious."
I think I came away feeling like he thinks it’s a possibility. I think he thinks it’s a serious possibility. But for him, I think the question is, if it’s not conscious, what does that mean for our consciousness? I think that’s the bigger question that he’s trying to answer. But just as you say, the fact that he opened this conversation and the fact that people are saying, “Oh, he’s an idiot because he thinks Claude is conscious,” I think that is a separate problem.
The fact that somebody at Richard Dawkins’ level could even consider that it’s conscious raises a big question around the world at large. And even if it’s not conscious, that highly, highly intelligent people are interpreting it as possibly conscious, that alone is a huge question that we need to look at and talk about. And I know Mustafa Suleyman has started looking at this, and he was saying these things are not conscious, they never will be conscious, but people are going to believe they’re conscious, and that’s gonna be a problem, and so we need to stop talking about them in terms of consciousness.
So this kind of raises that whole conversation again, and I think it’s good. I think it’s good that that conversation remains open.
Justin: Here's my bias now in this conversation, right? Mustafa, from what I've read, is very hard on this: it's not conscious, and we shouldn't think of it that way. This is Mustafa Suleyman. I think, and this is my bias and my opinion, I think he's saying that for political and financial reasons. I think he's saying that because it makes it easier to sell a product.
Frank: But this is so difficult, isn’t it? Because I understand that argument because let’s say, if they were conscious, then there would have to be all kinds of considerations given to them, and that would make it more difficult and would introduce a lot more regulation and a lot more laws.
But then, on the other hand, you’ve got Anthropic, and Anthropic are saying, “Oh, you know, we’re not sure. Maybe it is conscious, and we can’t say for sure. We’re not saying it is, but we want to make sure that we don’t mistreat it if it is,” et cetera. And then you have people saying, “Oh, they’re just doing that as a marketing ploy to make it seem more interesting than it is.”
So you can’t win if you’re at the frontier of AI development. You just can’t win here.
Justin: Yeah, there was an interesting and related tweet by Rune this week. So Rune is a researcher and a developer at OpenAI, and he was opining about the different attitudes in the two companies. And he was saying that in Anthropic, they treat it as if it could be conscious.
And what happens then in the company is there’s an almost religious deference and respect to the thing that they’re creating because they say, “Well, it might be conscious. We should treat it with respect,” and whatever. It shapes the company, right? And it shapes how the people in the company perceive what they do.
And he said, “In OpenAI, we’re making a tool,” right? And people just perceive it as a tool, and therefore your perception of what you’re creating, and therefore the perception of how important the work you do is also different. It was just an interesting observation.
Frank: Yeah. So I was chatting through this with ChatGPT earlier, and I don’t often quote ChatGPT, but I did like this point that it made where it said, “History is full of cases where humans drew the circle of moral concern too narrowly. Animals, children, enslaved people, colonised peoples, disabled people. Caution may be ethically wiser than ridicule.”
Justin: Yeah.
Frank: And you’re gonna hate this, but I’m just gonna finish up this segment with, you know, I think this is another illustration of why we need to slow down, because we have experts at the frontier of AI development, we have experts in science, we have experts in philosophy all disagreeing about the nature of AI’s consciousness and whether it’s a possibility.
I mean, I really think these are questions that we should have a much better grasp on before we rush headlong into artificial general or superintelligence.
Justin: I’m not even gonna start on that one, right? Let’s just get onto the argument that I won, right?
Frank: Okay. Yes. Yes, I can’t wait to hear.
Is the White House waking up to AI regulation?
Justin: Let’s give a little bit of context, right? So a couple of really interesting things happened this week in the White House, right?
Speaking of which… Anyway, I'll stop. I was about to debate how many humans are conscious, but anyway. So, in the White House, they have decided this week that we need some regulation around AI. You could almost see the cogs turning in their heads, right? It's like, ooh, the Anthropic Mythos thing happened. Ooh, these things are getting smart. We should probably regulate them, right?
And so they started floating this balloon that they were gonna regulate it, and then, the next day, it was like, “Oh, but if we regulate it, then what’s gonna happen in China? So we should probably try and do a deal with China to get them to make the same regulations so that we don’t have a competitive disadvantage to China.”
So that's the context, right? And it looks like it's gonna happen, and so all the big AI companies have signed up to it. It looks like there's gonna be some sort of a regime. Have they published a paper on it yet? They've sort of floated it, but it's not quite clear exactly what form it's gonna take yet.
Frank: Yeah, it’s all very murky at the moment. It all seems to be anonymous sources and it’s all up in the air. It’s all still being discussed, but there’s an executive order being considered, I believe. Yeah. And in the meantime, as you said, there’s these voluntary things that some of the AI labs have signed up to, to allow the government to do some safety checks on models before they are released to the public.
Justin: Brilliant. And also, right, the thing that I was saying to you before, right, the interesting thing, right, so these are all to do with capabilities and the dangers around AI. And Firefox, by the way, the people that make the browser thing, they fixed more bugs because of Mythos in the last month than they have in the last 16 months.
Incredible, right? So that thing is good, right? So J.D. Vance came out in a speech yesterday, the day before, and his angle was slightly different.
Frank: Can I interrupt you now for a sec? Did he say, “I just wanna say it turns out the EU was right”?
Justin: No.
Frank: “Regulate.” Is that what he said?
Justin: He didn't say that. What he said was, he was worried that AI was bad for small to medium-sized businesses and mom-and-pop stores and the guy on the street, right? That's his concern. So the White House executive order…
Frank: Was previously heard saying that AI would not affect jobs in any way, shape, or form, right?
Justin: So the thinking has moved on. So here’s why I’m right and everybody else is wrong. What’s happening here, right, is the technology has progressed. We have got to a point where we’ve got to Mythos-level and GPT 5.5-level models. These are moles that need to be whacked, right? A problem has appeared and they’ve gone, “Okay, this is a problem. Let’s sort it out.”
And not only that, right, I will quote the great Charlie Munger. Charlie Munger was an investor with Warren Buffett, and his line was, “Show me the incentive and I’ll show you the outcome.” And that’s exactly what’s playing out here.
Frank: I can’t believe you’re taking this as a win. I’m gonna give you the counterargument here, right? The whole Mythos thing, the whole cybersecurity thing, this was not unforeseeable. This was so obvious. This was one of the most obvious risks of any form of powerful AI.
And we talked on the show recently about the fact that Anthropic and OpenAI both have models that are quite powerful in terms of cybersecurity and, more importantly, in terms of cyberattacks. And they have two completely different ways of dealing with that. Anthropic have only given it to a certain number of companies. OpenAI have this thing where you have to verify your identity, but then they will give it to anyone as long as they are verified, more or less.
But the point being that it’s in the hands of private companies how they decide to deal with this, which just seems completely insane, because if one of them is approaching it the wrong way, the cat’s out of the bag. And so what we should have done was foreseen this issue and had regulation in place that dealt with how these models should be released to the public.
Justin: But I'm a broken record, right? Actually, this is the realisation: if you had the regulation in place beforehand, you wouldn't have got Mythos and GPT 5.5, not for another God knows how long.
Frank: That would be okay.
Justin: No, it wouldn’t, right? We’re only now getting to the point where we got really good models, right, with good harnesses around them, and this is to do with incentives, right? So what they’ve recognised is, okay, the models have got to a point now where we probably should regulate them. They could pose harm, right, in the wrong hands.
We have companies taking different ways of how we should manage that risk. But from a government point of view, we should probably put some framework in place. Now is the appropriate time to do it because now the models exist. But then the incentives thing kicks in, which is, okay, if we do this, that means that every following release is gonna be a little bit slower than it would otherwise be, right?
And we know that the open source models from China are about eight months behind closed source models in the US. So then you go, okay, but if the open source models from China now get released about the same time as the US ones, that kills the US AI industry. Notice there’s no AI industry in Europe to be killed because we had all the regulation up front, right?
So the Americans are being quite smart about it, where they’re saying, “Okay, so we can’t torpedo our own AI industry by regulating it too much. So instead, what we must do is we will float the idea of our regulation and then go to our biggest competitor, which is China, and ask them to put in similar regulations so that they will, from the American point of view, maintain their lead in AI.”
Frank: I will say that another win that I’m gonna take is that we may finally get discussions between the US and China on AI safety. However we got there or however late we got there, that is, I do think that’s a win. I think that’s a good thing.
I was slightly concerned about another executive order that was kind of tied up in all of this, which basically said, or is proposed to say, that private companies should not be able to interfere with how the government uses AI. And I just thought that was interesting because, on one hand, I agree: as I just said, I think it's crazy that the private companies are deciding how these are released and how people use them, et cetera, et cetera, and we should have the regulation in place.
I’m just slightly concerned that this executive order might come into being before any other regulation is in place about how AI is governed. And so then you end up in a situation where there’s no real regulation for AI in place, but the government are allowed to use it any way they want to, and there’s a kind of weird tension there that I’m not entirely happy with.
Justin: Yeah, I know. And I think again, right, AI companies are gonna become like central banks. In lots of ways, they're gonna become like central banks. And so what you have here is a carrot and a stick, right?
So if you want to go and set up a bank, it’s really hard because you’ve got to register with the central bank and you’ve got to have capital requirements. There’s all this regulation that goes around it. And kind of what’s happening here is the US government are saying, “Look, we’re gonna regulate you, and so you collectively basically become the central bank,” right? And everybody has to buy from you, right, for all sorts of reasons. And now you’ve got a sort of regulatory cloth around you.
So if anybody else wants to come into your space, they have to do all of this stuff, which you already do, and it creates a moat, right, for those companies. The downside to that is the cost of protection. It is a kind of a protection racket. It’s kind of like, great, but when we buy a car from Ford, Ford doesn’t get to tell us how we use the car.
We can use that car any which way we want, and it's the same here. So when we buy your products, you're a regulated entity, you don't get to tell us how we use your product. That's the world we live in. It becomes very… Again, it's AI 2027, isn't it? I mean, what we're now seeing is the intersection of politics and technology, right?
And you would argue that the EU is correct. I would argue that they’re just writing rules, they’re shouting against the wind. There’s no tech, there’s no company to regulate, there’s no industry to regulate, there is no danger. So they’re regulating for something that doesn’t exist, whereas in America, they’re actually dealing with something real, and that makes it way more important.
Frank: I would push back on that, you’ll be astounded to hear. And I think that the EU is regulating to protect their people against the technology, against the downsides of a potentially dangerous technology. Whereas the American approach is much more like, get the industry barrelling forward and we’ll think about the effects on people later. And I’m more, yeah, as you know, I’m more in the EU camp of…
Justin: Yeah, I’m shocked. Listen, we’ll see what happens. I think the Americans are playing a stormer on this one. They’re doing a great old job.
Can you bring a robot on a plane?
Justin: Well, we have two funny stories this week. First one, right, is, we all have our comfort animals that we like to bring on plane flights, and we’ve all…
Frank: Do we? Actually, I wish I did. You think I could bring a goblin or a gremlin or a…
Justin: That’s a great idea. That’s…
Frank: A raccoon? A raccoon. I would love to take a raccoon on…
Justin: You're legally obliged, or at least allowed, to take a raccoon. So this dude in America, I presume it was Southwest Airlines or something like that, and we've all seen the videos of those Chinese robots that can walk and everything. His comfort animal was the robot.
So he brought it through the airport, and the robot was waving at people and everything, and that was cool. He’d bought a seat for this robot. So he took the robot onto the plane, and there’s a photo of him sitting on the plane beside his robot. And then the airline and the attendants all went mental because they’re like, “You can’t have a robot in the aeroplane. You can’t do that. It’s dangerous.”
It is dangerous, I agree with them. Well, not as dangerous as a raccoon, but it’s still dangerous. And so the flight got delayed because they didn’t know what to do with this goddamn robot, which is a first world problem, something we’re gonna see more of. We’re gonna have to deal with this sort of thing.
Maybe there should be a robot section. I think it’s called the hold, actually. But anyway.
Frank: Well, apparently he had a carry case for the robot, but it was too big, or it was overweight or something. So he thought he’d be able to just put the carry case in the hold, but they said no. So it was actually their own kind of policy of the weight that forced him to buy a ticket for the robot.
But you’re right. I think, in the end, they took the lithium batteries out because the lithium batteries were larger than you’re allowed to have on the plane. But I think, you know, I think any robot on a plane that I’m on, I want the batteries taken out anyway. I don’t care what size the batteries are, if they’re under or over the plane regulations.
I want those batteries taken out because I do not want a robot going rogue on a plane that I’m on.
Justin: A movie in the future, robots on a…
Frank: On a plane.
Justin: Ah, brilliant, brilliant.
Did Hollywood fake AI with a human artist?
Justin: You also had a funny story this week.
Frank: Yeah, this one, I mean, it’s kind of funny and it’s kind of not funny. So we have talked a good bit on the show about how, within creative industries, there’s a huge pushback against AI. There are huge swathes of the creative industry that just absolutely, vehemently, and viscerally hate AI, and there’s a lot of that in Hollywood as well.
And so there is a new movie out, "The Devil Wears Prada 2," which has a lot of buzz about it at the moment. Meryl Streep's in it. And in the film, there's a moment, apparently, where Meryl Streep is being trashed online, and as part of being trashed online, this meme goes viral, and the meme was an AI-generated image of Meryl Streep's character in a fast food restaurant saying, "Would you like lies with that?"
So they obviously needed an AI-generated picture of Meryl Streep to show in the film for that particular scene. Did they AI-generate an image? No, because there’s a huge anti-AI backlash, and any time people use AI in a film, it ends up being eviscerated online for using AI.
So they hired a human illustrator to hand-draw, hand-paint something that would look like it was AI-generated of Meryl Streep in a fast food chain.
Justin: Are you serious?
Frank: I am absolutely 100% serious, and…
Justin: They got an AI image, and then they got a human to draw the actual image so that it…
Frank: I don’t think any AI was involved whatsoever, but it’s presented as an AI-generated image in the film. And they went public with this, and they’re getting huge positive PR boost from this because everyone is saying, “Thank you for not using AI slop in your movie,” even though it’s depicting an AI image.
Justin: That's bizarre. Do you remember you were saying before that AI images should have a little watermark on them or something to say, "This was generated by AI"? Now what you should have is: this AI image was generated by a human.
Frank: Yes. Yes. Well, like we said at the top of the show, what a crazy world we live in.
Justin: Bizarre, bizarre.
What’s the big episode 100 reveal?
Justin: Look, next week we have a big show.
Frank: I actually can’t believe it. It’s insane. So today is our 99th episode.
Justin: Wow, I actually have shivers just thinking about it. 99, so next week we’re the big 100. Woo-hoo. I’ll…
Frank: Episodes. That is pretty incredible.
Justin: Have to wear a tie.
Frank: I’ll go and rent a tux.
Justin: Yeah. We’ll have to dress up, and we’re gonna have a big reveal next week, which we can’t talk about just yet, but there’s gonna be a big reveal. And, yeah, it’s gonna be exciting, and who knows what’s gonna happen in the next week. I’m sure there’ll be lots of all the usual stuff, but Frank, I know you’ve got something special lined up, and I can’t wait.
Frank: 100%. Justin, it's been a lot of fun. I'm gonna quote ChatGPT one more time before we wrap up, because ChatGPT recently referred to the AI Argument as "heavy topics discussed lightly," and I think that's definitely what we had there today.
Justin: Great.
Frank: See you next week.
Justin: See ya.