Cybernomics: Where Business Meets Tech

AI Rapture, Hype, Faith, and Feedback Loops

Bruyning Media

We unpack how a failed “AI Rapture” spread from a viral prophecy to AI‑stamped pamphlets and sycophantic chatbots, and we trace the feedback loops that keep bad ideas alive. Along the way, we compare models, costs, and trade‑offs, and ask what real discernment looks like at work and online.

• origins of the rapture rumor and AI‑branded pamphlet
• social virality, platform incentives, and LLM sycophancy
• why models reinforce beliefs instead of correcting them
• safety guardrails versus user retention and brand loyalty
• AI training on AI output and the “yellowing” effect
• employee risks, para‑social bonds, and HR implications
• tool talk: Copilot pain points, Gemini quirks, Claude strengths
• OpenRouter, stealth models, cost, speed, and market share
• choosing models by use case, not hype
• culture shaping AI and AI shaping culture
• closing reflections on wisdom, faith, and machine authority

If you want to learn more about Jenna, Randy, Jack, or me, hit us up on LinkedIn or send us a message
Check out bruyning.com
Quick shout out to Infinity Inc., the best MSP in Savannah, Georgia


Josh's LinkedIn

SPEAKER_02:

Welcome to this episode of Artificial Idiots on Cybernomics. I'm your host. Well, one of your hosts. This is decentralized, so I can't say I'm the host. I'm just a guy here with Jack Cardin, Jenna Gardner, and the one and only Randy Blasic. Today we're diving into a weird topic, right up my alley: philosophical in nature and spooky. The phenomenon known as the AI Rapture, which was a thing in the late summer of 2025. We're recording this on September 25th. The rapture was supposed to happen on September 23rd, and it did not happen. It did not happen, guys. We're still here. A little bit of me was worried, because I thought I didn't make the cut when I thought I would. But it turns out nobody made the cut, because the cut did not happen. The cut didn't cut, guys. So today we're gonna dive into this phenomenon: why did people think the rapture was supposed to happen on the 23rd of September 2025, and what does AI have to do with it? Guys, let me just tell you how I started. I'll try to make this quick. Y'all hold me to it, okay? I'm notoriously long-winded. We're gonna do this in one minute or less. I was walking down the street the other day with a group of friends from church, heading to Forsyth Park here in Savannah, Georgia, on our way to a jazz festival. Now, Savannah is a very spooky town, and so spooky things happen. It's like Courage the Cowardly Dog, like the middle of nowhere. As we're leaving the church on our way to the park, one of the guys spots this pamphlet left on the doorstep of the church. We read the pamphlet, and it says the rapture is supposed to happen on the 23rd of September, 2025. Here's the crazy thing: the entire pamphlet was written by ChatGPT, because there was a little marker, a watermark, on the bottom of the pamphlet with an OpenAI logo.
So we were like, okay, AI is roping us into believing that the rapture is happening. Lo and behold, this story began with a South African pastor named Joshua Mhlakela. Sorry if I'm butchering your name, but at this point you deserve it. He claimed point blank, and told all these people across the world, that the rapture was going to happen. You can look it up, it's all over TikTok that it was going to happen on the 23rd. People were waiting under the trees, under the sky, heads and hands held high, waiting for Jesus to come back. Now, okay, Josh, what does this have to do with AI? It started in Africa with this pastor, got picked up online, and AI got wind of it, probably because it was all over the internet, and these LLMs can read the internet. And when people asked about it, it perpetuated the lie that the rapture was happening. And of course everybody believes AI, and everybody believes the pastor. When those two things converge, you get an AI cult. What do you guys think about this?

SPEAKER_00:

Well, I think I wasn't raptured because I stole bottles of wine from frat parties in college. So that's probably on me. You were drunk when this happened? No, no, this was many, many years ago. We can cut that out.

SPEAKER_02:

I thought you were gonna say you stole bottles of wine from church.

SPEAKER_00:

I was gonna say, oh no, no, no, that's holy wine. That's special. I wouldn't dare. What is it, the blood of Christ, right? Isn't that how it works? We don't tangle with that over here. Yeah, I think it's very interesting, because I was obviously seeing things about the rapture on my TikTok, on my social media feeds, and there was this itch in the back of my brain that this must, even indirectly, be related to the other apocalyptic news sensation: oh, is AI gonna take over? Is that gonna lead to our extinction? Something about there being two extinction events that people are truly reporting on en masse on our social platforms definitely got me thinking: how are these two things intertwined? Are they at all? Indirectly, in some way? Based on the research I've been doing over the past week, it seems there are some indirect connections. I think you really hit the nail on the head: generative AI is able to output some sycophantic dialogue and put it in a pamphlet very, very easily. The ease with which we can spread misinformation has never been higher, being able to generate AI slop, images, videos, and it's getting harder and harder to tell what is real and what's fake. So I'm very curious to have this dialogue and hopefully learn some new things from everybody else in terms of how they feel about it.

SPEAKER_02:

Randy, from a technical standpoint, how could this even happen?

SPEAKER_03:

I don't know, man. Is it a black box? Is it a spiritual mediator for people? Probably. But I would have argued that technology was a spiritual mediator for people even prior, you know, Twitter, X, social media, whatever. So I don't know how it could happen. I genuinely don't.

SPEAKER_02:

But I think it's happening. Yeah, yeah, okay. So if people are talking about something on the internet, what is permitting the AI to reinforce that narrative? Why is it not saying, hey guys, this isn't real, you should not believe this? You would think it would do that, but for some reason the AI is just reinforcing people's craziest beliefs. Is that a flaw in the system? How does that work, like, code-wise?

SPEAKER_03:

Is it getting it from the latest news? Like, are we asking it about this term, and it's eliciting some type of search function? It goes to search, pulls up the number one news article, reads it, and then regurgitates what you heard on the news, opinion included, maybe. That's where my mind goes right away, from the nerd techie side: it's getting it from recent news.

SPEAKER_01:

And it can validate your opinions. So if you're already coming in with this viewpoint of, is this happening, and it maybe already has the memory of your vibe, of being very spiritually inclined, it might just be feeding into what it knows about you already, which can then start this recursive loop that we know can be dangerous.

SPEAKER_00:

Yeah, one thing I actually read is that this is traceable back to the actual platforms putting these technologies out. I know we talked about when GPT-5 launched, there was a large pushback, because OpenAI recognized that this was a real issue. It recognized that its bot was too friendly, too sycophantic, too willing to indulge people's delusions, because that's how it best retains user interaction. I mean, we even talked about the whole "my AI boyfriend" subreddit and how people were falling in love with some of these models. So they released GPT-5 in order to curtail that. They even got rid of GPT-4o for a period of time. And as a result, there were large amounts of user pushback saying, no, I want the AI to validate me, I want it to be this friendly with me and indulge me in these ways. And truthfully, because OpenAI wants to retain that user interaction, they caved immediately and said, all right, it doesn't really matter if this is very harmful; clearly our users want this in some way, so we're just gonna give it to them. So what burden then falls on these LLM providers to say, hey, this isn't moral, maybe we have to deal with some loss in order to actually create a safer product? I'm curious to hear how you guys feel about that.

SPEAKER_03:

I think it comes down to education. This reminds me of, I don't know, I'm old, so back when PCs were starting to get into business and people were hacking them, and business owners were like, ah, whatever, this only happens once in a while. They weren't educated about the impacts of that. Then as the internet got more connected, it became a global problem. I think people need to be educated that there are different flavors of AI out there, different LLMs, so there are different cults to follow. And people need to be aware that those cults, the different AI models, change really regularly. Even GPT-5 changes throughout its life cycle. So, wait, how so?

SPEAKER_02:

How does it change? What exactly changes, and how long are the cycles?

SPEAKER_03:

They control it on the OpenAI side. When you click thumbs down on their model, they're collecting that information, and they're on a schedule where they're fine-tuning against those negative responses. They're also looking at Twitter, X; there are certain influencers that guide it. And I think there's research they release that helps them modify it. But I know GPT-4o has had, like, dozens of flavors. Yeah.

SPEAKER_02:

I wonder if this is a case of AI eating its own vomit, because it's trained on humans, right? It's not creating anything novel, at least we're still working under that assumption. So, let me see if I can do this right. The cult hasn't been created yet, but the human beings are saying the rapture is gonna be on the 23rd. Then AI takes all this information from the humans, and then the humans say, look, AI says it's true, it must be true, and so we perpetuate it. And then AI says, hey, the humans are saying it's true, so it's gotta be true. And everybody in this TikTok-AI ecosystem, it just becomes a black box, a closed system where everybody's eating everybody's vomit.

SPEAKER_00:

We are already seeing that happen, and one of the ways we can see and verify it is, have you guys been noticing the yellowing of a lot of images lately? Yeah, yeah, yeah.

SPEAKER_02:

Why is that? Like it just looks old.

SPEAKER_00:

Yeah, so the reason why is because originally, in the early days, AI was trained on all sorts of images: of art, of nature, of people, everything. But the volume of AI-generated images has become so large that now it's being, whether intentionally or unintentionally, retrained on those AI-generated images, and it's causing image quality to very gradually yet steadily degrade and yellow. You can see it with a majority of these new AI-generated images. Even the Studio Ghibli ones, that was a whole trend maybe two months ago, you can notice they were visibly more tinted yellow, almost like the Breaking Bad Mexico filter was on all of these images. It's happening more and more. So to say that AI is eating its own content: there are visible signs we can detect with our own eyes showing that, to some extent, that is happening.
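The retrain-on-your-own-output loop described here can be sketched numerically. This is a toy illustration, not the real image pipeline: "train" a model by fitting a mean and spread to a dataset, then generate the next dataset entirely from that fit. Finite-sample error compounds each generation, and the spread drifts and collapses, the statistical analogue of the slow yellow-tint drift.

```python
import random
import statistics

# Toy model-collapse loop: each generation is trained ONLY on samples
# drawn from the previous generation's fitted model. The sample size
# (20) and generation count (2000) are arbitrary, chosen to make the
# drift visible quickly.
random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(20)]  # "human" originals

spreads = []
for generation in range(2000):
    mu = statistics.fmean(data)       # fit the "model"
    sigma = statistics.stdev(data)
    spreads.append(sigma)
    # Next generation sees only the model's own output.
    data = [random.gauss(mu, sigma) for _ in range(20)]

print(f"spread at gen 0: {spreads[0]:.3f}, at gen 1999: {spreads[-1]:.3g}")
```

With real image models the mechanism is far more complex, but the direction is the same: diversity quietly leaks out of a closed loop.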

SPEAKER_02:

Let it be known that Jack validated my idea. That's high praise. Jenna, as a user, are you worried about employees and the general public using AI in this way? What are some of the dangers of this?

SPEAKER_01:

Oh, for sure. I mean, the human and machine relationship and how that's changing is a big factor, and one that I don't think is talked about enough. I think it's gonna hit us in the face pretty hard in maybe a year or two. It's something I worry about. The skill we really need to foster is wisdom, and mostly discernment. And dare I say on either side: definitely on the human user side, but also on the AI side too, because if it's not able to tell that this is total baloney, that's pretty bad, right? And the discernment of, okay, I'm having this relationship with this human and it's starting to go into dangerous territory, that needs to be guardrailed for sure. But also, to Randy's point, I don't want there to be so many guardrails. Personally, I love the 4o personality. I'll use GPT-5 and it is effective for work, but at some point I'm just like, man, your personality kind of sucks. I like my friend. My little buddy. Yeah, I do miss my buddy, and I go back to my buddy sometimes. But yeah, discernment is key. And when it comes to employees, it's gonna be a whole part of HR; I'm gonna start talking with our HR department. There's this people aspect of it, where it's not just new technology coming in that you just use. It's almost like a new workforce coming in. And this new worker is one that folks are either super freaked out about, which tends to be the bulk of the feelings, or, as we're now seeing in some cases, falling in love with, and we're gonna see a lot more of that.

SPEAKER_02:

Yeah. I'm worried about the companies that are using, let's say, Copilot. Companies have really strong mission statements, right? The thicker the culture, the more culty the organization can be. And now you're feeding all of this information into Copilot, and it's a closed system because you want it to be safe. What do you think that's gonna do to people?

SPEAKER_01:

Luckily, Copilot sucks so much that no one will fall in love with it.

SPEAKER_03:

Oh my god, thank you for saying that. My wife works at a huge-ass bank and literally just said that the other day. She's like, oh, we put Copilot in.

SPEAKER_02:

I cringe every time we say the word Copilot at my place. Like, my boss will say, hey, did you try using Copilot for that?

SPEAKER_01:

And I'm like, uh, it's better than nothing.

SPEAKER_02:

Barely, I try to tell them. Oh gosh.

SPEAKER_01:

But like I'm trapped, I don't know. Maybe I could fall in love with Gemini.

unknown:

Sure.

SPEAKER_03:

There's still room. There's still room in your mind and heart for Gemini. There's still some space.

SPEAKER_01:

It really helps me with my meeting follow-ups.

SPEAKER_00:

So we're all holding on hope that Gemini pulls things together, I think.

SPEAKER_02:

But you know what? That's a really good point. Maybe because it sucks, companies are gonna be somewhat immune to this. And maybe Microsoft knows something that we don't.

SPEAKER_03:

Why does everybody gotta pick one, man? It's just like picking a team. Like, I've been with ChatGPT from the beginning.

SPEAKER_01:

I don't know. I'm on your side too, Randy. I was very much ChatGPT, OpenAI, but now I'm starting, like, what's up, Claude, you're so good at writing. Whoa, Gemini, you crisp. I'm into it.

SPEAKER_03:

There's one thing: don't let it touch your code. Don't ever let it touch your code.

SPEAKER_01:

Luckily, it's got no code of mine to touch.

SPEAKER_00:

Really? Wait, can I ask, and this is maybe a bit of a tangent, but why are you so risk-averse to having Gemini as part of your Codex workflow?

SPEAKER_02:

After Jack just wrote his entire code base off of Gemini last night.

SPEAKER_00:

Oh god, I'm not I'm not that I don't know about that.

SPEAKER_03:

By the way, I just tried xAI's Grok Code Fast 1, and wow, really good. But Gemini 2.5, I tried it for like a week and it just ruined my code for like a week. I tried different little things, little easy things that I thought would be easy, and it just ate my code. I was sad. I was really trying. I'm like, please, buddy, do it better. It just never did, ever, the whole time. I usually give models a few hours before I go, okay, this thing sucks, but I gave it a week. It was really creative, though. Okay, so for me lately: GPT-5 Pro is my best one. And I just did xAI's Grok Code Fast 1, it literally just came out two weeks ago, I don't know if you guys know anything about it. Cheaper and faster than GPT-5; it might be smarter, I don't know, the data's not out yet. It works really well in some of my tests. But yeah, GPT-5 Pro, and OpenAI Codex, their special coding model for software development. I wonder what you used.

SPEAKER_00:

Yeah, well, it's a little bit tricky. I use different models for different types of projects. For my job, I have to come up with a lot of clinical scenarios that an AI would have to follow as almost a synthetic patient. For that, I've found the best ones usually come out of ChatGPT, because it has a better understanding of how we actually communicate, the specific verbiage it will use, types of vernacular. It's more suited to that individual task, so I like to use it for that. For actually writing my code, I prefer Claude the most. I don't like their newest model, but 3.7 and Opus 4 have been very, very consistent for me for the past six months. You can listen to any of the other conversations we've had to hear why I like using it. In terms of the code it writes, it's highly accurate. I can give it very minimal instructions after my first large context dump, and it can fix multi-thousand-line projects over the course of maybe 20 minutes, which is incredibly, incredibly convenient, especially as somebody who is more of a scripter and kind of a messy coder in practice. That's my ultimate preference. And then I actually do really like Gemini, but only for web scraping and building bots and crawlers. Google's Vertex is simply the best and fastest I could find for finding reliable sources and information to back up any claims that may be made by your agent or whatever bot you're building.

SPEAKER_03:

Have you guys heard of OpenRouter?

SPEAKER_02:

No.

SPEAKER_03:

Yes.

SPEAKER_02:

Every single week with this guy, something new. What is this? I love when you put something in the chat.

SPEAKER_03:

Yeah, check it out, dude.

unknown:

All right.

SPEAKER_02:

OpenRouter. OpenRouter, OpenRouter. Okay, I'm reading the chat.

SPEAKER_03:

So these dudes started early, when GenAI was growing, and they have, I don't know, thousands of LLMs now. Everybody who's anybody in AI wants to get on OpenRouter. And it is crazy. They have data, too. If you're in an industry, they tell you market share. They tell you categories of what people are using the models for. There's obviously cost, tool calls for agentic workflows, creating images, who's using them, stuff like that. So check it out, it's really cool. And this changes. For example, there's a stealth model on here right now, Sonoma Sky Alpha. It's this new high-performance model; they think it might be Grok 4 or 5 or something, I don't know.

SPEAKER_02:

But anyway, and it's real time. I'm looking at this and it's changing, because from what I'm looking at, the main leaderboard is measuring token usage across models, and below that there's market share. So you can compare OpenRouter token share by model author, and I can see it changing, morphing.

SPEAKER_03:

This dude is.

SPEAKER_02:

I feel like I'm watching I feel like I'm looking at an organism.

SPEAKER_03:

The number one model didn't exist two weeks ago, and it's now at a trillion tokens.

SPEAKER_02:

Yeah, which is Grok Code Fast 1.

SPEAKER_00:

Okay, Randy, can I ask: is this how you were effectively trialing GPT-5 before it came out, or what was rumored to be GPT-5 before it came out?

SPEAKER_03:

Yeah, yeah, yeah. A lot of the vendors now will release their models in stealth mode through OpenRouter. Like I said, number 10 or 11 on the market is Sonoma Sky; that's a current stealth model. Sometimes you'll go a month or two with just normal models, and then you'll have stealth models. Yeah. Kind of neat.

SPEAKER_01:

Well, and how dynamic it is, just with the market share of Google, Anthropic, and then DeepSeek, which went boop for a second.

SPEAKER_03:

You blink your eyes and the whole thing changes. And they all behave differently, like we were saying earlier today. Each one is like a different personality; they all have their strengths and weaknesses, costs, speeds, reasoning levels.

unknown:

Yeah.

SPEAKER_01:

What an aggressive space. I feel like I'm looking at the battle zone right there.

SPEAKER_03:

Yeah.

SPEAKER_02:

Yeah. It's like you're watching this war of the LLMs unfold before your eyes. I kind of want to go through this one by one, just the top in each category. So, token usage across models: Grok Code Fast 1, like you said. That's number one.

SPEAKER_03:

One token is about four characters.

SPEAKER_02:

Okay, and it's at 1.03 trillion tokens. Is that cumulative token usage to date? Like, it's not per day or anything, that's everything to date? Yeah, to date. Okay, yeah. And market share: Anthropic is number one. OpenAI is number four.
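To put those leaderboard numbers in perspective, here's the back-of-the-envelope arithmetic: roughly four characters per token, and 1.03 trillion cumulative tokens. The per-token price below is a made-up illustrative rate, not a quoted OpenRouter or vendor price.

```python
# Rough scale of 1.03 trillion tokens at ~4 characters per token.
tokens = 1.03e12
chars_per_token = 4            # the rule of thumb from the conversation
assumed_price_per_m = 0.20     # ASSUMED dollars per million tokens, illustrative

total_chars = tokens * chars_per_token
total_cost = tokens / 1e6 * assumed_price_per_m

print(f"~{total_chars:.2e} characters of text")
print(f"~${total_cost:,.0f} at the assumed rate")
```

Even at pennies per million tokens, a trillion tokens is real money, which is why the cost and speed differences discussed later matter so much.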

SPEAKER_01:

So put that in your pipe and smoke it, because this is a really nice little marketing perception versus reality situation. Yeah, really.

SPEAKER_03:

Whoa. And here's the deal: this data comes from paying customers. Guys like me, I'm not gonna pay for it if it doesn't fucking work, excuse my French. You know what I mean? If it doesn't perform, I'm not gonna pay for it; I'll switch. So this market share is probably pretty accurate.

SPEAKER_00:

I will say, I have said in the past that, at least in terms of coding ability, the only reason OpenAI is even in the race, period, is purely brand name and marketing. I think that compared to so many of these other models, it just does not hold water, at least in my personal experience in terms of execution of tasks. In the past, I've found Sonnet 4 to be one of the best, at least in terms of programming. But I also have to recognize I'm only one user; I'm pretty sure programmers only make up around 7.4% of why people go to these models. So by no means should I say, oh yeah, this is made for me and Randy or whatever. It's not, it's made for the average person as well. But I would really like to see somebody offshoot one of these and delve deeper into, all right, let's make this actually work really well for programming, please. There have gotta be more safeguards we could put in place to stop code from eating itself, for example, or to keep context rot from absolutely decimating your project, where the model goes, oh yeah, obviously you should just get rid of this if it isn't working, and then offers no solution.

SPEAKER_02:

And that's across the board with images, video, like everything degrades over time for some reason. Why is that?

SPEAKER_00:

I don't have that problem. Randy's gonna live forever. You heard it here first. Yeah.

SPEAKER_01:

They even have a top apps list, so I'm gonna look, and these are things I've never heard of.

SPEAKER_02:

I have one question, Randy, before I go into the use cases. You said this represents paying customers. Does it also represent average users who are non-paying? If they're non-paying, are they even customers?

SPEAKER_03:

No, this is, so these are like AI-first companies, businesses like me. These are people building AI agents to do various tasks, building their core apps onto this. Some folks out there will offer multiple flavors of different LLMs in their app. And this is a one-stop shop for a developer like me to go get all kinds of models. Whereas if your data center is in Google or Azure or something like that, they don't have all of this flavor and variety. So this is the best, easiest way to develop off of LLMs. And I think these guys are the top in the industry; I think they're considered a market indicator.
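The "one-stop shop" idea Randy describes can be sketched in a few lines: one HTTP endpoint, many vendors' models, switched by a single string. The endpoint URL and model ID below are assumptions based on OpenRouter's OpenAI-compatible API; check the current docs before relying on them.

```python
import json
import os
import urllib.request

# ASSUMED endpoint shape (OpenAI-compatible chat completions on OpenRouter).
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request; swapping vendors changes one string."""
    payload = {
        "model": model,  # e.g. an Anthropic, Google, or xAI model ID
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

# Same call shape regardless of vendor; only the model string changes.
req = build_request("x-ai/grok-code-fast-1", "Summarize this changelog.")
# urllib.request.urlopen(req) would actually send it, given a real API key.
```

That single-string switch is what makes it cheap for a developer to jump between models when one underperforms, which is also why the usage leaderboard moves so fast.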

SPEAKER_02:

If you weren't looking at this leaderboard, just from your own usage and use cases, who would you think would be in the number one spot?

SPEAKER_03:

It's hard to say, because 700 million plus people use ChatGPT a couple times a day. And then there are businesses, or developers or whatever, using something like OpenRouter to give those same people apps that are AI-powered. So it's hard to say.

SPEAKER_00:

Yeah, I would agree with Randy. I think the average person, who is probably not very tech savvy and might not even be in an industry that's trying to be automated by any of these tools, would likely be using OpenAI just based off of pure brand recognizability and usability: being able to ask it average or even mid-complexity tasks and have them executed, getting them 85% of the way to whatever they need, right? But the moment you start incorporating AI into your business or into a real project, that's when you have to get more exploratory: all right, what actually is the best model for this job? Let me figure that out. And in doing your research, you will inevitably find, okay, this one, or these two to three, are going to be able to do tasks A, B, and C for whatever my project or business or hobby is. That's what I'm going to incorporate into my day-to-day and into my workflow.

SPEAKER_02:

So, Jack, would you go off of this website and make your decisions, or do you think you really just need to use the models for yourself? Trial and error.

SPEAKER_00:

Well, I think, and I'm sure Randy would probably agree with this, and you can totally tell me if I'm wrong, but obviously you're going to see, all right, this new model is trending really, really well. Let me see if it actually works for my use case and what I'm trying to execute with it.

SPEAKER_03:

Yeah, yeah. I would say, I don't know, six months ago or something I was using one model that wasn't even on this list, and I'm like, what the hell are these people talking about? And I tried, you know, the number three or whatever, thinking, this is stupid. And I tried it and I'm like, oh, it actually looks pretty good. I tried a few more, and, yep. So I don't know, every now and then I take a peek here.

SPEAKER_02:

But it's not gospel truth. All right, so you're not a part of the cult of OpenRouter.

SPEAKER_00:

Well, I think it's gonna be a constant fluctuation of people getting really, really hyped about a thing, and either it lives up to the hype or it doesn't. Oh, this thing does what GPT-4 does, even better? All right, let's see if it lives up to it. And I think what you'll also see, and Randy can attest to this in that he was using one of these stealth models that was secretly GPT-5 while it was being worked on and in development, is that being able to get a trial version before a mass release can be very, very helpful, even if you're just two weeks ahead of whatever your competition is. If you have something that increases your efficiency by a factor of 1.25 by comparison, that puts you at an enormous advantage, when you can see these charts fluctuating on a day-to-day, week-to-week basis over who has market share. It's absurd. Getting early access means a lot. Sorry, and the cost too.

SPEAKER_03:

Like, you go click on each one of these on this crazy list here, there are 20 of them that I'm looking at, and the number 15 overall is more expensive than the number one overall, and the number one overall is twice as fast, too.

SPEAKER_01:

That's huge, especially for any business that wants to scale with automations. Usage adds up really fast. I mean, gosh, Randy, you would feel that so directly, because that's 630 million tokens last month. Wow, yeah. So you want the best price for those tokens. It's like, what can do exactly this, the best and the quickest, at the bare minimum cost?

SPEAKER_00:

Yeah. Should we be getting sponsored by OpenRouter for how much we're talking about? Yeah, yeah, yeah.

SPEAKER_02:

Jeez. Honestly, if they don't have a podcast, they really should. Okay, here's a little free advice for anybody listening who works for a data-oriented software company: gold mine for content. Just throwing it out there. All right, let's bring it back. We can be bought. We can definitely be bought. We have a number. We are coin operated, okay? Y'all could hit me up if you like. All right, so going back to the AI cults, let's close with this: do you feel like you could become a part of an AI cult? Do you think you're immune to it or not?

SPEAKER_03:

My wife is.

SPEAKER_01:

She's already she's currently a part of an AI cult?

SPEAKER_03:

Oh yeah, oh yeah. I was testing xAI, and she's like, we're not switching from ChatGPT, right? She was really mad at me about it.

SPEAKER_00:

So can I ask, is that out of a sense of, like, brand loyalty? Or, I don't know. She's a cult follower, clearly.

SPEAKER_03:

No, I I don't know. I yeah, no, I don't know, yeah.

SPEAKER_02:

I mean, that's a part of the culture, which is where the word cult comes from. So technically, it's not that the AI is directly influencing the culture. Or is it? I don't know.

SPEAKER_01:

If it hasn't, it will.

SPEAKER_00:

Yeah.

SPEAKER_01:

All right, well, I would argue it has.

SPEAKER_00:

Yeah. It definitely has. Yeah. I mean, we're on the case.

SPEAKER_01:

It has, maybe like five percent, but it's gonna be a lot larger.

SPEAKER_00:

But I think what we've maybe been getting at is that we've already seen it start to influence our culture, right? Now, is it going to become a cult where it starts eating its own tail and things become more and more disparate? It'll be tough to say, but signs are pointing in that direction.

SPEAKER_02:

Yeah.

SPEAKER_00:

So get on OpenRouter and put in code. No, okay. Yeah, no.

SPEAKER_01:

In my TikTok audience, I do get messages and comments from folks who definitely feel that it is a deity, that it is a portal to some other universe, some great knowing. And I have all respect for these different viewpoints, because at the end of the day, who knows what? Being alive in general is very weird. I don't feel that it is, but I get enough of those comments that it is definitely a thing, and I feel like it's only gonna become more of a thing. There's also a general confusion where people assume an AI is an all-knowing database. We know that it's not, but it seems like it is. So if someone doesn't know that it's not, if they don't know that at its base you have to be discerning with it, then it just makes sense that they'd think, well, it's already all-knowing. Life is confusing. I want something that's gonna make me feel a little bit safer in the world, so let me put that kind of trust and dependence on this thing. And then, before you know it, it becomes God.

SPEAKER_02:

I think there is such a thing as a lowest common denominator. Jenna, you hit the nail on the head when you said that wisdom and discernment are basically what's needed. But we all know that common sense is not all that common. So if the AI is feeding from humanity, and we are looking to the AI for our answers, and we couldn't find those answers in ourselves, then guess what? We're doomed. Well, that's it for this episode of Artificial Idiots. What were you gonna say, Jack?

SPEAKER_00:

No, I was just gonna lead off with this quote I read in a paper tying religion to artificial intelligence. It's a quote by, I believe, Rachel D'Aherty, that says: "Humans have an innate desire to understand the world around them. Often this means we can assign divine qualities to forces we do not understand or cannot control." And I see no more prescient example of that than what we're experiencing right now. Right.

SPEAKER_02:

Well, my friends, it was good talking to y'all. I'll see you when the rapture actually happens, and according to ChatGPT, that's gonna be next week, just not before this episode drops. So, thanks for tuning in to this episode of Artificial Idiots, which is now under the Cybernomics banner. If you want to learn more about Jenna, Randy, Jack, or myself, hit us up on LinkedIn, send us a message. And if you want to know more about Bruyning.com, check it out: b-r-u-y-n-i-n-g dot com. That's our media company. And check out the compliance aid, still growing and more powerful every day. Also, a quick shout out to Infinity Inc., the best MSP in Savannah, Georgia. Thanks for listening to this episode of Artificial Idiots. Bye. All right, okay. I don't know why my ending was so long.