Hey Huzi...

Why AI is Fuel, Not Fire (And Why That Changes Everything)

Eric Post Season 1 Episode 13

Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.

0:00 | 54:10

Send a text

Summary

In this conversation, Mo Naboulsi and Eric Post explore the transformative impact of AI on user experience, the shift from traditional user roles to operators, and the implications of agentic AI. They discuss the evolution of prompt engineering, the role of AI as a fuel for innovation, and the changing landscape of education and thinking in the age of AI. The conversation also touches on the competitive dynamics within the AI industry and the importance of understanding the tools we use.

Takeaways

  • The shift from user to operator signifies a new era in AI interaction.
  • Agentic AI is taking actions in the physical and digital world, raising concerns about accountability.
  • Prompt engineering is evolving into agent orchestration, requiring new skills.
  • AI serves as fuel for innovation, but builders are essential for creating effective applications.
  • Education systems may need to adapt to prepare future generations for an AI-driven world.
  • The competitive landscape of AI is rapidly changing, with new players emerging.
  • Understanding the limitations and capabilities of AI is crucial for effective use.
  • The value of niche expertise is increasing in a world where general knowledge is easily accessible.
  • AI can enhance productivity but should not replace critical thinking and problem-solving skills.
  • Human creativity and 'weirdness' remain irreplaceable by AI.


Chapters

00:00 The Death of the User and the Rise of the Operator
02:26 Understanding Agentic AI and Its Implications
05:42 Defining Chatbots vs. Agents
08:36 The Evolution of Prompt Engineering
11:11 AI as Fuel: The Role of Builders
14:00 The Competitive Landscape of AI Models
16:52 Navigating the Future of AI and Learning
20:02 The Impact of AI on Workforce Development
28:33 The Impact of AI on Education and Society
31:27 Understanding AGI and Its Limitations
34:37 The Role of AI in Knowledge Acquisition
38:14 The Evolution of AI and Its Economic Implications
41:01 Navigating AI's Rapid Advancements
45:00 The Importance of Prompt Engineering
46:46 Human Uniqueness in the Age of AI

Keywords

AI, generative AI, agentic AI, prompt engineering, technology, education, innovation, user experience, automation, future of work



Support the show

Mo Naboulsi (00:00)
Imagine that you're in a car and for the last three years you've been in the passenger seat but you've been shouting directions. Turn left, paint me a picture, write me a poem and you felt like the captain because you were giving the orders but the steering wheel wasn't yours and now in 2026 the car has stopped listening to your voice. It just drives. It files your taxes while you sleep. It talks to your doctor while you're at work and it trades your stock while you're at dinner. You aren't the captain anymore. You're the cargo. We are

witnessing the death of the user and the birth of the operator. And the question that keeps me up at night isn't whether the car will crash, it's whether we'll even notice it locks the doors. Today, we have an insider who's seen the code behind the curtain. And of course, y'all know and welcome Eric Post. What's up, buddy? How you doing?

Eric Post (00:45)
Hi guys.

I'm waiting to see what your intros are gonna be. It's like a little mix of a poem and some research and yeah, I love it. yeah, let's man, what? Okay. Yeah, let's do it. What do you got? Where do want to start first, man?

Mo Naboulsi (00:49)
Hehehehe. Hehehehe.

Yeah,

yeah, I just want to get into, you know, this whole concept of the death of the user I find really interesting. I feel like we're having a lot of paranoia being spun out of this intellectual curiosity. I feel for the last few years, we've been playing with toys. And generative AI has been this party trick, you know, make me a funny image. We know a couple episodes ago, we were doing the the the the shrimp Jesus and the walrus Jesus.

Eric Post (01:21)
Shrimp Jesus.

Yeah. Yeah, sorry.

Mo Naboulsi (01:25)
You know,

Eric Post (01:25)


Mo Naboulsi (01:26)
which is awesome. Very low stakes type stuff. when we're talking about like agentic AI, you know, I feel like we're going from the shift of generative to executive. We're handing over agency. At least that is the narrative, right? And then the software isn't just creating text anymore. It's taking action in the physical digital world. It's executing. So that's where the paranoia is starting to stem from.

And then you talk about liability. If a chat bot writes a bad poem, who cares? But if agentic AI liquidates your savings, right? Because they thought that the market was crashing, Or books you for a surgery that you didn't need. who is to blame in that scenario? We're kind of walking into this legal tightrope, know, in psychological minefield of software. And who really has more autonomy

Eric Post (01:55)
Yeah.

Mo Naboulsi (02:12)
We are effectively outsourcing, our own volition and it gets a little scary. So I wanna just talk about that a little bit. do you feel there will be some stop gaps installed? Because I haven't really seen any at this point.

Eric Post (02:26)
Well, personally, I'm super excited about this because now we got to real world practical application. The problem with the whole thing is that there was this certain beautiful, wonderful marketing job by all these AI executives knowing they needed to push adoption of the product early. And so the whole mantra was, if you don't use AI, your competitor is going to beat you sort of thing. Right. And this was the cliche thing. You won't use AI. You won't be beaten by AI and won't take your job.

Mo Naboulsi (02:52)
Right.

Eric Post (02:56)
But your competitor using AI will. What a whole marketing blame shift thing that they did to get us to compete to use the product to train their product so that AI could actually replace your job. You could see this thing happening. And so they pushed everybody into their product to pay and fund and train the thing so it would be able to take your job pretty soon. It's just a wild time to watch. And there was a certain rush towards using things. People were too shipping.

things too early.

Mo Naboulsi (03:28)
Yeah.

Eric Post (03:28)
over-promising,

under-delivering. So that's going to continue. That Pandora's box has opened. It was not as scary until you get to the agentic side of things. know, like rushing, you know, a chat GPT-4 to market, you know, isn't as bad. Although even just that you saw that there was sort of like schizophrenia, you know, uses, was some mental illnesses it created and, you know, like the idea that it was lying caused this

Mo Naboulsi (03:51)
Yeah.

Eric Post (03:58)
were mortifying of it, people thought that it was sentient and conscious and all these things. know, society wasn't quite ready for a computer to spit words out on the screen because we've always kind of conflated words and the mastery of words with intelligence. When in this case that's not true. For instance, there's a debate on can an LLM do math or give the appearance of doing math.

Mo Naboulsi (04:13)
Yes.

Eric Post (04:22)
Right? Do we just have the appearance of it doing math or is it actually doing math because it understands math? Right? Or is the predictability of the algorithm so good that it gives you the appearance of knowing math? So there's, there's, you know, lots of debates and discussions around that, but really this rush does start to give me a little bit of like, we get to the agentic, we saw this in Gentic browsers and people just giving it, you know, their, their files on their computer and everything, their passwords and logging into everything. And yeah, there are stories of the agentic.

agents wiping the computer free and sending files it shouldn't have access to and send those off. yeah, where do want to You want to start with kind of definition, make sure we understand the difference between the chatbot and agent or where do want to start?

Mo Naboulsi (05:06)
Yeah, that would

be great. Yeah. What is the definition between a chatbot or an agent or a user and an operator?

Eric Post (05:12)
Yeah, like so just to make sure that we're all listening on the same page, you know, that like the chat to be T4 or check the T5.2 or whatever essentially is a is a chatbot. It's like a, you know, walking talking encyclopedia and an agent is more like a junior employee, you know, or a Fiverr employee, let's say at this point in time. Okay. Where the difference is, you know, you have a question to answer. It lays dormant until you give it inference until you ask a question spins up answers a question produces token one word at a time in a regressive manner.

Mo Naboulsi (05:29)
Yeah.

Eric Post (05:42)
looks back and that's to give you the illusion of intelligence because it's it's using everything in context to give you the next word based on each word looking back to guess what the next word is. It's a it's a beautiful math problem that we're watching. It's incredible to be honest with you, but there's still it's not AI in that way and the way most people think of AI. By the way buddy, have we ever been talked about you know how the term AI is not even real? Have we even talked about that on this podcast?

Mo Naboulsi (06:09)
If we have, we probably blew right past this. So let's touch on that. I'm interested.

Eric Post (06:14)
Just to be clear,

the word artificial intelligence or the name artificial intelligence is always such a weird thing. We just accepted it. But if you look back where the origin was, I think it was in 1954, 1955, there was this college ⁓ student in, I think it was Dartmouth University actually, and they were trying to get funding. And then it was called Autonomooperations and had these very non-sexy terms. And he was trying to get funding for a project that he was trying

Mo Naboulsi (06:32)
Mm.

Eric Post (06:44)
kind of portion off what his research was from everything else and so he came up with the term artificial intelligence to get funding the marketing things essentially to repackage what he was working on so the first like known documented word you know use of artificial intelligence wasn't because of the technology it was because of it he needed to get funding to this product this program and it's stuck right so to be clear like chat gbt is not AI chap GPT is a portal

a consumer portal to access OpenAI's large language models. Okay, just to be clear what we're talking about here. Where AI agents comes over though is that they have access to tools.

So there's a couple of things that are different. One, they can sort of work recursively in loops where they can go a task and then check on our task to see how they're doing and then go a little further and check on their task to see how doing and they have access to tools. Tools being things like your CRM or your email or your calendar or online, right? Or to be able to ⁓ access an API to get data and bring it back to you or go online and bring it back to you and it creates a checklist and it follows a checklist. So it's really a series.

of chatbots if I just want to like dumb down just really simple a series of chatbots that have access to tools they talk in a recursive way to produce a result so an earlier ⁓ chatbot was like you asked for a recipe an agent might be able to make you the meal right as an analogy

Mo Naboulsi (08:09)
Right.

And now it's getting even more efficient to that point because it'll, I remember when we first started, you had to, like I would ask the chat bot a question, but then I would ask it to ask me questions if it didn't understand or ask me pre-qualifying or qualifying questions to be able to get it more depth so that we can, it's definitely becoming more streamlined to the point where it will automatically do that and or,

will just give you a list of variety of different things, different themes, what have you, which I find to be interesting.

Eric Post (08:42)
you made a comment at beginning, like the death of the user, what did you say? The death of the operator, what did you say? That's it.

Mo Naboulsi (08:46)
The death of the user

and the birth of the operator.

Eric Post (08:50)
So the term prompt engineering was born, you know, a few years ago as the, as the phrase of how do you talk to AI? What is the incentive instructions that you get? We went from having to write code to interact with computers, to be able to use our natural language. So we're going to produce, you know, well-formed chunk of natural language that would help the chat bot be able to perform something we call that prompt engineering. It's kind of the new way of coding. So we were really moving now from prompt engineering to agent

orchestration, if you want to think of it that way, right? To agent management. So for instance, in the backside of HOOZI, we have a new product coming out called SparkPad, where we essentially spin up a swarm of AI agents to help perform a task. And in order to build that up, we have a swarm of six or seven AIs that are agents working in parallel to produce the swarm.

Mo Naboulsi (09:22)
Mm-hmm

Mm.

Eric Post (09:44)
So they are doing a task, so we'll give them a task and then they each spin up their task and then pass on the results to the next one, to the next one, to the next one to fulfill the whole project instead of me having to prompt each one. Right? And so it's it's it's, it's much smarter. And of course, a lot of the technology that's been able to do this isn't just in the advancements of the LLMs, but it's in the advancements of the chips and the compute as well.

Right? So our limit on what we're able to do this way is not so much in the technology as it is how fast can we bring more servers online to handle all this processing? You know, how fast can we build more chips and increase the efficiency, the effectiveness and decrease the cost of running each inference or building new models? Right? So it's really the bottleneck is in the compute. Not so much. My personal belief is in the technology. ⁓

Mo Naboulsi (10:31)
you

Eric Post (10:39)
We have some other limits and we can probably talk about the bottlenecks of training data and whatnot, but really right now it's how fast can they build the servers.

Mo Naboulsi (10:47)
So I actually have a follow-up question, so.

whether or not, without getting too technical, API keys, this, that, the other, you have these large language models. You've got OpenAI has ChowdGBT, Google has Gemini, Anthropic has Claude, but there are dozens if not hundreds of...

AI tools that are borrowing this technology in order to manipulate or create a new version or a specified version of a tool in order to help produce a result or fill a need for a specific gap. What I've found is just like any technology, right, any kind of new development that comes into play, there's always going to be

Eric Post (11:27)
Mm-hmm.

Mo Naboulsi (11:35)
the technology itself, like gasoline as an example, is a fuel. Whoever developed it, created it, found it, discovered it, doesn't matter. I developed a car, I'm using that fuel in order to power this car. It doesn't mean that the fuel itself isn't ⁓ a lesser thing or that my car is a lesser thing. It's just, this is the vehicle, right? So the way that I'm looking at AI now viscerally is that

These guys that are developing this tech, they're the fuel. And people like yourself are utilizing this fuel in order to build something substantial. at this point, there's so much competition, but a lot of these tools, they're just an afterthought. They're vibe-coded, they're developed overnight, they're trying to release a concept.

maybe get some funding or whatever. I'll disclose this, I've had access to the new Spark pad. And I'll tell you this, what I find most interesting is how much thought has went into it and the utilization of so much specificity. just, the things that you want and what you would use a typical chat bot without even thinking they're there. And...

all the concepts and all iterations and all the things that you've hoped and wished that it would do, it does it. And that's what I think really separates a phenomenal tool versus an everyday tool. Cause I can use Google docs, but I can also use some third party note taker that is pre-installed on my computer. It doesn't allow the same leverage. It doesn't give me the same value, right? And that's kind of

Now how I'm starting to look at this whole AI tech revolution thing is that AI is just fuel, but there are builders and founders like yourself that are actually starting to build that vehicle in order to leverage this fuel. Does that make sense?

Eric Post (13:30)
Yeah, man. It's, I'll tell you, it's really hard. So there's a lot of debate. Like what is a defensible moat? You've been building software anymore, you know, or building a company, even building a company anymore. You know, is it integration? Is it data? You know, is it branding? Is it trust? Is it your interface? You know, what is it? What are the defensible moats these days when you can spin up complicated software that used to take two years to build and build it in a week? Um, you know, each engineer, a reasonable engineer today with, with cloud code.

Mo Naboulsi (13:36)
Mmm.

Yeah.

Eric Post (14:00)
as an example, ⁓ just from 12 months ago is probably a 5x improvement. And that's fairly conservatively in terms of output and quality of code written by a single engineer. 5x is a pretty reasonable number, I'm saying.

Mo Naboulsi (14:16)
That's so you're saying that

5x is you're being like conservative.

Eric Post (14:22)
I'm typically like, with even our small engineering team we have here at huzi you know, we've probably gone in the last 12 months, 15x in the amount of cost of tokens.

that we're using to write code, probably 15 X, simply because the quality of the code is better. And we've now developed a little more trust and a little more expertise on how you now take one of my senior engineers from, say he was writing a hundred or 200 or even 500 lines of codes a day. Now he's managing for code agents and they might write 20,000 lines of code, you know, in 24 hours.

Mo Naboulsi (15:04)
That's

insane.

Eric Post (15:04)
You know,

yeah, and I'm using that number a little bit loosely, but I'm telling you it's such a substantial increase. It's, hard to comprehend in that way. Now that could be taken. want people to hear me clear that can be taken as it's running loose and foot free, you know, or a little, little, you're running a little loose or whatever, or where's your QA and you're running unit tests. Yeah. With all of the really like, you know, engineering best practices.

Mo Naboulsi (15:07)
Yeah.

Eric Post (15:30)
play you know that's why we were a little bit slower I think than some of the earlier firms to really power your engineers heavily simply because of the knowledge of how much extra work goes into code when you get up the wrong way and you know that one what you know and it causes a mess and so now though with 4.5 and above and haiku it's not like

You know, Anthropics done such a great job of tuning their LLMs, tuning their models to have a specific, you know, expertise. And they've done a really nice job in the specific expertise of code writing. And, and so you see now, man, that cloud code, you know, cowork. Um, I mean, the amount of the adoption rate that's happening right now with Claude and Google both is actually a faster and a quicker adoption rate than what happened for open AI over the last couple of years.

just recently.

Mo Naboulsi (16:24)
I'm noticing that. mean, even like

the quality of like what you're like the output that you're able to get, ⁓ Coding aside, conversationally, intellectually, dynamically, what they're able to produce. Like, I don't think I've touched open AI personally in like three months.

Eric Post (16:44)
And obviously still great product and all that stuff, but that's why a product like ours is kind of fun for our users because we have the best models in there for the use. So you do like 5.2 thinking from open AI. You can use that for a project in the same conversation. If you didn't want to switch because you like the way is Claude 4.5 sonnet writes, then you can just switch and then have it right. And then if you want, you know, for instance, like Gemini 3 is a great, you know, it's, it's great at producing,

Mo Naboulsi (16:46)
Right.

Yeah.

Eric Post (17:11)
I don't know, think Gemini 3 is kind of an all around beast. It can do a lot, you know, really, really well. So then if you want to take that and turn it into like a really well researched and thoughtful white paper, all in the same chat, know, like one of our users can switch between the models, you know, all in one chat to produce the thing. And so you just, you know, you're starting to see people understand that these different frontier, large frontier models have different aspects that are good and not as good as the others.

Mo Naboulsi (17:14)
I was just about to say, yeah.

Mm-hmm.

Eric Post (17:40)
Right? So if you

Mo Naboulsi (17:40)
Mm-hmm.

Eric Post (17:41)
sit down and you just say, okay, I am a chat GBT user, you're really missing out on the leverage of just AI in general and being able to understand the quality and what the different attributes are in some way. And that's through the consumer portals. that's using the API as both, you know, so for them. I would, I would definitely suggest that people get, ⁓ very familiar kind of with what they're good and bad at and be able to at least, you know, ⁓

Mo Naboulsi (17:47)
Mm-hmm.

Eric Post (18:07)
say, okay, today I needed Toyota truck. Tomorrow I need a Camry and the next day, you know, I want an MR2. Remember the old MR2s? Remember the old little MR2s? I'm just pulling some out of the pack. But more like a Porsche, right? Like you just, you might, or a mountain bike versus a road bike, right? They're all bicycles, but a beach cruiser is way different than a downhill mountain bike, right? And yet people say, oh, I'm learning to ride a bike.

Mo Naboulsi (18:16)
yeah.

Eric Post (18:33)
Like I'm learning to code or I'm learning to use AI. Still there's, there's nuances to all those things. So I'm, I'm very optimistic about not all of the value being abstracted all the way to top to the, the frontier models.

I do think that there's players are going to, like I hope to be one of them that has an industry niche with a certain integration into a, into an area where one of those large frontier models just simply doesn't have the interest or the expertise to drill down that far. Right.

Mo Naboulsi (19:06)
Yeah, yeah. ⁓ think the Miata was trying to do the MR, too, and it ended up being like an old, like every 70-year-old lady. Like, I can't tell you how many friends in high school, their grandparents had a Mazda Miata. Yeah, literally. Like, it's just like, my god.

Eric Post (19:13)
Yeah, yeah, yeah, yeah, yeah.

with Momo wheels.

Mo Naboulsi (19:35)
They're just trying

to do something. I was like, you trying to be a Porsche? You trying to do the MR2? Like, what's going on? Anyway, it's just kind of funny. So you touched on a little bit on Anthropic. I just, I don't like to dig in the dirt too much. I just thought this was interesting. And I wanted to bring it to attention to gain your perspective. Publicly, Microsoft tells every enterprise client to buy GitHub Copilot. They sell it as a gold standard, but the leaks...

Eric Post (19:40)
Yep. Yep.

Yeah.

Mo Naboulsi (20:05)
are saying that Microsoft is telling its own engineers to use cloud code to benchmark their work. So it's like, guy selling you the shovel is secretly using a different shovel to dig his own hole. Makes zero sense. like, all right, so you've seen the internal benchmarks of this A-B testing, an admission that OpenEye's architecture has hit a wall, or is it just like a, hey, I need a sovereign model, you know, like Minstrel in Europe to be able to be safe?

Eric Post (20:09)
Yeah.

Yeah.

Mmm.

Mo Naboulsi (20:32)
from US surveillance. Which is, by the way, interesting. I've been testing out a lot of these. like Kimi and DeepSeek and some of these Chinese models, which is very interesting. Is it Kimi? I think it's Kimi. Yeah, yeah, yeah, which has been interesting. then ⁓ Mistral, I haven't looked into yet. But there's quite a bit of other things to consider. Not to say that that's, you know,

Eric Post (20:44)
Yeah, KimmyK2.

Mo Naboulsi (20:57)
better or worse, is it a sovereign thing? They want to have zero surveillance. Are we talking about internet fracturing to American AI? it because they think Anthropics Cloud is better? And I don't know. I think it is, because I wouldn't want to use GitHub Copilot over Anthropic. You know what I mean?

Eric Post (21:19)
Well,

they're both great, but I think if I'm to guess, I'm not in those boardrooms, so this isn't any, yeah, because of the unprecedented pressure and competition that every CEO and every senior level manager, all these companies are looking for any way they possibly can to stay or get ahead.

Mo Naboulsi (21:24)
I know, this is all hyperbole.

Eric Post (21:42)
And so using another model or mixing models up or like doing something behind the scenes. Like, unfortunately, the pressure, the competitive pressure for literally hundreds of billions of dollars, hundreds and hundreds of billions of dollars, that's a number that's factually hard to even comprehend or on the line here. And so there are, there are so much happening behind the scenes. We have no idea. That's just simply a one little, gives you a little glimpse into the desperation of sorts to be relevant.

Mo Naboulsi (21:50)
Yeah. ⁓

What would you compare like Mistral AI to? if it would be out of the US based LLMs, because in terms of pricing, there are at least I think for the pro models are like five or five to 10 bucks less. Would that be plus you get to call it Le Chat.

Eric Post (22:12)
you know, speed.

You can tell with video. But like we actually

use KimmyK2 behind the scenes mode just so you know. Yeah and so.

Mo Naboulsi (22:37)
I didn't know that.

Eric Post (22:40)
You know, think about these models as well. So some of the models and open source models, you can actually run them locally. You don't have to like get an API and, you know, tap it somewhere, pay for those calls or, um, you know, you can actually run them locally to do certain things. Kimi K2 is a really cool, just a fast model. There's, and there's some politics in all of this too. Like we don't serve up models like DeepSeq or Kimi K2, cause there's some political pressure of like, oh, that's a Chinese model or they stealing my data. It's going to Bangkok or Beijing or there, you know, like it's not.

Mo Naboulsi (22:46)
Yep.

Hmm.

Right.

Eric Post (23:10)
It's that's just not anything I want to be part of or need to worry about because we've got some great models here in the models we've already talked about. So, but behind the scenes, know, communicate to is a relatively inexpensive and it's lightning fast for a few things. And so to generate, you know, you know, text for tests and things like that, it's, it's really fantastic, but it's not something that we use in the product public facing. ⁓ so, you know, understanding, you know, the cost of the models, the benefit of the models, the sort of the.

rubric of what they're good at. ⁓ That's something we're trying to be very, very good about and thoughtful about so we can produce a great product and produce great responses for our users when they're inspecting those and safe, know, and as reliable as possible, you know, with as much uptime as possible, at least, you know, interruption as possible, least, you know, hallucinations as possible and things like that. So.

Mo Naboulsi (24:01)
Yeah.

How is something like a DeepSeeker or a KimiK2, I mean, forgive me if I'm wrong here, but even the report models are almost free, if not free, minus like maybe the code portion.

Eric Post (24:13)
Oh yeah,

we're also just so you know behind all the scenes, we're also in like a Cold War era sort of thing. So when they came out, they dropped Deep Seek like they did. That was a Cold War era sort of move. That wasn't a technology business move. That was, we're going to disrupt the thing that's booing your stock market. The thing that everything, 94 % of gains or whatever is a result of the direct impact of AI in those businesses in our stock market gains and whatnot.

Mo Naboulsi (24:38)
Mm-hmm.

Eric Post (24:42)
So they're like, hey, let's F that up. You we're going to drop those models almost free and like show how fragile, you know, the American infrastructure is in AI and things like that. So it wasn't just a business move and just see how many downloads they can get of apps and steal your data. There was definitely a Cold War era move there to disrupt the billions and billions of dollars that are at stake here.

Mo Naboulsi (25:04)
No, I completely believe that, I wanted to transition a little bit, but it kind of ties into it. So Google's Gemini 3, which there are deep researchers I've done quite I've used quite exhaustively because I do that for these things and other ventures. And I found it to work as not just a junior analyst, but almost

I would say halfway between a junior and senior analyst because it is so good, especially if you can follow up with the right questions and the detail and what have you. And for productivity, it has just been tremendous. Now, once it's able to gather details of what, answer the public or answer Socrates, some of these other.

Far more comprehensive detail what is actually people? What are people actually saying in the interweb social media be able to gather that data? And then put it in a way where because what I like about like an answer the public is that you kind of get this web and you're able to see Dynamically what what people are saying how they're saying how often it's being said. What are the things? Where does this bike those kind of things are and I'm sure I could do that in canvas, That you have excuse me satori

But if we automate this grunt work, the research, the synthesis, the organization, my way of thinking is we've had to establish this type of thinking brain. How does the next generation ever learn to think? Are they learning a different way of thinking? Is it going to be more proactive, more instructional?

Or we kind of like engineering a workforce of editors, not just writers who have to become editors. It's like a different, I don't know man, I don't know if I'm explaining this right. It's a level of competence, you know what I mean?

Eric Post (26:47)
You are. There's

there's a few ways to slice it and slice it in parallel to what has happened before. So what happened before we talked about public school systems and whatnot.

Mo Naboulsi (26:57)
Yeah.

Right.

Eric Post (27:02)
There were systems that

were created because of the way the economy was and the industrial revolution was and the direction of our country and the way the technology was sitting, that they produced factories of kids that needed to have a certain mindset to churn over the huge flywheel of the US economy. And so the school system, everything was designed to produce factory workers and assembly line workers and the mindset to be in that mode.

Mo Naboulsi (27:21)
Mm.

Eric Post (27:33)
Meanwhile, the elites at that time didn't send their kids to schools to be that way. They were reading philosophy.

and they were painting and they're writing poetry and they're doing other sorts of schoolwork and scholastic learning but it wasn't in the same rote you know memorization raise your hand you must sit in the class you must do the things right you're gonna sit in a box sit in your chair shut up get in line when the bell rings is the only time you can leave sort of programming that wasn't what the elites did okay and they didn't put their kids to school there now the same thing sort of happening again

Where I think that what's happening is in the same way where we have a consumption economy and we have a creator economy. I'm not just talking on social media, but in general. And so there's a consumption mindset and programming that's happening right now. And there's a creator mindset and programming that's happening right now. And lot of that is being done on the socioeconomic divide.

Mo Naboulsi (28:17)
Right, in general, yeah.

Hmm.

Eric Post (28:34)
that way. So when we talk about ⁓ like thinking ⁓ in the same way we talked a couple podcasts about you you can see the statistics of screen time on a phone being divided that why that poor families definitely have more flat screen time than rich families do as an example. ⁓ Same thing is happening with AI usage. ⁓

We're seeing some of the higher-end schools going back to the philosophy and painting and arts and things like that. And the other people are going to be ⁓ basically intertwined with the AI in some way in their life. It's one of my predictions, and I think we'll see that to be true. ⁓ But all this guessing is not so much guessing when you can look at just simple patterns throughout history.

They're simple patterns. just were wearing different costumes at the time. know, so technology continues to disrupt because we have an imagination that's so grand. We have, you know, we have, we have just wild imaginations and those that think of do it, you know, and you can imagine something, we can bring it into existence and eventually these wild, you know, obsessions. And right now you can see it. You can see it.

in these tech CEOs twinkle in the eye when they talk about creating AGI or ASI. They talk about a world with a super intelligent thing that they helped give birth to. It's really fascinating psychologically to watch them be so fixated on being a part of what they believe the most fundamental greatest invention of human creation ever. Right. And so what a megalomaniac

Mo Naboulsi (30:03)
All right.

Eric Post (30:25)
does it take to get there? And again, I'm not trying to say that this whole thing is good or bad, because there's nothing that's not relevant. But that is a weird, I've just been noticing this fascination with wanting to talk to something so much smarter than themselves.

wanting to interact with something so much smarter than themselves. But I will say, they have to have that level because they are promising their shareholders massive gains and they're again, raising hundreds of billions of dollars. So they have to have that level of insanity that this is gonna happen. However, in our lifetimes alone, there's been multiple places in history Mo where we've been told, well, in the next 10 years, we're gonna have a computer smarter than us.

Mo Naboulsi (31:07)
Right.

Eric Post (31:09)
This is not a new story. We just forget about the old stories. So now the debate is, well, can an LLM or current versions with their current infrastructures and neural networks and all those things, are those even the foundational elements of what we've defined as AGI or ASI?

Mo Naboulsi (31:26)
Mm-hmm.

Eric Post (31:27)
Okay. So, you know, I, I don't know. I, I don't think the current infrastructure will get us there. I don't see that happening because hallucinations as an example is a feature, not a bug of a current LLM, you know? And so I don't think we can have AGI or AICI until that that's a completely different sort of architecture, ⁓ with a fundamentally different premise for the way it ingests the world.

Right? So right now it's a very 2D understanding of the world. It doesn't actually know what physics is. It can just give us the illusion that it understands physics because it can do the calculations, but it can't experience physics. So it's really hard to have an AGI ASI without the experience of physics. It's just one small example.

Mo Naboulsi (32:13)
No, no, I appreciate that thoughtful response. I've been playing in my head. These questions aren't my, sometimes they're aligned with what I believe or what I think or my experience. Most of the time, these are questions that I find collectively are being asked. And one thing that strikes me,

You've touched on this before. I think you posted this, I don't know, a year, two ago, I don't remember. But how everybody's like, everybody's looking down on their phones nowadays. Nobody's paying attention to the real world. And you put it side by side where it was like 1950 and everybody was sitting on a train glued to their newspapers.

Eric Post (32:54)
yeah. Yeah. Yeah.

Yeah.

Mo Naboulsi (32:56)
you know, no communication,

so it's just like, it's the same. The thing that is triggering me right now, where we talked about, you know, how you have to have some sort of competence, what does it look like in terms of like research, because you have access to AI. But if you look at how the education system is set up, and I'm not gonna get into a deep dive here, but you go to school, it's not learning.

The idea is that you learn how to learn, but even that I think is fallible. the majority of the time, you're not learning how to learn or learn at all. I think you're just learning how to memorize information. And whether it be an encyclopedia or a textbook or Google or AI, you're memorizing information.

and some concept, just like through grade school, high school, college, doesn't matter, you're essentially renting knowledge. And I can't tell you how many doctors I know, how many lawyers I know, how many pharmacists I know that tell me I just had to get through it, memorize it, and then the real work begins. I don't apply 90 % of what I learned in school. Yes, some factual information, yes, but a lot of that stuff is

It becomes learning through osmosis, especially like a doctor. Like if you're becoming a physician, you have to learn, you know, the terminology and all that stuff, biology, anatomy, whatever. But most of the time, like you're forgetting most of the stuff and it's not really applicable. It's just like, it's getting you to that next phase. So isn't AI just getting us through that next phase? Aren't we just still renting knowledge to be able to get us to the next phase? That is just a question that is popping up in my head.

Eric Post (34:33)
Mmm.

Well,

as it is, LLMs are insanely as a productive tool, you know, when applied appropriately. So any, you know, thing that I say about its limitations doesn't take away from the advancements it's made and the, the, and the beautiful discoveries that's unlocked and the, and the, ⁓ cancer, you know, that it is.

Mo Naboulsi (34:45)
Right.

Mm-hmm.

Eric Post (35:01)
seen and caught before the doctor did and like all those things are great stories we shouldn't bury in all of this. ⁓ But it isn't it isn't exactly what the CEO that needs to convince you that needs to 400 billion dollars this year is to be clear. Okay. ⁓

Mo Naboulsi (35:08)
Totally.

Mm-hmm.

Eric Post (35:22)
And so that's, that's to be true. Now it's also exposing a lot of things. Like you're talking about using, you know, like research and whatnot. So, you know, like McKinsey, you know, the huge consulting and for him, right. just so you know, they surpassed a billion open AI tokens in usage in a single month.

Mo Naboulsi (35:32)
Yep.

God, I mean, they're a big company, but still.

Eric Post (35:44)
So my point is, but they're hired to do this deep research as human experts, these Ivy League students that are meant to be able to digest data and interpret data in a way that these massive corporations can make these big decisions upon.

Mo Naboulsi (35:49)
Right.

Eric Post (35:59)
And then it got exposed that they used a tokens in a month. And so there's some reports or there's some obvious, you know, hallucinations in the reports because they just didn't double check the work and they took the generation from chat.gbt and, you know, included in the report and whatnot.

So, you know, it's just fundamentally, people are saying, well, why would I give you $250,000 as a research thing if you're handing it off to chat GBT almost? Okay, but a billion tokens a month is quite a bit. so I just, you know, what has happened is we've really seen the value of knowledge collapse and the value of

particular niche expertise exponentially increase is what we're seeing. And the other fundamental thing that's changing is that, like as an example, as an engineer, an engineer, although there's the now people looking for jobs and there's half as number of engineers being posted on Indeed as there was a few years ago. ⁓

a great engineer that can now orchestrate AI's commands a really great wage. Right? So it's not, you know, people are saying, oh, let's take an engineer's job or whatever. Yeah. And then there's probably a good point to be said that nearly almost every line of code now at Claude, they're saying that things like 95 % of 99 % of all code.

That they write and I was all written by their own AI's so like Claude co-work was actually supposedly written 100 % by Claude code Which is

Mo Naboulsi (37:39)
That's so they build the

they build the machine that then builds the machine. I mean.

Eric Post (37:44)
Yeah, yeah, yeah,

yeah, yeah.

Mo Naboulsi (37:49)
You build a computer so that you can essentially write software, right? It's like you can't necessarily write software without a computer. You can't really write anything without a keyboard. And then eventually, whether it be digital or physical, I don't know. To me, it's nothing new. It's just an advancement of the tech itself. But it's the same cycle that we've been going through, In essence.

Eric Post (38:03)
Mm-hmm.

I don't,

yeah, I think so. I I think, well, there's the same capitalistic sort of cycle with hype burst bubble, you know, and flat line. And then you find the footing and you find out what's real and what's not real amongst all the hype. And then you can really build a solid infrastructure from there. So that's normal. And that's what we're going through that. And we'll see what kind of bursting there.

what happens, you where does this whole thing pop in some way? Cause there is a correction in some way. ⁓ and that's necessary. That's healthy. Unfortunately, forest fires suck, but the forest has to have a forest fire to cleanse things out, you know? And so cycles and seasons are important. and I've never been on open AI to win this race. Even early on, I was always betting on Google or some dark horse that we hadn't even heard of yet to win, ⁓ just based on the infrastructure of how things are fundamentally. ⁓

Mo Naboulsi (38:43)
Yeah.

Hmm.

Eric Post (39:03)
Laid out and who has access to data and who has access to resources already ⁓ And so even though Google is like struggling, you know way way way hard at the beginning There was completely irrelevant in this whole race now They're really picking up and they did they're doing the deal with Apple, right? They were open AI was at the table. They kicked open out of the curve and went with Google right

Mo Naboulsi (39:18)
Mm-hmm. Mm-hmm.

Eric Post (39:26)
So, you know, that was a blow to it. But people are saying like, oh, well, Apple, you know, fundamentally, you know, fumbled this whole thing and Apple really screwed the pooch and all that stuff. I'm not certain that's true. Actually, I'm not certain that Tim Cook isn't a genius and you realize what they're really good at and what they're not good at. I'm not sure he's not looking back and thinking, man, they're going to spend $400 billion figuring this out. Why don't I just let them figure it out and I'll use the technology.

Mo Naboulsi (39:28)
It was a

that part. Yep. ⁓

Eric Post (39:57)
I'm not certain, he's not one of the smartest guys around to figure this out and knows where his lane is and how to leverage whatever the people are doing and where their fit is in this. I'm thinking he's looking at what the hardware play is in all this.

Mo Naboulsi (40:11)
He doesn't have to build it. He's renting the intelligence instead of building it. Like you looked at the fact that they're spending nearly half of a trillion dollars, right? Like that.

Eric Post (40:20)
Yeah, I'm pretty

sure he knows the cost is going to go down and what they need it for. Like for example, to run in their devices to run Siri, they don't need Chachi BT9, you know, run Siri, right? For most people, Chachi BT4.0 was a great companion and a chat assistant. And they were fact pissed when 5 launched because it felt cold. And most people knew that the intelligence was good enough in 4 for the general consumer usage.

Mo Naboulsi (40:26)
Right.

Mm-hmm.

Mm.

Eric Post (40:50)
So I don't know, man. I think that they're making the play for the hardware play and they'll be smart about it. And then as the cost goes down, then they'll put their shoe in the ring in another way.

Mo Naboulsi (41:01)
Yep. Man, we touched on a lot of good stuff. And we have so much more to cover. That's going to be in the following episodes. So I appreciate everybody that's been jumping on here. We do want to get into the rapid fire section. If you're cool with that, Eric, I don't know how much time you have left. I love this section. People love this section. So I just want to dive right into it. Question number one. One skill you're refusing to let an algorithm do for you this year.

Eric Post (41:14)
Thank you.

you

the skill of breaking down a problem for myself. Instead of saying, it to the Chadupt or give it to Hoosier or give it to whoever and saying, hey, tell me how to do this thing. I really am trying to be thoughtful about it first because I cannot manage or orchestrate and swarm of AIs unless I understand the problem that they're working on fundamentally.

Mo Naboulsi (41:42)
Okay,

you discover, you essentially try to problem solve or you problem solve first and then you expand on that.

Eric Post (41:49)
I tried to know the lattice that's necessary to build whatever I'm trying to understand the lattice because like, a beautiful side product of working with AI just all day, every day over the last three years is I'm a better communicator with my kids. I'm a better instructor of tasks in those ways. When I taught a class yesterday and I found myself breaking down a problem much more comprehensively and thoughtfully than I have been over the years past.

Mo Naboulsi (41:53)
Okay.

⁓ Yes

Eric Post (42:16)
Because in order to be successful with an LLM, you have to really give it a chance to succeed. And so there you have to be the expert to help it guide. And then it's just leveraging you so you can get it done faster. But you understand every step along the way. And it's just providing you kind of fuel the little turbocharger. I'm not handing any of that over yet. So for this year, man, it's like really understanding how to break down the complex problem, understanding basic frameworks for ⁓

Mo Naboulsi (42:31)
you

Eric Post (42:42)
like Six Sigma, Blue Ocean, Ada, OODA loops, right? Really understanding fundamental business frameworks that I can apply problem solving to. And lastly, some sort of mastery of our natural language.

Mo Naboulsi (42:45)
Mm-hmm.

Eric Post (42:59)
I'm continuing to like want to learn and increase my vocabulary, increase my understanding of NLP, increase my understanding of how syntax affects an LLM, you know, and so what these sort of generations, if I want to get a great or novel generation, how do I then interact in such a way that it makes that happen? Yeah, yeah, yeah, yeah, yeah. Thank you.

Mo Naboulsi (43:17)
By NLP, do you mean neuro-linguistic programming or some other? Awesome.

I have a funny question for you. If your phone or your LLM specifically ⁓ knows you better than your spouse, who do you trust more in a crisis?

Eric Post (43:26)
I love it. Okay,

Mo Naboulsi (43:42)
You

Eric Post (43:45)
going

to go for the first time ever in the history of rapid fire, I'm going say no comment.

Mo Naboulsi (43:47)
That's probably smart. ⁓

Next. Yeah, for sure. Yeah, no, I think I think that was a smart play. ⁓ I've already asked you this question before in our podcast, but it's coming up again right now. It's in the interwebs, which I find interesting because I thought we've moved past this, but.

Eric Post (43:55)
Or insert anybody. I know it's just a funny question. Yeah. Yeah. think I just said no comment.

Mo Naboulsi (44:12)
the whole concept of prompt engineering, people are still asking if it's dead or if it's a viable career path, which is really interesting because especially Reddit, Reddit is like, are hundreds of subreddits on prompt engineering to this day where they're trying to find value in that specific skillset. But I was under the understanding that it's kind of like, we've already moved on. Like this thing can kind of do its own thing.

Eric Post (44:41)
It can and yeah, thank you. I mean it can But I do think it's too early for sure Yeah, I do Matt and the reason why I'm pausing because like an example in our in our canvas product mo We built the prompt studio So yeah

Mo Naboulsi (44:41)
Especially Whozzy.

Okay.

That's what I'm saying. You built a prom studio.

Eric Post (45:00)
So my point is, like, I recognize it's still a required skill. And so instead of making somebody learn how to do it, I kind of provide a tool there where you can just essentially fill out a form and it generates a thoughtful, comprehensive, well-formed prompt that you just hit the button to run the prompt. But you didn't have to spend the time to actually generate or create the prompt. That's great little hack, but it still kind of actually proves the point that prompting is still very important and thoughtful prompting that's structured in a way that that particular LLM.

Mo Naboulsi (45:11)
Mm-hmm.

Got

it. Okay. So that makes sense. What is, a specific agentic capability that scares you the most right now? Or just in general? If there is any. If there's not, then we can just go to the next question.

Eric Post (45:30)
this lecture is really important.

No, not really. I'm gonna always go to the deep fake thing. So spinning up a deep fake and says it's Mo Naboulsi and I can point it at your entire family all through social media and it sends them all personal messages. It sounds just like you, has a picture of you in trouble, all this stuff. You can do that with the agentic capabilities. And I don't love that idea. That's not it. That sucks. ⁓

Mo Naboulsi (45:46)
Yeah.

Eric Post (46:07)
And I do think part of it has to do with what I'm scared about is people using technology, they don't know how it works. And so they don't know how to safeguard themselves, like using agentic browsers and not understanding how they can be prompt injected from a site on the internet. Not knowing that is insane to me. And they install it, apply it, and start like letting it loose without understanding what the possible downsides are. So I think just make sure when you use an advanced technology, that you actually know what the downsides are.

Mo Naboulsi (46:20)
Mm-hmm.

I agree. And last question, one thing humans do that an AI agent will never be able to replicate.

Eric Post (46:40)
Okay.

I mean, they make, we're so weird. You know, like the Strip Jesus example, right? We're just so weird. And I don't use weird as a bad terminology. Weird is like hard to define. And so, you know, when you have something that is based on an algorithm, which is what an LLM is, even though now it went from 175 billion parameters to three to five trillion is what they have now. And they don't even measure them by parameters anymore.

That was also a marketing thing in beginning. like, you have 175? Oh, we have 300 billion parameters. And there was a flex off for parameters the first few years, if you remember that whole thing. Oh, yeah, like it would come out and they'd say, this is 175 billion parameter model, and this is a 5 billion parameter model. And that was how they gave us the understanding of how large the models were.

Mo Naboulsi (47:25)
I don't actually. What was that?

Eric Post (47:41)
Okay, was by that so you understood and so now like 5.2 is somewhere between three and five trillion parameters. And what I mean by that is, have you heard of like what a weight is in an LLM? Like when somebody says the weights inside of an LLM? Okay, so just for fun, imagine like 100 billion tiny little dots. Okay, and each one of those is a parameter.

Mo Naboulsi (47:57)
No.

Eric Post (48:08)
And imagine each one of those dots having a little dial on them. And you can tune them in like they're all little knobs. Imagine a hundred billion knobs, like a big soundboard or something like that, because you like music, right? Each one of those are tuned and each one of them has math. So when you ask a question, it runs through those and based on the tuning of those is what you get the answer back.

Okay, so the weights effect, like if you have everything set to zero, all the knobs set to zero, and I ask a question like, ⁓ like let's say, what is two plus four? It would say purple cat giraffe target tree in the moon.

Mo Naboulsi (48:56)
wow.

Eric Post (48:56)
Right?

It would make no sense because there's no tuning. There's no adjusting. The model doesn't have any weights. Okay? So that's how you get this sort of illusion. I don't want to say illusion of intelligence because it's not fair. It's a very smart system. But when we say intelligence, we anthropomorphize it. It's not intelligence the way we think of intelligence. It's very smart because the algorithm is there. So what I'm trying to suggest is that you can't have a lot of weirdness.

Mo Naboulsi (49:01)
Got it.

Eric Post (49:26)
in when it's an algorithm like that, right?

Mo Naboulsi (49:30)
No, it makes sense.

It's like without the tune, like you're tuning a note and otherwise it could be sharp or flat. It just, doesn't have any context. Is that what you're saying? Essentially? Okay.

Eric Post (49:36)
Yeah. Sure.

It's just

these each and then basically pre-training and post-training. I also think people should understand that when you're talking to AI, you're not the only one talking to that large language model. What I mean by that is if you're talking to a large language model through chat GBT, there's this system prompt.

Mo Naboulsi (49:51)
Mm-hmm.

What do mean?

Hmm.

Eric Post (50:03)
that's

being injected into each one of your questions or each one of your thoughts or whatever into that's also simultaneously talking the model to give you the response back.

So whatever the instructions are in those system prompts is why it might be political or not, or might be able to generate an image with boobs or not. You talk to Grok, you can generate an image with boobs. You talk to it open and out, you can't generate an image with boobs. It's not because of the model capacity, it's because of the system prompts that they've layered in between you and the model capacity.

Mo Naboulsi (50:34)
Interesting. Yeah, that makes sense.

Eric Post (50:34)
Makes sense? Yeah, so that's the

important thing to really know. You're not the only one talking to models if you're using a consumer portal to talk to an AI. That system prompt is in there, affecting the responses.

Mo Naboulsi (50:47)
I didn't know that. That's very interesting. So I'm glad that you shared that with us.

Eric Post (50:51)
It's really

nice to know, man, because you put a tool, you know, if I hand you a chainsaw and don't explain the dangers of it, it's kind of, not, I'm not very good. I'm not very good instructor, I'm not very good father, right? And yet they're handing us these tools without these like knowledge of what this stuff is. And it's a little wild.

Mo Naboulsi (50:59)
Right.

Yeah, gets pretty intense. It's like, here, I need you to saw this piece of wood, and then you hand me a drill. It doesn't make any sense.

Eric Post (51:16)
Yeah,

it's kind of like what's happening. Like, hey, I need to do this thing. I'll just use Chaggbt for it. Maybe or maybe not. That's the right tool. So I think that what we are starting to see is the specialization of AIs or the fine tuning of the AIs for a particular thing. so companies, like when we talked in the real estate space or some of the niches that we have, the reason why they are

choosing us over another company is not because we have the billions of dollars of resources, is because we can tune the thing so their users don't have to be so good at prompt engineering to get a good response.

So if you take, let's say you have the whole ball of an open AI large language model and use ChatGBT, it's kind of like using the whole thing because it's generic in there. It's like using all of it for your consumer purposes. But like, let's say we might use the whole thing, but we've essentially, you know, through some of the conversations sort of like portioned off a piece of it. So you're able to like get a more robust expertise like answer.

than kind of a milk toast best answer has been on the internet sort of thing. And that's important. know, either become a great prompt engineer and really understand how to do that or use one tuned and trained and specialized for what your application is. It should be somebody, a thoughtful approach to using AI.

Mo Naboulsi (52:38)
Or just use Canvas prompting tool that you have, kind of just, I use it all the time. I honestly didn't, you know what's funny? I didn't even know it was there until, I don't know if it was the podcast where you mentioned it. No, no, I think you were instructing like, there's a work call or something. I can't remember. like, I didn't even know that fucking existed. Like what?

Eric Post (52:42)
I think it's great. I use it all the time.

Yeah, and

there's one for the image generator too, you know, so you can...

Mo Naboulsi (53:02)
I didn't know. Yeah,

I didn't know that. I'm like, dude, what? I kept it to myself because I felt like like I was just like, Oh, God. All right. No, that's my bad. I

Eric Post (53:05)
Yeah, it's great.

No, sorry. That's not me. Geez

Louise.

Mo Naboulsi (53:13)
That

was another thing. Listen, man, I so appreciate these conversations. it's so phenomenal. And listen, everybody that is listening today, you know, the industry thrives on noise because confusion is profitable. And if you're ready to find the signal, I do want to mention Project Satori bypasses the industry futile cycle of endless training by directly installing a proprietary done for you infrastructure. It bridges the gap between abstract strategy and concrete execution.

System leverages over two and half million in research to transform established commission earners into dominant market authorities through automated AI engines and operational order. If you're interested, you want to check it out, want to understand it more, head over to heyhusy.com. And for everyone that's listening, you can find us everywhere that you can listen to your podcast, Spotify, iHeartRadio, the interwebs, Apple podcasts. We're there. We love you. See you next time. Eric, it's been great, man. Thank you so much.

Eric Post (54:08)
Take care everybody.