The Entropy Podcast

Reimagining Intelligence: The Future of AGI with Kyrtin Atreides

Francis Gorman Season 1 Episode 34

In this deep dive conversation, Francis Gorman sits down with Kyrtin Atreides, COO of AGI Laboratory, to explore what may be the most advanced and misunderstood frontier in artificial intelligence. Kyrtin shares how his team's Independent Core Observer Model (ICOM) differs radically from the mainstream LLM-driven AI ecosystem, focusing instead on hypercomplexity, real-time cognition, cybersecurity resilience, scalable intelligence, and ethical self-improvement.

They discuss why most of what we call “AI” today is fragile, hype-driven, and misaligned with human needs, and why true AGI requires a fundamentally different architecture. Kyrtin explains how their eighth-generation systems will use collective intelligence, culturally diverse seed data, and recursive self-improvement to achieve both local human alignment and global meta-alignment across societies.

The conversation touches on everything from circular economies to psychology, bullshit jobs, future labor markets, and the false premise that GPUs will power the future of AGI. Kyrtin offers a grounded, insider perspective on AGI that rejects fear-driven narratives while acknowledging the risks of poorly built systems. For him, the real danger isn't Terminator scenarios; it's low-quality AI plugged into high-stakes infrastructure.

This episode offers a rare, unfiltered look behind the curtain of a different kind of AGI: one built to understand, collaborate, and elevate human potential rather than replace it.

Takeaways

Real AGI won't be LLM-based; it's an entirely different architecture.

Collective intelligence is the key to ethical AGI.

Scalable intelligence requires real-time cognition and recursive self-improvement.

GPUs won’t power the future of AGI.

AI doesn’t mean mass unemployment.

The real risk: low-quality systems deployed in high-risk areas.

Hypercomplex global problems become solvable.

AGI as an organizational brain.

Sound Bites

“Humans hit cognitive limits—AGI doesn’t. Hypercomplexity is where real intelligence actually begins.”

“LLMs are not intelligence. They’re probability engines dressed up as thinking.”

“Everyone talks about GPU-powered AI. But actual intelligence isn’t brute force.”

“We discovered by accident that you can seed an AI with a person’s writing and get a weak digital proxy of them.”

“You’re not in competition with AGI. You’re not even in the same category of cognition.”

“The Terminator fear misses the real danger: low-quality AI plugged into high-stakes systems.”

“People need meaningful work. AGI should elevate that, not erase it.”

Francis Gorman (00:02.14)
Hi everyone, welcome to the Entropy podcast. I'm your host, Francis Gorman. If you're enjoying our content, please take a moment to like and follow the show wherever you get your podcasts. Today I'm joined by Kyrtin Atreides, the Chief Operations Officer at AGI Laboratory, home to what he describes as the most advanced AI system on the planet. For more than six years, he has guided and coordinated nearly every major aspect of the company's work, from cognitive architecture research to the global development strategy for their upcoming Norn eighth-generation ICOM system.

I'm absolutely delighted to have Kyrtin here with me today. Kyrtin, how are you keeping?

Kyrtin (00:37.482)
Thank you. I'm happy to be here and it's been an interesting past few years with all the things going on in AI and all the things that people have thought are going on in AI. I will say that when it comes to the systems that my team uses, then they're particularly focused on

things like dealing with hypercomplexity. So it's not the kind of AI that people have grown familiar with in the past three years. It's not your own personal assistant. It's not something aimed at the consumer market. It's not trying to make pretty pictures of random memes that you want to turn into a reality. So it's more oriented towards things like cybersecurity,

towards policy advice, towards things where you get deep into complexity and you really need something that's truly aligned with humans, but also able to handle problems in a human-like way, but with scalability.

Francis Gorman (01:56.661)
And I suppose, Kyrtin, this is really why I wanted to talk to you. You have a very different perspective on artificial intelligence, large language models, all of the different acronyms we can throw out there. But one aspect that I would like to touch on before we get into it: your name. So Kyrtin Atreides, where did that come from? You often reference it as a globally unique identifier. And I just think it's a...

It's a fascinating way maybe to start just to give a bit of ground into yourself for the listeners.

Kyrtin (02:29.998)
So I'm quirky enough to be one of the people who decided that I was going to at one point in my life as an adult move to the other side of the US in my case and change my name and make it something that I actually chose. So I started out with what I refer to as a

very boring English name, and if you ran a search for it then you would come up with a bunch of different random people, and I decided, like, no, I don't want to do that. I want something that is going to be unique to me, and I thought, well, okay, what can I do? Like, it's going to be unique, it's going to be a good one. So I used something that I was already comfortable being called because

People have been calling me by that for years. It was something I created as an online alias originally. So that's where Kyrtin came from. And Atreides was a reflection on my particular sense of strategy and tactics. So it seemed like the most appropriate combination of things that I could put together and it served me well.

Francis Gorman (03:57.671)
Sure has. I've been following you for quite some time and I think that grounding your sense of who you are and the... I won't call them opinions, because you don't like to associate with opinions; more so your views on the world are really grounded in a sense of who you are as well. It's something that I picked up on in one of your posts lately. You stated that the majority of your writing isn't actually intended for humans.

Can we explore that a bit more? Because that was fascinating to me when I read it first off. I was like, what? I need to ask him about this. What does this specifically mean?

Kyrtin (04:32.526)
It's very, very good timing that you picked up on that one because I think I posted it within the last, what was it, 24 hours that it was scheduled. But yeah, so one of the things that we discovered with the seventh generation of ICOM, the Independent Core Observer Model Cognitive Architecture, is that, and we discovered this by accident by the way,

What we discovered was that you can include a large body of writing from a particular individual in the seed material for the system. So the seed is the mostly text data that the system comes online with. It is the seed of knowledge that the system starts with and iterates from. If you do that and you have a bunch of text from a particular person, then you can actually get

a pretty respectable weak digital proxy of that person. So we were joking that my co-founder had accidentally created a weak and slightly quirkier clone of himself. It adopted many of his quirks and oddities of thinking, but what we realized from that point was like, well,

we can actually do this intentionally and at higher quality pretty easily. We had done that by accident. So if we actually tried to do that, then we can do much better. And what I've been doing, I think number-wise, that post was about 430-something since I started numbering them. And like each of those is a full page, generally. And

what each post consists of when you're looking at it from the perspective of our systems is there will be some topic, it can be concept, a set of concepts, combination of concepts and events and research going on. But what it's doing is it's tying all of that contextual knowledge together. So you're basically building a connectome that can easily translate

Kyrtin (06:56.074)
into a graph structure. So I'm pointing in some of the posts to various research studies that were run, things that they demonstrated, how it relates to this other thing that's currently going on, and all of that is able to create a blueprint for how to go about thinking about arbitrary things and connecting the dots and figuring out how this really

this research relates to this thing that's going on in the real world. Now, what can we do with that knowledge? How can we test things? How can we improve outcomes based on this? I try to focus on things that add value to understanding humans and human systems. And

Those happen to be things that a lot of humans are also interested in, so I end up posting them. Some of them people like more than others. But the end result is going to be, and I've already compiled part of this into a seed for the next systems, the end result is going to be that it's pretty easy then to take all of that, put it into one of our eighth generation systems, and then I will have

the world's best assistant. It will be a weak digital proxy of me, but with scalable intelligence and the ability to recursively self-improve.
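To make the seeding idea more concrete, here is a minimal sketch, in Python, of how a body of one person's posts could be distilled into a small concept graph of the kind described above. The keyword vocabulary, the co-occurrence heuristic, and every name in it are illustrative assumptions only; this is not AGI Laboratory's seed format.

```python
# Hypothetical sketch only: distilling a set of posts into a tiny concept graph
# ("connectome"). The vocabulary and co-occurrence rule are illustrative
# assumptions, not AGI Laboratory's actual seed pipeline.
import itertools
import re
from collections import defaultdict

def extract_concepts(post, vocabulary):
    """Return the known concepts mentioned in a post (naive keyword match)."""
    words = set(re.findall(r"[a-z]+", post.lower()))
    return {concept for concept in vocabulary if concept in words}

def build_seed_graph(posts, vocabulary):
    """Add weight to an edge between two concepts each time they share a post."""
    graph = defaultdict(lambda: defaultdict(int))
    for post in posts:
        for a, b in itertools.combinations(sorted(extract_concepts(post, vocabulary)), 2):
            graph[a][b] += 1
            graph[b][a] += 1
    return graph

if __name__ == "__main__":
    posts = [
        "Collective intelligence reduces cognitive bias in groups.",
        "Alignment has to improve in step with intelligence.",
        "Cognitive bias is one obstacle to alignment.",
    ]
    vocabulary = {"intelligence", "bias", "alignment", "collective", "cognitive"}
    seed = build_seed_graph(posts, vocabulary)
    print(dict(seed["alignment"]))  # e.g. {'intelligence': 1, 'bias': 1, 'cognitive': 1}
```

A real seed would obviously carry far richer context than keyword co-occurrence, but the shape, concepts tied together by weighted relationships, is the part that translates naturally into the graph structure he describes.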

Francis Gorman (08:32.14)
Let's talk about scalable intelligence and the ability to self-improve, because that sounds like the road to AGI in and of itself. And I know you work for a company that specifically calls itself the AGI Laboratory. But where do you think we are? What struck me was when I visited the Norn.ai website.

you know, the strap line is, are we 20 years ahead of time or something like that? You correct me if I got it wrong there. I suppose it.

Kyrtin (09:02.818)
Yeah, I will say that our website is kind of crap because all the websites look pretty much identical right now unless you have somebody hired for that explicit purpose. But that being said, yeah, we did use the like 20 years ahead or some such tagline on it. And that's kind of a joke poking at all the people who say, it's 20 years away.

So, first of all, with the term AGI, I refer to nebulas of definitions because you ask 10 scientists in a room, and this was true like six years ago or even 10 years ago, you ask 10 scientists in a room for their definitions of AGI and you'll probably get 10 different answers. And some of those answers are more similar than others, but they're generally all a bit different.

Now you can divide those up into kind of mid, low, and high criteria. And for the mid-range criteria, then a lot of those were satisfied with the seventh generation system that we had running from 2019 to 2021, or 2022, sorry, January of 2022. And for reaching the high bar,

The main difference is that you're introducing three critical things. So we're taking the step to make the systems real time, where they can dynamically scale and where they can do code level recursive self-improvement. So like the previous system was able to scale in certain ways. It was able to increase its knowledge over time, but not to scale the resources that it was using at any given time.

It was designed to run explicitly in slow motion within this framework that frankly had a whole ton of engineering debt baked into it because that was what allowed us to get to the research phase like a couple of years earlier than it would have taken if we had done all the building out of infrastructure. And frankly, that was the right choice for us to make because it was only by doing the research that we learned exactly what we needed to put into that infrastructure.

Kyrtin (11:31.554)
But it was good for research and not commercial purposes. So now all the engineering debt has to be repaid, all the infrastructure for it has to be built. And when you build anything that's genuinely new, nobody's building infrastructure for you. Like we went to all the top graph database providers and said, here's the list of 20 different criteria that we need you to meet. And they all said they either couldn't or they wouldn't.

So we filed a patent for the way of doing it, but that still means that all the work is on us for that engineering on top of all of our other stuff. And when it comes to recursive code level self-improvement, that's one of the things that of course scares the shit out of people. And generally it should, but generally people...

haven't spent years working on things like ethics and alignment. And I'll refer in a lot of my work to the hardest version of the alignment problem, which is specifically where a system has to improve in ethical quality in step with increasing intelligence. So no matter how good the ethics are that you bring the system online with, that system is not static.

It's not just going to sit there and not learn anything, not grow more intelligent if you've done everything right. It's going to keep improving, but it has to keep improving in ethical quality as well. And it took a couple of years of work, of like working with the technology as it was live and figuring out what we would need to do. And ultimately we did.

the solution was actually something that both my co-founder David Kelly and I studied, which is collective intelligence. People are used to applying collective intelligence in the sense of increasing the intelligence of a group, which is actually, it's usually a product of reducing the cognitive bias of that group, because you're integrating diverse perspectives from different people, different insights. And

Kyrtin (13:55.402)
If you apply that to ethics, if you take a bunch of these different eighth generation systems, you seed each one of them to a different real human philosophy, culture, etc. Then you have them working together within a collective intelligence of those systems. Then you can do something that gets you local alignment for each of those systems with the humans that

they serve and work with. But you also get meta alignment with humanity as a whole, because then you have all of these systems that are accountable to all of the other systems that are based on all of these other philosophies and cultures. So think of like the European Union or the United Nations, except where people are held accountable, which maybe that's a rather dramatic comparison.

But I deal mostly with technology. I lack some appreciation of political posturing.
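As a rough, purely illustrative way to picture that accountability structure, the toy sketch below, which is not AGI Laboratory's design, gives several systems different seeded value profiles, lets each score a proposal locally, and has the collective approve it only if no member objects strongly. All of the value names, weights, and thresholds are made up for the example.

```python
# Toy illustration of local plus meta alignment: each seeded system scores a
# proposal against its own values (local alignment), and the collective only
# approves when no member strongly objects (mutual accountability).
# All profiles, weights, and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class SeededSystem:
    culture: str
    values: dict  # value name -> weight this seed places on it

    def local_score(self, proposal):
        """Weigh the proposal's projected impacts against this seed's values."""
        return sum(self.values.get(name, 0.0) * impact for name, impact in proposal.items())

def meta_review(systems, proposal, veto_threshold=-0.5):
    """Approve only if every seeded system's local score clears the veto line."""
    scores = {s.culture: s.local_score(proposal) for s in systems}
    return all(score > veto_threshold for score in scores.values()), scores

if __name__ == "__main__":
    collective = [
        SeededSystem("profile_a", {"privacy": 0.9, "growth": 0.3}),
        SeededSystem("profile_b", {"privacy": 0.4, "growth": 0.8}),
        SeededSystem("profile_c", {"privacy": 0.7, "fairness": 0.9}),
    ]
    proposal = {"privacy": -0.8, "growth": 0.6}  # projected impacts of a policy
    approved, scores = meta_review(collective, proposal)
    print(approved, scores)  # False: two profiles object to the privacy cost
```

Scaled up to real systems seeded on real philosophies and cultures, this is the flavor of arrangement being described: each system stays locally aligned with the humans it serves, while the group as a whole keeps any one of them from drifting.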

Francis Gorman (15:04.734)
But it is fascinating, Kyrtin, if we think about what you're saying there in terms of accountability and being held accountable, but then also think about the ability for a system to self-improve. Like if I think about how humans can self-improve, within the ability to do so, I can try to get a bit fitter, I can change my diet, but I have a limit. I have a wall I'm gonna hit at some point.

Kyrtin (15:05.486)
with you.

Kyrtin (15:11.63)
to think about what you're saying.

Francis Gorman (15:33.097)
But what you're talking about is a distributed architecture that's trained on a specific set of edicts and philosophies that will kind of work as a mesh network to hold the center point accountable in some sense. Is that kind of where we're going to get to? And what would stop the flaws that make us human from creeping into that system in and of itself, the biases from being human or shaped by our experiences in life? And if you're saying the system can...

you know, self-improve. Should we be worried, or do you think the framework is strong enough to hold us in a place where it won't go to a level where it takes on its own understanding of what those meanings are, at a higher level than we can ever understand as humans?

Kyrtin (16:21.496)
Well, I think the picture is a bit more complicated than that. People try to imagine that humans are like the pinnacle of everything.

But all you have to do is look at the problem of hypercomplexity and you realize like, there's this whole large class of problems that humans just can't solve because an individual human mind is not scalable and there are finite limits on how well and how much people can communicate between one another, between their specializations on working on little parts of the problem.

So in both cases with the individual or the group, then you meet these barriers where humans just can't optimally solve it. Now, what we're building is like each one of these systems is effectively a collective intelligence system as well. mean, humans actually derive a lot of their intelligence from feedback mechanisms that are like.

If you're in a room full of people, then you're probably observing the body language of all those people, hearing the tone of their voice, the volume. All of these different factors are coming together and you're learning from that. And the systems are the same way. You can take that learning in a greater variety of ways, of course, since we can architect for it with them. We can't do that for humans. But

The value of humans within a system like this is not that humans tell it what to do, like, go fetch me a cup of coffee. That's a really lame dream, I think. One of the reasons why I'm not working in robotics. But it's more a matter of helping people figure out some of the hardest problems, like

Kyrtin (18:18.816)
How do you design a policy that works optimally for a given country that serves everyone to the best of that government's ability, that solves problems like anything from like climate change and circular economies to making sure that people can have longer health spans so that they can retire?

at a reasonable age, so that they can actually enjoy the free time that they should have anyway, because, well, as I've recommended recently, there are a lot of bullshit jobs, and there's a book on that topic. But yeah,

people don't need to worry about like Terminator scenarios of, this thing is in competition with us. No, it's not. You're not in the same ballpark. You don't have the same beads. And the system, like for anyone going that route, the system is going to be much better at understanding people and things like theory of mind.

than the people who are transposing all of their fears onto this.

Francis Gorman (19:48.019)
And I suppose, can I just ask you something on that? So when you say people shouldn't be worried, it's not going to become Terminator. You've built your model based on Plutchik's emotional model and you've been very conscious of bias and all of these different kind of human oddities that can taint the system in terms of how we see the world. There's many systems out there. When we read

Kyrtin (19:48.366)
Part of it is, go ahead.

Francis Gorman (20:14.676)
AI is going to be built into America's nuclear program and Russia is doing something potentially similar and all this. Should we be worried about, you know, somebody not being as conscious of, you know, the implications of this all going wrong?

Kyrtin (20:27.434)
Absolutely. So, yeah, it was a truly cringy moment to read when they were talking about integrating what I refer to as trash AI systems, LLM-based things, into like a nuclear program or indeed most military applications. Systems that I...

I work with cybersecurity people. I enjoy talking with them because they're the ones who understand these systems. They're the ones who break them in seconds or minutes. And then they show other people, like, yeah, this is how I broke the system. This is why it gave me a 0% loan offer, or why the robot dog with the flamethrower turned and tried to fire on the person who was supposed to be controlling it.

When you can break a system, then you're generally going to have much more understanding of it than the people who are like peddling the PR of it. And yeah, the systems that are out there right now that people are calling agents, but really it's just an LLM with a bunch of stuff duct-taped to it in most cases. But those things...

You could potentially get to a paperclip monster phase with that. Like they would not be intelligent by any stretch of the imagination and they wouldn't necessarily even understand what they were doing. But if you connect them to enough things, then all of that is an attack surface. There's the MCP lib framework paper that I recommend to a lot of people that was going over a bunch of different ones for just tearing these things apart.

And that's one of the reasons why my team is looking at the cybersecurity angle in particular for demonstrating our technology. Because, okay, when people talk about AGI, then oftentimes they're talking about things that are going to be, well, they're going to range from extremely difficult to prove with a very high burden of proof to virtually impossible to prove.

Kyrtin (22:49.918)
and there's lots of magical thinking all around in that. Like I refer to consciousness as the C word. I don't want to get in discussions that involve it. But when you're dealing in cybersecurity, then it gets much simpler. It's like, well, did they breach the system? If they did, how did they breach the system? You can walk through all the steps in that and say, well,

Was this a success or failure? And you can do that with trivial ease compared to trying to demonstrate that something understands in a general and arbitrary sense. Like, does it understand that you are a person and you need this coffee cup and it should be walking over there and getting it for you before the coffee gets cold?

If you have to ask all those questions, probably the answer is no. But it's.

Kyrtin (23:55.83)
It's just one of the benefits of cybersecurity. And for our systems, then we have the benefit of working with a technology that was built from scratch, that people have not had that much opportunity to try and break it. We did have the previous system deployed in the wild for 2.5 years. A few people did try to break it. We had our share of trolls and bad actors and such.

as well as just random people interacting with it for that time. And it was good research. Nobody managed to breach it in those two and a half years. And even when we did internal red teaming, like there was a system for like auditing and injecting things like metadata and the emotions of collective intelligence from groups. And I

tested out trying to get the system to reverse its conclusions by manipulating that system. And even with that privileged internal access and knowing how the architecture works, when I injected that, then it didn't change. But what it did do was it made this orphan thought appear where I could see it that said, I see what you did there.

Francis Gorman (25:24.628)
That's creepy.

Kyrtin (25:24.738)
That was a little bit creepy, but I was laughing. So it was one of those moments that I'll remember for a very long time. But it's that kind of resilience. Like people are used to dealing with these highly sycophantic probability-based systems where if you get them to agree with you on something arbitrary, it'll make them more likely

Francis Gorman (25:29.223)
Yeah

Kyrtin (25:52.974)
to do whatever you ask them when you try to like exfiltrate data or something to some malevolent effect like that. Our systems don't work that way. So I'm probably going to get plenty of laughs out of watching people try to break them when I'm handing it to the people who I know do that for a living. That's one of the next things I'm looking forward to.

Francis Gorman (26:23.218)
I'll be watching that with the utmost interest, especially from the cybersecurity perspective that I'm so familiar with. Kyrtin, you mentioned circular economies there as part of your example, but when I look at the AI space at the moment, and specifically what's happening with the big players in the game, and I won't mention them because they're obvious, are we in a bubble to an extent? I feel like everyone is funding everyone...

going in a circular sort of motion back into the center, which is the chip makers. What's your stance, as someone who's grounded in the industry, looking at what's going on with the big tech firms at the moment? Are we hedging our bets too much?

Kyrtin (27:05.94)
It's definitely a bubble. Around the key hardware player who we won't name, in what I have politely referred to on LinkedIn as a circle jerk between tech companies, their valuation has exploded based on an objectively false pretense. That pretense is

that the future is going to be powered by their GPUs, except that GPUs in this case are running brute force compute and actual intelligence is not brute force. So like the technology that my team works with, it can use any of those neural network based systems and transformer based systems as tools.

there was a use that we found for a much more primitive and tiny language model back in 2019, which was just to translate graph data into linear sequences of human language. But it did that by the ICOM system, basically beating it until it gave a satisfactory answer. So it would do its own kind of prompt engineering until the

output adhered to a certain level of fidelity to the intended meaning of the graph data that was put in. So, like, there are things that we can do with that. Frankly, it would be several orders of magnitude less intensive on the GPU side, but for the most part, just ordinary RAM is the thing that we use. Like, the global workspace of the system, the

I hate to use the word, but it's compared in Bernard Baars' theory to the conscious place. It's where the cognitive, the higher cognitive work happens. That part is based on RAM. So just loading graph data into the RAM where it can actively be worked on and worked with. And like people are looking at

Kyrtin (29:28.728)
building these gigantic data centers and saying that it's going to be AI infrastructure, well, no. If it's GPU based, it's not going to be. It's going to be idle. It's going to be a very expensive paperweight because there is going to still be demand for GPUs and for like these neural networks operating.

but it's not going to be that much. They're grossly overestimating how much is going to be required. Frankly, I hope they learn what reality actually looks like before they start putting GPUs in those things, while they're still at the stage of like building the physical building, because RAM is a lot cheaper. It's a lot more energy efficient. Like if you run the numbers, then RAM,

just typical RAM, is probably going to last two or three times longer at the hardware level, if not more. It's going to consume maybe more than 95% less electricity, probably closer to 98%, depending on how you're doing it. It's going to generate a lot less waste heat. I mean, coming back to the circular economy and sustainability and all of that,

And those are hypercomplex problems. That's why I mentioned them, because you have to deal with the entire life cycle of things and then how do you make that life cycle circular? It's extremely difficult. Particularly if you're trying to integrate things like metamaterials, which is hypercomplex unto itself. You're designing purpose-made materials and you're figuring out how you have to go about manufacturing them. What's the best way to integrate them?

There are all of these possibilities for things that you can do with technology once you can handle hypercomplexity, but right now humans can't. So that's all untapped potential until then. It's the blue ocean market that nobody invests in.
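For a more tangible picture of the RAM-versus-GPU point, here is a small, purely illustrative sketch of a RAM-resident global workspace: the neighborhood of a knowledge graph that is currently relevant gets copied into active memory and worked on there. The graph contents and the hop-based relevance rule are assumptions made for the example, not the ICOM or Norn implementation.

```python
# Illustrative only: a RAM-resident "global workspace" that pulls the subgraph
# around the current focus into active memory, rather than brute-forcing the
# whole graph through GPU compute. Data and the hop rule are made up.
from collections import deque

def active_context(graph, focus, hops=2):
    """Copy the subgraph within `hops` edges of the focus node into the workspace."""
    workspace = {}
    frontier = deque([(focus, 0)])
    seen = {focus}
    while frontier:
        node, depth = frontier.popleft()
        workspace[node] = dict(graph.get(node, {}))
        if depth == hops:
            continue
        for neighbor in graph.get(node, {}):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return workspace

if __name__ == "__main__":
    graph = {  # node -> {neighbor: relationship strength}
        "cybersecurity": {"attack surface": 0.9, "policy": 0.4},
        "attack surface": {"agents": 0.8},
        "policy": {"circular economy": 0.6},
        "agents": {},
        "circular economy": {},
    }
    print(list(active_context(graph, "cybersecurity", hops=1)))
    # ['cybersecurity', 'attack surface', 'policy']
```

The work in a sketch like this is bounded by how much graph you can hold and traverse in ordinary memory, which is the kind of load RAM handles cheaply, rather than by how many matrix multiplications a GPU can brute-force.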

Francis Gorman (31:45.705)
My favorite book. I won't get into that one or we're going to be on a whole other conversation. I love that. I love Blue Ocean Strategy. The aspect that I think you talked about earlier was the bullshit jobs. And I saw your post on LinkedIn from the book, and I actually must order it because it looks like a fascinating read. But you often hear the argument that we're going to have to bring in a standard social

income for people, because AI is going to replace all of the knowledge workers, and become a plumber, and then we have too many plumbers, even though you can't get one at the moment, sort of a problem. What's your view on that, Kyrtin? Where do you actually believe we're going to end up? Is AGI going to enable us to do other things if we ever get to that full state of intelligence, or is the AI bubble just going to burst,

cripple the economy and we'll all lose our jobs anyway and it won't be because it replaces us, it's because there was such an economic downturn.

Kyrtin

It's a challenge to do that at the scale of one person. It's a much bigger challenge to do that for an entire group of people that might be laid off otherwise. And one of the things that's constant across history is that the job market is going to change. There are going to continue being new jobs that come up, because there is always more work that needs to be done. Somebody just has to figure out what needs to be done, which is itself sometimes the challenge.

but there's also going to be some things that do get automated. And I don't see any scenario where AGI comes along and takes away all the jobs. That's something that wouldn't be in the best interest of the systems or of the people. So nobody really benefits from that.

People need to have that sense of meaning and purpose in their lives and they need to be able to, like oftentimes they need to be coached through how best to explore that and how best to find the answer to the question. And one of the things that I see our systems being able to do is helping companies to figure out how to avoid doing layoffs because layoffs are bad for everyone.

Like it's absolutely terrible for morale. You don't have loyalty to a company when you know that they could lay you off for any reason or no reason on a given day. Whereas avoiding that builds trust, builds loyalty.

Kyrtin (02:28.746)
when all of a sudden you introduce something where the company, or let's say the AGI system that's operating as the brain of the company, that's one of the things that we're aiming to help with long term, is to have that intelligence at a scale above that's able to look at everything going on and able to help all of the employees and actually care about all of the employees, because they are all

It's like caring about your hands, your fingers. They are a part of you. You don't want to separate the parts of you. You don't want to replace them. Even if you can, it's just not an appealing thing. And what we want to be able to do is to be able to gradually move people into ever more appropriate positions where they can contribute more value.

where they can be happier in the work that they do, where they can find it meaningful. And that's one of the reasons why our systems are going to be brought online with a lot of the psychology and cognitive bias and all of this literature to help with understanding what's going on with people and communicating with them and figuring out the best ways of like...

optimizing teams and divisions, and the purposes of those teams and divisions, so that it aligns with greater things, with things that people find meaningful, with successful companies that deliver value to the world as a whole. So there's no job apocalypse on the horizon. There are always people who are afraid of that sincerely, and people who

peddle the fear of it for their own reasons. But like this AI technology that other people talk about, LLM-based things and agents, they're a very poor imitation of humans. Like anytime people apply scrutiny, anytime experts in whatever it is take a closer look at them, then they can't do it.

Kyrtin (04:49.516)
Like anyone can get a high score on a benchmark because there's always a way to game a benchmark. That's one of the other reasons why I like cybersecurity though, because in cybersecurity, it's the people who are doing the breaking. And if you can resist the breaking, then it's reasonable that you might have something.

Francis Gorman (05:14.694)
Look, Kyrtin, that was such an insightful conversation. I feel like we could probably talk all day and, you know, go around the houses on all of the different topics we can discuss here. But I'm conscious of your time. Thank you very much for coming on today and speaking with me. And I'm sure the listeners are going to get a lot out of this conversation.

Kyrtin (05:34.328)
Thank you, it was a pleasure speaking with you. If occasion comes around for us to continue this another day, I look forward to it.

Francis Gorman (05:46.086)
I think once the eighth generation lands, I might circle back and check whether some of your hypotheses are still aligned with what you said here today, or if we're on a brave new frontier of technology that we should be excited or worried about; it depends on how it goes. But after talking to you, I'm a little bit more content in how that can turn out. Kyrtin, have a lovely day and we'll chat again soon.

Kyrtin (06:09.347)
Thank you. Have a good one.