The Entropy Podcast

Original Intelligence in the Age of AI with Jonathan Aberman

Francis Gorman Season 2 Episode 5



In this episode of the Entropy podcast, host Francis Gorman engages with Jonathan Aberman, CEO and co-founder of Hupside, to explore the concept of 'original intelligence' in the context of artificial intelligence (AI). Jonathan shares his extensive background in venture capital and education, emphasizing the need for a new framework that values human originality amidst the rise of generative AI. He discusses how AI, while efficient, often leads to homogenization in business practices, making differentiation crucial for companies to thrive. Jonathan argues that businesses must leverage human creativity and insight to stand out in an increasingly AI-driven landscape, warning against the dangers of relying solely on technology for innovation.

Takeaways

  • AI is a tool, not a deterministic force.
  • Businesses must compete on differentiation, not just efficiency.
  • Originality is key to standing out in a homogenized market.
  • Education needs to adapt to include AI as a tool for creativity.
  • Servant leadership will become more important in tech-driven companies.

Sound Bites

  • "AI is a self-referential echo chamber."
  • "We have to reframe education."
  • "Technology isn't deterministic; it's a tool."

Information discussed:

If you want to access some of the resources discussed in this episode, you can find them on the Hupside website: https://www.hupside.com/

For international listeners outside the US, you can use the following postcode: 99999

Francis Gorman (00:01.902)
Hi, everyone. Welcome to the Entropy podcast. I'm your host, Francis Gorman. Before we dive in, if today's conversation challenges you, sparks a new idea or sharpens how you think about the world, don't keep it to yourself. Subscribe, leave a review and share this episode with someone who enjoys staying curious. Today I'm joined by Jonathan Aberman, an entrepreneur, investor and innovation strategist, best known as CEO and co-founder of Hupside, a technology company focused on what it calls original intelligence, a framework and platform designed to measure,

cultivate and scale human originality in the age of artificial intelligence. Jonathan, it's lovely to have you here with me today. You're very welcome and I have to say I was intrigued with the concept of original intelligence. Can you talk to me a little bit about what that means for you or how you came across that concept and decided to do something with it?

Jonathan Aberman (00:39.642)
Well, it's lovely to be with you as well. Thank you for including me in your podcast.

Jonathan Aberman (00:58.586)
Well, so the backstory is I've been in the venture capital industry around software for about 25 years. And during that time, I've seen the emergence of machine learning. And I saw the beginnings of what now we're seeing in the world of gen AI and agents. And as an investor, I became concerned that we were not educating our young people to prepare for that world. So I actually took some time off.

from 2019 to 2023 from investing to be the Dean of a business school here in Washington, DC, where I combined computer science, business, art, creativity and design into a single college and started to teach this idea that technology in itself is great, but technology and human insight is what's truly valuable. And I went back to investing in 2023 with a firm called Rockston Ventures. And I was looking at different

investments and making investments and seeing more and more that AI was a tremendous enablement for business models and provided tremendous efficiency, but it was really starting to drive very, very hard against the idea that humans had something valuable to contribute. It was very much a technological solution rather than a holistic one. And while I was feeling that, I ran across a couple of scientists at two local universities here

who had developed a technology to objectively identify when any output is demonstrably different from what you get from an artificial intelligence baseline in a typical performance, i.e. when AI hallucinates. And I started to talk with them about AI's limitations, particularly around the idea of homogeneity and the homogenization of information.

And what I realized was that they had stumbled onto basically the holy grail for a post-AI world, which was: how do you demonstrate where differentiation is going to come from in a world where knowledge becomes fungible? And that's really what the Hupside technology is all about. It is a brand-new AI model, proprietary and self-contained, that

Jonathan Aberman (03:21.848)
objectively identifies when output is in fact original and novel and not the manifestation of what you would describe as the shared novelty of AI. You know, truly unique. And that is, as I'm sure you know from other guests you've had on the show, the biggest challenge right now: now that we have gen AI, what does it actually mean for business?

And my hypothesis is that we have sort of forgotten Business School 101, which is that businesses must compete on differentiation. And we have lost track of the fact that humans are actually the source of differentiation, not gen AI. So that's how I came to the company and why I decided to stop investing in companies and to instead become the founder of this one.

And here I am again, living the founder's dream. And I'm sure you'll ask me why I would choose that yet again, but it is an addiction, I suppose.

Francis Gorman (04:30.062)
I certainly will ask you about the founder journey, but you said something there about differentiation and companies needing to have their own kind of identity. And I see AI almost as an eroder of differentiation. It's making everything the same. And I think the best example of this is LinkedIn. If you go on LinkedIn at the moment, all of the posts follow that same robotic text with the dashes in it, with the

punchy line up front; nobody's standing out anymore. We're starting to blend into a sea of bots and generative AI. Is that going to become a real problem when companies start to kind of adopt AI at all the different levels of the enterprise without having that originality, that kind of edge, that differentiation factor to say, we are unique, here is our value proposition? That's going to become a major problem. Yeah.

Jonathan Aberman (05:25.003)
Absolutely. Absolutely. This is the fundamental problem, and we've already seen it play out and it will continue to play out. So the fundamental problem is that gen AI at the moment is still a retrospective engine. It still looks back and it gathers knowledge and then probabilistically determines what's most useful to us.

It doesn't have the ability to create true novelty because it's trained on, frankly, now it's being trained on its own output. So it's a self-referential echo chamber that's really, really effective at homogenizing and presenting information. Now that in itself is neither a good nor a bad thing. It's a fact. The problem is that we are not being told, or discussing, the true limitations of AI today in a way that allows enterprises

to deploy it properly, meaning it's being presented like a technology solution similar to other technology solutions, but it's not. It's a new way to think. And the new way to think is to use AI as an adjunct to your individual creativity, not as a substitute. I use AI all the time. I'm sure you do too, but I use it in a very discriminating way. I use it as an assistant to help me create

inferences, which I then use to create better output. And your LinkedIn example is a perfect example. What is happening today is people mostly are using gen AI as a substitute for their own inference, as a substitute for their own judgment, and then falling into the groupthink that AI creates. That today is a huge problem for businesses because

they are finding that their people are unable to take these tools and create differentiated output. We call it workslop. So the problem for today is AI transformation is failing to generate a value add because it's not being presented in a way that actually brings people into the conversation and utilizes their originality. The bigger long-term problem that nobody's talking about, in my opinion,

Jonathan Aberman (07:37.241)
is that longer term, if businesses rely upon gen AI, they will all become the same, right? Unless they either own their own gen AI engine or, more likely, they pay a premium to get access to the gen AI, because at some point, if we listen to our friends on the West Coast, gen AI will start to generate novelty, true novelty. So to me,

big business now is on a fool's errand, in a way, where they are chasing and allowing themselves to become reliant on something that is sucking the value out of their businesses, and long term they run the risk of having to pay a premium to get differentiation back from the people who own the large models. It just boggles my mind, and I'm a business school professor, it boggles my mind that a group of people

who are in charge of companies around the world, who study business and strategy, have somehow forgotten that efficiency is only one of two ways you create value; the other is differentiation. And that requires novelty. And novelty doesn't come from AI, which gets back to why I started this company: because somebody has to provide an economic value tool to evaluate and show where humans are valuable. And that's what Hupside does.

Francis Gorman (09:00.558)
It's a very interesting angle. One of my favorite books is Blue Ocean Strategy. I'm sure you're familiar with the concept. And when I look at AI, I see a flood of red oceans. Everyone is doing the same thing over and over again. As you said, it's regurgitating its own content at this point. And it's becoming very visible. So I almost look for...

Jonathan Aberman (09:04.87)
Mmm.

Francis Gorman (09:22.746)
I almost look at the perfection as painful in some ways now. When I see a spelling mistake, God, I think I've said this before, I love a spelling mistake now because I know someone has actually taken the time to put that content in. It's the human flaws coming through. But you mentioned you were a dean in a college there. In Ireland, I have a brother who teaches and a couple of friends in teaching. And what I'm hearing from

Jonathan Aberman (09:31.343)
Yes.

Jonathan Aberman (09:41.84)
Yeah, it was. Yes.

Francis Gorman (09:50.742)
a lot of teachers is they're using generative AI to kind of create the content. The kids are using generative AI to answer the content. And then, you know, teachers are using generative AI to correct the content. And to me, that's a self-fulfilling prophecy: when the next piece happens, the originality will already have been eroded because you won't have built up that muscle memory, that discipline, that pain of learning. Do you see a double-edged sword here where

The next level of talent coming in to feed these companies is so reliant on the AI in and of itself that we have a problem with creative thinking, critical thinking and originality from a human aspect.

Jonathan Aberman (10:30.406)
Absolutely, and I've spoken about this at length and I continue to, and we actually have in our product pipeline for later this year a product that will allow professors to be able to see when their students are actually providing original output on top of the AI output. And this is very important because right now education is approaching AI like it's a cheating problem.

You know, that somehow if you can isolate the students in some way, using blue books, oral exams, various things, you can somehow separate them from the tools and get at their natural ability and teach their natural ability. And there are a few problems with that. The first one is there's, I think, a sort of self-deception that many of us in education suffer from, which is the idea that we

have the only way to teach people how to apply what we like to call critical thinking, which is basically creating novelty out of sets of facts. The reality is that this is no longer the case. Artificial intelligence, gen AI, at scale, can be and is a tool for gathering knowledge and bringing inferences to the human that will allow the human to start an analysis at a higher level. That means that to take the position that

pure human thought is the best way to teach students is itself not setting them up for the world that we live in. That's number one. Number two, by treating AI as a cheating problem, I think that academics are not understanding that gen AI is actually as addictive, if not more addictive, than social media. And the challenge that that creates is students fall into a very closed doom loop where they rely on an AI

and it reinforces them. So to me, we have to reframe education. And instead of denying the existence of AI, we need to in fact understand it's available. And we need to start evaluating students on however they do it, whether they do it old school by reading texts and never using AI, or whether they use AI. What ultimately matters is: did you provide me with an insight that I couldn't have gotten out of Claude?

Jonathan Aberman (12:49.456)
Did you provide me an insight I couldn't have gotten out of ChatGPT? And if the answer is no, then my question to you as a student is: what are you actually learning and why will you be employable? And I believe that there are tools available already. I see professors, when I go and talk with universities, who are starting to do things to try to accomplish this. Frankly, the biggest challenge education has now is that in a lot of ways it has to return to its roots,

meaning why universities started back in medieval times: as a way to prepare the human brain to seek a higher level of understanding. I think that they need to recommit themselves to that and understand that the challenge they have to overcome is the sameness of knowledge now.

Francis Gorman (13:38.702)
I love that. It is really interesting to me, and I often ask myself, if everyone is using AI to do the thing I need them to do, do I need the individual itself? And I grapple with it.

Jonathan Aberman (13:40.068)
Mmm.

Jonathan Aberman (13:51.971)
And I would argue that you do. Listen, I think that the value stack in business, the value stack of a business today, is IT at the bottom, cybersecurity next, gen AI and agents next. And at the top is a layer of original intelligence. And the layer can be very, very thin in an agent environment where agents do a lot of the day-to-day lifting. But today,

some humans have to provide the value add of judgment and differentiation to satisfy a consumer, whether it's how you knit the agents together or how you deal with the irate people on the phone when the agents fail. And then if you take other roles, like strategic roles, leadership roles, sales and so forth, there's a large premium still on individual problem solving and coming up with new approaches. And so

I think that original intelligence basically is now the part where businesses will find differentiation.

One thing I do want to point out to you, though, just to make sure for our listeners: I'm not talking about originality like Van Gogh or Jimi Hendrix. What I'm talking about is that every one of us has within us the inherent desire to seek novelty, because that's how our brains work. We seek safety and we seek novelty. It's just a question of how we apply it. You know, some of us are like Van Gogh or Jimi Hendrix,

hugely, hugely expansive in how they use their originality. Others of us just get up every day and do our jobs and do them well, and we use originality to solve the problems in front of us. Both are equally valuable in a business context. And frankly, when I'm talking about differentiation, what I'm talking about is that throughout an organization, there are different places original intelligence will matter. The secret for the enterprise now is to understand the different personality types,

Jonathan Aberman (15:47.235)
match them with the right jobs, and then teach them how to use AI appropriately for those jobs. And none of those things are happening right now. None of them.

Francis Gorman (15:56.238)
Thanks, John. I think you've hit a lot of key things there and it really brings it to life. And I think these conversations are important because there's a lot of fear out there. There's a lot of skepticism, but there's also a lot of opportunity. And I think businesses are going to have to recalibrate to an extent, potentially as they get leaner, to make sure that there are individuals embedded at different layers that have that creativity, that have that curiosity, that want to

Jonathan Aberman (16:03.791)
Yes.

Francis Gorman (16:23.886)
look at the problem and maybe use the tools in front of them, which is the agentic or the generative or whatever flavor of artificial intelligence, to achieve that outcome. But it's having that original thought that makes that outcome possible, if that makes sense. Do you think we're going to have to change how we hire and train individuals into our organizations?

Jonathan Aberman (16:44.803)
Without question. And I think we're going to need to change the presumption that the only businesses and activities that matter are technology-led. So let me answer the first part first. At the end of the day, if you are willing to buy my hypothesis, or my company's hypothesis, which admittedly, we've been in business for less than a year and we released our technology last month. I mean, it's not like if you wanted to measure human originality, you could have done it last year. So we'll give industry a break.

If you agree with my hypothesis, and people do agree, that you need to have original intelligence in an organization to differentiate, then absolutely, the category of original intelligence is going to become the fertile place where a lot of startups, a lot of businesses, are going to make a difference in the post-AI world. It's inevitable. It'll happen in training. It'll happen in leadership and development. It'll happen in team building. It'll happen everywhere, because ultimately we will elevate human insight back into where it needs to be in the value chain,

but in a different way. It won't be just, are you Leonardo? It will be, will you be able to work with gen AI and do your job better, right? I think that that really is the most important thing for us to keep track of as we look at the economy going forward. Now, the second thing is, at the moment we are in a world where we're thinking that the only human endeavors that

provide economic value tend to be ones that touch technology. We tend to devalue artisanship. We tend to devalue really, really great carpenters and plumbers, or really, really great teachers. And it's just the way our society works. The comment you made earlier about spelling mistakes, I think, is very telling. I believe that over time, true authenticity, originality, however it's manifested, is going to become something that people are going to want to consume.

Because as information gets commoditized, people are going to want to have something special. So we may see a resurgence of artisanship and craftsmanship and guilds and human labor that is unanticipated right now, but could be a very large way that many people make their money over the next 25, 30, 50 years.

Francis Gorman (19:04.302)
Yeah, I find that super fascinating. The authentic approach that you can only get from hands on metal or timber or whatever it is, you know, creating something unique from an individual perspective that has flaws, has nuances, you know, has been worked by someone, and you can attribute it back to a fully human, you know, work life cycle. I think you're right, to be honest. I think that is it. I think people will probably also strive for

Jonathan Aberman (19:10.565)
Yeah.

Francis Gorman (19:31.424)
locations where they get to detox from technology. These islands where you pay five grand a night to stay, with your phone and laptop and iPad and digital assistant taken off you at the door, are probably going to thrive in the next couple of years as well. Jonathan, you talked about being a founder and doing it all again. I'm always intrigued to talk to founders and people who've taken the risk, you know, jumped back into the life cycle. You know, it's the long slog, the hard slog, the

Jonathan Aberman (19:37.691)
Yeah.

Jonathan Aberman (19:56.954)
Mm-hmm.

Francis Gorman (19:59.886)
the problem they want to solve, but they don't know if it's going to last past the first year. What drove you to kind of go, I want to go at this again?

Jonathan Aberman (20:11.546)
Well, for this one, I think that it was very much, and is very much, mission-driven. And I think this is true for most entrepreneurs that are successful long term. Entrepreneurs are mission-driven, you know, and sometimes it's a broad social mission. Sometimes it's a narrow, I found something cool and I want to make money. But there's always a mission. And for me,

having been involved in education as a professor of business, having been involved in policymaking and economic development, having been involved as an investor my entire career, I've done all those things sort of at once, and had a journalism career on the side as well. To me, when I saw this technology, I felt like the most important thing I could do with this period of my life was to take a really good run at making sure that it was commercialized, because

what Hupside does is provide an economic argument for human labor in a knowledge worker economy post-AI. I can point to data from our platform. If somebody is an assessed individual using our platform, I can point to that person and say that person has the capability of taking AI and creating something that you won't get without them.

I can do that. I can point it out to you, and it's demonstrable and it's irrebuttable. And so if you want to tell me that AI is the only thing that matters, I can tell you categorically, I'm not having this conversation with you as somebody who's socially aware, although I am. I'm in this conversation with you because you as a business want to make money. And this tool, this technology, will allow you to know that if Francis takes

this assessment and he comes out as somebody who's an expander, you can take that to the bank that he's going to take those tools and he's going to create something that's going to blow your mind. Take it to the bank. And you know, you might say, well, Jonathan, how can you say that? And the answer is because there's years and years of research that shows that when people are demonstrably creative, and remember, originality is the output of creativity, when they're demonstrably creative, they are prone to use tools better.

Jonathan Aberman (22:33.499)
They are much more tolerant of ambiguity. They are much more optimistic people. And fundamentally, they make things happen. So I looked at that and I said, I'm just not going to live in a world where technology and AI is deterministic of our society. I'm going to give people who want to make money an argument for why they're fools if they don't look at their people and train them based upon how they're situated to make them able to use AI better.

Because if they do, they'll make more money. And then I've leveled the playing field. So now if you want to talk about regulation of AI, great. You want to talk about the social aspects of AI, great. There are big conversations that need to be had. But let's level the playing field so we can just have a conversation about AI and humans on a level playing field: where am I going to make the most money? And when I saw that, I thought, what more important thing could I do with my life right now

than make sure that we have conversations like this? Because I know how to start companies; I've helped start many, many software companies in different ways over the years. I've got a great team of experienced people, lots of resources available. I have an obligation to try. So here I am. And I'm having a blast, but it's all about mission, Francis. It's all about mission.

Francis Gorman (23:53.614)
I loved that, Jonathan. The one thing I'm always intrigued to understand, especially from someone who's kind of been a serial entrepreneur and has had multiple startups and has been involved in that investment cycle: you pick up a lot of lessons along the way. When you look back across your career, are there certain things you wish you had known earlier, or that taught you the most important lessons for the decisions that you make today?

Jonathan Aberman (24:18.182)
It's really funny, you know. To be honest with you, I tend to cope with stress, and I tend to live life, by always reminding myself, when I make a decision, that I made the best decision at the time on the information I had. So I tend not to be somebody who looks back and says I wish I'd done something different, because that's not the way I'm wired. And I think that's true for most entrepreneurs that manage to do it again and again. Having said that, you do learn

from experience; you learn from repetition. And so there are a number of things that I'm applying to this experience that are clearly lessons learned. Things like, for example, I've been around a lot of technology commercialization deals over the years, so I know how important it is to build a very compelling and strong patent portfolio from the beginning. So I'm taking that really seriously in this company. There's differentiable technology, and it's being protected avidly, for example.

I learned over the years that I'm much happier when I have one or more really, really strong business partners rather than doing it on my own. So when I started Hupside, I made sure I had really strong technical co-founders in my two scientists, but also a very strong technical business co-founder in my co-founder, Eric. And, you know, the financing strategy: raising money through a SAFE, and raising it from people, individuals that get the joke. You know, there are things that you learn over time. So

what I would say is that life is an opportunity for self-improvement, if you're self-aware, and on the entrepreneurial journey you'll always make new mistakes. But it's pretty clear to me, and now that I've said this, I'll probably be out of business in a week, but having said that, I think I've avoided a lot of mistakes in the last year that I might have made if I hadn't been involved with it for the last 25 years. That's for sure.

Francis Gorman (26:16.28)
Hopefully we might have you back next year just to make sure that you didn't go out of business a week after we recorded.

Jonathan Aberman (26:19.552)
Absolutely. I would appreciate that. Yeah. And if I'm broadcasting from a park bench someplace, then those things didn't go quite as well, and maybe I'll just come to Ireland and sleep on your sofa. We'll see. No, I'll be OK. Trust me, I'll be all right.

Francis Gorman (26:25.527)
and

Francis Gorman (26:31.47)
I'll feel bad now if that happens. I think as I look forward, the way that leaders lead is going to have to change. I'd like to get your perception on what you think that's going to look like as the world evolves. Are companies going to get leaner? Is there going to be a need for

leadership to be more technically astute or do you think the leadership model as it stands today will sustain?

Jonathan Aberman (27:02.15)
Well, I think that there are a number of ways it's going to play out. One way it's playing out right now is the tension between what I would call top-down leadership and servant leadership. You know, one of the hallmarks of the current model of AI innovation is that it's fundamentally a top-down model when you cut through it all. It's very much command and control. And I think that you're seeing it play out with a lot of AI transformations looking more like the Hunger Games than anything else. Use this or I'll fire you.

So one of the interesting issues, and again, this is why I came back to technology, is I'm a servant leader, and I think that servant leaders like to have tools through which they can understand how their people can flourish. So there's going to be a tension, I believe, between people who look at AI as a technology solution and people who look at AI as an enabler, the top-down versus the servant leaders. My hypothesis is that

over the years it's been demonstrated that servant-leader-led organizations tend to be more agile and tend to be better at developing competitive advantage from value add. So I think that what gen AI does is create a lot of efficiency, which results in the potential for smaller teams being able to do more things. So when you look at leadership, I think that

there's going to be a huge premium on individuals who understand that AI, robotics, quantum compute and everything are tools and enablement for differentiated business models, and that the best people are going to be those that are agile, able to add value. So servant leadership is going to become, I think, more important in the modern industry if we have differentiated companies. To do this, you have to be technically aware.

I certainly am technically aware. I am the lead product designer right now for my company. I definitely have to understand how the models work and various things. Do I need to know how to use Claude to code? Do I need to understand exactly how the embedding and transformer models are working? No, I don't think so. I think that you have to understand the limitations of technology in order to be able to be a good leader for technology companies.

Jonathan Aberman (29:24.674)
So to me, ultimately a really good founder needs to understand his people, needs to understand the roles they fill, and ultimately needs to have below him or her a team of trusted folks that can manage the finance, operations and technical side of the business, and then orchestrate it, which again comes back to servant leadership. To use the analogy of real football,

I think that a CEO of a tech company needs to be a midfielder, not a striker.

Francis Gorman (29:59.894)
I love that as well. A couple of good ones in there that I'm going to have to listen back to after this and, you know, let soak in a bit more. John, there's one thing, and I'm not sure if the talk in the States is the same, but there's a lot of talk about an AI bubble and, you know, it potentially bringing the economy down with it. For someone who's obviously put your chips down in the AI game, I'm sure you have a take on this one.

Jonathan Aberman (30:07.396)
Mm.

Jonathan Aberman (30:22.68)
I absolutely do. I lived through the internet bubble. I was very much involved in that and the original commercialization of the web, so I saw that bubble firsthand. And I think it's very important to first of all ask, are we in an AI bubble? Yes and no. Are we in an AI bubble from the standpoint that there's an

overarching hype cycle where you have a lot of people who are what I'll call Sherlocking on large language models and creating service businesses and making pretend they have product companies? Absolutely. You know, one of the reasons why I started Hupside was that it's a proprietary model, an engine that doesn't require any external input. It truly is distinct. It's one of the few

AI deals I saw outside of the gen AI models that actually is distinct. If all of them went away, our business could still operate. And that was really important to me. So there's definitely a wave of follow-on businesses, knock-on businesses, that are going along with this gen AI wave. That's number one. Number two, as is the case whenever you have a technological wave, whether it was the railroads, the internet,

computers, there's always an over-investment. It's inevitable, because there are always going to be winners and losers. I think with the current situation with OpenAI, Anthropic and the rest, there's no doubt that at some point the curves have to cross and they have to become economically viable, or the business model falls apart.

Where investment bubbles burst is where the people supplying the money for that chase lose heart. The reality is that, over time, the genie is out of the bottle, the horse is out of the barn. It's too useful. Some subset of the current players is going to win it, whether it's Anthropic, it's

Jonathan Aberman (32:33.836)
OpenAI, it's Gemini, it's Microsoft, it's a new emergent player, it's the Chinese, it doesn't matter: somebody's going to win, because it's too important. And when they win, they will be able to capture monopoly profits, and they will then be able to provide high-quality versions to some customers, or keep them for themselves. So in other words, the investment bubble will inevitably burst, but the ubiquity of Gen AI will not. And

it's the same as what happened when the internet bubble burst. There were too many people putting telecom cables in the ground. There were too many startup businesses hyperventilating over ASPs, which is now what we call software as a service. And the markets got tired, and it burst. The same thing will happen at some point here. But what won't happen, in my opinion, is that Gen AI goes away. You know, somebody like Cory Doctorow, whom I respect enormously, his hypothesis is it's all going to go away.

I don't agree with that; there's too much money in it. But there will be a period where people will lose heart and money will flow away. And because right now the AI bubble is such a supporting part of the current United States economy, there will likely be a recession. It'll look a lot like 2001 when it happens. But don't forget, businesses like Facebook and Google and

others really got their start in 2003 and grew on the basis of the infrastructure that was created during the internet bubble. The same thing's gonna happen again. I'm not the least bit bothered about it. And by the way, since you kind of asked, my bet is that what we're doing only becomes more valuable after the bubble bursts. Because after the bubble bursts, Gen AI is not gonna go away, but it's gonna become a more valuable resource, which means people are gonna really need to use it, but they're gonna wanna be able to leverage

every dollar they spend and they're going to come back to the need for really smart people. So I'm happy with where I'm situated personally. I'm not too concerned one way or another to tell you the truth. I'm just building my company.

Francis Gorman (34:45.102)
I like that. I like the honesty. It's always an intriguing question because I get very different answers depending on who I ask. If I ask a philosopher, I'll get one answer. And if I ask a scientist, I get another. And if I ask a business owner, I get another. So it's nice to see the variety and different perceptions of where we are. I'm just sitting back and watching and seeing when it crashes to throw the money back into the stock exchange.

Jonathan Aberman (35:08.89)
So in the "for what it's worth" department, I am very bullish long-term on the value of human insights. I'm very bullish long-term on industries that are going to depend upon that. And frankly, I'm also very bullish on Europe, because I think that with the separation that's being caused right now, because the EU is taking a more holistic

view on privacy and AI deployment, it may find itself, in a funny way, better configured for this new methodology of really looking at human value-add. I'm very optimistic about the EU as a market for a new kind of approach to AI.

Francis Gorman (35:58.546)
That's interesting. I think that, yeah, it's hard to know which way it's going to go at the moment. It does look like the US companies are driving the innovation, followed by China, or China's ahead, depending on where you look. It's all a little bit gray, and then Europe is trying to play catch-up. But there are some very promising ventures and investments happening there. The last thing, John, I want to ask you about, just because it seems to be coming more into the mainstream, is the debate about

the existential threat that AI may cause to humanity, and whether you believe that's a possibility or if it's just hype. We've got a lot of people like Geoffrey Hinton and others on the stage calling this out, saying that we need guardrails, we need to think about what we're doing, and we need to slow down. What camp are you in?

Jonathan Aberman (36:45.37)
Well, first of all, I think that we are suffering from what I would describe as the Overton window. If you've ever heard that phrase, it's the idea that you successfully frame a conversation so that issues that could be discussed never are. For example, in the U.S., the idea that taxes must be cut, or that government must shrink, that in itself is a successful shift in the Overton window. Taxes don't have to be cut. Taxes could go up.

Government services could be better or they could be worse. The point is, the conversation on AI is very much framed as if technology is deterministic: left to its own devices, it will continue to improve, continue to improve, continue to improve, put people out of work, take away free will from people, and ultimately one day it will be omniscient, a god, and it will discern everything for us. That's because the Overton window has been shifted, so AI is only looked at as a technology that's inevitable.

It's not inevitable. It's a tool. So in the first instance, if people are worried about the Armageddon of AI, why don't we first acknowledge that humans actually provide the value-add? And by the way, humans actually can provide the oversight on whether or not a tool is used. Look, we've had nuclear weapons for a long time. I'm not going to go dark on you right now, but just because the doomsday weapon exists doesn't mean it has to be used. So

to me, yes: the concern that we get to a point where one day there is a model of Gen AI that is so competent and moves so fast that it can create tremendous mischief or bring down the system is 100% something we should be worried about. But by the way, we should be worried about quantum computing for the same reason.

And by the way, we should be worried about killer robots for the same reason, and climate change for the same reason. Existential threats exist because we as humans are not willing to acknowledge that technology isn't deterministic; it is just something that we apply. So to people who say AI should be regulated, that it should be looked at from the standpoint of ethical use and all those things, the answer is 100%. 100%.

Jonathan Aberman (39:04.218)
There should be choices being made. The deterministic view that AI should just continue, continue, continue to its logical conclusion is not true. But where I think I differ from the millennialists, the people that talk about the end of the world, is that there's a whole bunch of stuff that could happen between now and then. And by just talking about the end of the world, you basically allow the window to be shifted away from the issues that fundamentally matter, which is: what are humans going to do

before AI becomes our god, if in fact it ever does. And the answer is, as I'm discussing with you right now, there are a lot of answers to that question. And that really, again, just comes back to the point. Look, at the end of the day, you care about doing the show, you know your job, and I care about mine. In everything that we do, humans wanna matter. We are novelty-seeking engines, and ultimately we value our own novelty.

And to live in a world where that isn't properly valued, I don't think is a world that anybody wants to live in. So let's have that discussion.

Francis Gorman (40:14.024)
I like it. Humans need purpose. It is definitely an underlying need in our Maslow's hierarchy of needs: purpose, comfort, you know, all of these things. Before we finish up, Jonathan, if you take an outlook over the next two years, where do you see this technology evolving to? We've had quite a shift in the last two.

Jonathan Aberman (40:16.131)
Absolutely.

Jonathan Aberman (40:34.874)
Well, if I have my way, two years from now you're going to see a lot of companies leaning into their value proposition as the originality of their teams, and hiring and promoting and building teams based upon the originality competency. I think that is something that would provide a very strong narrative

to push back on the idea that AI is deterministic, and it would make a big difference in a lot of places. So I'd like to see that happen. Putting aside my own self-interest and my company's self-interest, I think that the likelihood of a market adjustment or a market shock, or what you would call a bursting of the bubble, is more likely than not. Whether it's because of geopolitical factors

causing a change in capital flows because there are too many conflicts going on, or the tariffs in the United States really starting to erode the free movement of technology, there are a lot of things that cause me to be concerned. So I think over the next couple of years, you're going to see a massive change in how people are looking at Gen AI deployment. Having said that,

I think that the two years after that are going to be unbelievably exciting, as they were from, say, 2003 to 2006, after the internet bubble burst. The potential for AI to change society for the better is enormous if we change the models for how we use it. And a good market shock might actually help people snap out of it and really remember that, at the end of the day,

we're the shapers of the society we live in, and we are ultimately the reason why society exists. So fasten your safety belt, but I would say stay focused on the North Star, which is that, at the end of the day, people are the most important thing in the world, not tech.

Francis Gorman (42:49.582)
"Stay focused on the North Star." I do like that. John, thanks for coming on today. I really appreciate it. I think the listeners are going to get a lot out of this conversation. So thank you for taking the time to speak with me.

Jonathan Aberman (42:51.501)
Absolutely.

Jonathan Aberman (43:00.43)
Happy to do it. And folks, if you are interested, we're Hupside.com, and we always have up on the website an example game you can play, so you can find out your own original intelligence. So if you want to learn more, get in touch with us.

Francis Gorman (43:13.014)
Excellent, I'll stick that link into the episode details. Thanks, Jonathan.