Cutting Edge AI

#7 Ingmar Klein (CEO, Huzzle) on the Future of Work, AI, Hiring, and Human Data.

Angel Invest Season 1 Episode 7


AI may not just replace work. It may also create entirely new categories of human work around training, evaluating, and operating intelligent systems.

In this episode, we speak with Ingmar Klein, co-founder and CEO of Huzzle, a company that evolved from a talent marketplace into an AI-powered recruitment engine and, more recently, a provider of human data for frontier AI labs.

Ingmar shares how Huzzle built AI interview systems capable of assessing candidates at scale, why hiring is one of the first workflows where AI can already outperform humans in consistency and efficiency, and what it takes to combine automation with human judgment in recruitment.

The conversation then expands into the emerging market for human feedback and training data. Ingmar explains why experts across domains may increasingly spend time evaluating model outputs, improving agents, and helping AI systems operate inside real software environments. We also discuss why the next bottleneck may not be model capability, but adoption inside companies still running on legacy systems.

If you’re interested in how AI is changing hiring, creating new job categories, and reshaping how organizations operate: this episode is worth a listen.

[00:00:00] Ingmar Klein: And I think we are going to have something similar with AI, where AI is that new machine, and sort of counterintuitively, humans will be needed to design it and make sure the machine is running. I think you'll have lots more of these jobs in the future.

[00:00:15] Robin Harbort: This is Cutting Edge AI, brought to you by Angel Invest, with your hosts Jens Lapinski and Robin Harbort. Our guest today is Ingmar Klein, the CEO and co-founder of Huzzle. Ingmar spotted a lot of things early, including AI interviews for recruitment, leading to Huzzle Talent, and data for AI training, leading to Huzzle Labs. Huzzle's first business was a student job platform, which evolved over time into a global AI recruitment engine serving clients like Apple and British Airways. With more than 300,000 talents on the platform and thousands of top roles, Ingmar saw change: people increasingly create data to train AI systems, or improve AI systems directly. This is now the business of Huzzle Labs. We will talk about how you could interview 100,000 people in one day, how to improve frontier AI with data, and the future of software in the age of AI agents. This is the Cutting Edge AI podcast by Angel Invest. Let's go. Hello Ingmar, welcome to the Cutting Edge AI podcast.

[00:01:45] Ingmar Klein: Hey, thanks for having me. 

[00:01:45] Robin Harbort: And also hello to Jens.

[00:01:50] Jens Lapinski: Hi guys, great to be here.

[00:01:53] Robin Harbort: So Ingmar, for those who don't know you, could you briefly describe who you are and what you are building right now?

[00:02:00] Ingmar Klein: Yeah, for sure. So I'm Ingmar, born and raised in Munich. I started my first company at 16, went to university very briefly in St. Gallen, then dropped out after one and a half years and founded Huzzle. That was around five years ago now. With Huzzle, we've built primarily a talent marketplace where we match talent from emerging economies with full-time roles in the US and the UK. And since last year, we've also built a new business unit on top of the existing core business, where we partner with frontier labs and enterprises to evaluate and train AI.

[00:02:45] Robin Harbort: So you mentioned your background at St. Gallen. I think you also went to Munich, to the Technical University of Munich, and you've also been part of TRIUMF, coming from TUM. So how did those steps lead to founding Huzzle in 2021, which is kind of a human talent network? How do those things align?

[00:03:05] Ingmar Klein: I think it was a very nonlinear, convoluted journey, as it probably is with many entrepreneurs. But the first touch point you mentioned there, the Technical University of Munich, was when I was 16. I got an exception from my high school to go to TUM one day per week, because back then I actually wanted to become a physicist and study physics. I was able to go to TUM and work with a really amazing professor there, in the excellence cluster for particle physics, on fundamental research. I supported the lab with software engineering things here and there. Later I also went to Vancouver, to TRIUMF. TRIUMF is a research institution, Canada's particle accelerator, similar to CERN but on a much smaller scale. I worked on an interesting project there too, in the lab. But I realized that this culture wasn't really for me. I thought that for the types of problems these researchers were trying to solve, we had far too little funding, and people were working in really isolated ways. So I thought, okay, I have to do something else to have a bigger impact. I decided to study business, because I didn't know what else to study, and at St. Gallen I stumbled upon this problem: I thought a lot of young people didn't really know, or didn't really have, good options with their careers. They didn't have a good understanding of where their potential could fit into this economy in the best way, and I thought this was due to a lack of products built for this audience to help them enter the world of work more effectively. That's when the first idea of Huzzle came.

[00:04:35] Robin Harbort: So how did you want to change this process with Huzzle? What was the first idea? And was there already some kind of AI involved?

[00:04:40] Ingmar Klein: Yeah. With the first idea there was no AI. This was in 2021; AI existed, I agree, but we didn't use it back then. The first idea was to fill young people's calendars with what we called tiny touch points with the world of work. These small touch points could be insight days, company visits, lunches or dinners with people who work at a company, just very efficient touch points with many, many different industries. Because I believed back then, and still believe, that the best way to figure out where your fit is in the economy is by trial and error: trying a lot of different things and iterating to see, okay, this might be the culture I want to be in, this might be the function I want to work in, this might be the mission I want to work on. So we thought the best thing we could do was build a platform that aggregates all these efficient little touch points with the working world, ask young people on the other side what they're interested in, and then fill their calendars with these throughout the year.

[00:05:47] Robin Harbort: Okay. And how did this evolve into what I see now on Huzzle's website, where it's basically a lot more AI-driven?

[00:05:55] Ingmar Klein: Yeah. So that was the first version of Huzzle we built back then, this idea that we collected these opportunities and filled people's calendars with them. We pivoted to something else a few months later, because we realized how difficult it is to scale a two-sided marketplace, especially among this user segment. You have a lot of default churn, where people churn from the platform once they find a job, and you also have a lot of seasonality: usually in spring or in fall university students are looking for opportunities, but then they become inactive.

So we were facing some big challenges back then when it comes to user churn. We found good distribution channels through student societies and student organizations, which also increased retention. I'll just fast forward a few years: we built up a university careers marketplace with around 300,000 registered users, of which 100,000 were active on a monthly basis, primarily in the UK but also across other European countries, with partnerships at leading universities like UCL, Cambridge, etc. We ended up selling parts of that platform to a business in the UK. We kept the brand, we kept the talent pool; we just sold parts of the code base of the tool we'd built. That was around two years ago. We did it because, after having built this platform and reflecting on the business, we thought it could become a profitable, good-sized business, but not a business of significant impact or significant size. For that, there would have been too many structural challenges to overcome. For example, the much more heterogeneous markets across Europe, where it would have been difficult to apply the same go-to-market we had in the UK to other geos across Europe. Also, at that point in time, the UK announced a recession, which caused a lot of enterprises to cut their budgets for early-careers hiring, which wasn't a tailwind for us either. So we decided to sell parts of it, and what we were left with was this talent pool, all these people, and our brand. That was around two years ago, when we also saw that AI was at an inflection point, ready to automate one of the major bottlenecks of manual recruitment: pre-screening interviews. So this is when we started building an AI interviewer to automate this, and I think this is when the growth really picked up for us.

[00:08:20] Jens Lapinski: What's the first wedge that you went into? Because typically when you start something new, you want to start somewhere small, you pitch it, and you get traction. What was the first segment where you got real traction?

[00:08:34] Ingmar Klein: Yeah, that's a great question. The first small segment where we got real traction was sourcing sales talent, SDRs specifically, from emerging economies, back then mostly from South Africa, and placing them into small to medium-sized businesses in the US and the UK. The reason it was so attractive for these businesses was that the whole customer journey was productized. It was very fast to hire people, and they got the same talent quality for lower cost compared to the US or the UK.

[00:09:10] Jens Lapinski: How much lower was it? Was it half? Was it a quarter? What was the saving that made it so compelling?

[00:09:16] Ingmar Klein: Yeah, if you compare some of the normal SDR salaries, US versus South Africa, it's at least half.

[00:09:23] Jens Lapinski: Yeah, that makes sense. I think people start switching when something is at least 40% cheaper. If it's only 20% cheaper, nobody cares. So then that transition: you added more and more. Was it more with the same companies, or different companies? How did that progress?

[00:09:40] Ingmar Klein: We started acquiring more different types of businesses. We went from these SMB-type businesses to more scale-ups. So on the customer side we explored a few more segments, but also on the talent side. We started with sales talent, but today we do a lot of placements for operational talent, marketing talent, and software engineers, sort of across the board. The majority is still sales and operations, but a lot of these other verticals are also starting to grow.

[00:10:07] Jens Lapinski: So the interviewing that you did, how did that actually work? You interviewed the folks, and then you basically said these are the good ones, you can hire them? How did you do the sales interview with the AI? Explain how that fits in.

[00:10:22] Ingmar Klein: Yeah, great question. When we looked at the manual recruitment process, we thought that especially for high-volume hiring, AI interviews, at that stage two years back, could be used with the most leverage, because when you post a job today in a Latin American region, in sub-Saharan African countries, or in Southeast Asia, we probably get hundreds of applicants the same day. So the question is not so much an applicant problem, it's more an assessment problem. So when we built our AI recruiter, we focused a lot on this talent evaluation and assessment part. The AI interviewer is essentially a similar experience to joining a Zoom call and getting interviewed. We were experimenting back then with some personas, so you'd have a persona interviewing you, but we quickly discarded that because we thought it just wasn't authentic. We focused instead on the performance of the interview: the pacing of the answers and questions should feel natural, the questions should feel really natural and should dig into your experience. And then we started building more things on top of it, and improving our own model that's built into the evaluation part of the interview as well.

[00:11:36] Jens Lapinski: What kind of roles can this be tuned to? What I'm trying to get at is: quite a few questions in an interview are obviously generic, right? It's always the same questions, irrespective of what role you're looking for, more or less. Then there are the "how do you deal with whatever" questions, and then there are the function-specific questions. I would imagine that by now... how many people have you interviewed to date? How many interviews has the software done?

[00:12:07] Ingmar Klein: I think almost 100,000.

[00:12:09] Jens Lapinski: Yeah. Let's not go into exactly what it does, but what's the variance in the responses? Because you must score people who take these interviews, right?

[00:12:17] Ingmar Klein: Yeah.

[00:12:18] Jens Lapinski: But let's talk about what the distribution curve looks like, of how good these people are at interviewing, or the scores that you give. What does that look like?

[00:12:28] Ingmar Klein: You mean in terms of how many people get through the interview, like...

[00:12:33] Jens Lapinski: I don't know whether you give points or how it works, actually, I've never asked. But basically, if you give 10 points, do some people really get 10 points, and do some people get one point, or do most get 5.7? What is the performance of people across these interviews? What does that look like?

[00:12:52] Ingmar Klein: In the end it looks like a bell curve, the distribution of the points. The way we do it is that we score the answers candidates give us individually. So we ask questions, and then we individually score the answers to those questions. We don't do anything like what you as a human have with gut feeling, right? That intuition is very hard. But we take all the hard criteria into account: how detailed is your answer, how well structured is your answer, does it really answer the core of what we were asking? If we ask deeper and deeper follow-up questions, can you maintain that same level of detail? Those are some of the things we do. A big part for us is also communication skills: how clear is your English, how fluent are you in English? This is actually where we developed our own model to evaluate this. But essentially, we take their answers and then rate each one, one by one, on a scale from 1 to 10.
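
For illustration only, the per-answer scoring Ingmar describes, rating each answer on hard criteria and averaging on a 1-10 scale, might be sketched like this. The criterion names and the unweighted average are assumptions, not Huzzle's actual rubric:

```python
# Hypothetical sketch: score each answer on a few hard criteria
# (detail, structure, relevance), each on a 1-10 scale, then average
# the per-answer scores into an interview score.

CRITERIA = ("detail", "structure", "relevance")

def score_answer(ratings: dict[str, int]) -> float:
    """Average the 1-10 criterion ratings for a single answer."""
    for name in CRITERIA:
        if not 1 <= ratings[name] <= 10:
            raise ValueError(f"{name} must be on a 1-10 scale")
    return sum(ratings[name] for name in CRITERIA) / len(CRITERIA)

def score_interview(answers: list[dict[str, int]]) -> float:
    """Mean of the per-answer scores, rounded to one decimal."""
    return round(sum(score_answer(a) for a in answers) / len(answers), 1)

interview = [
    {"detail": 8, "structure": 7, "relevance": 9},
    {"detail": 6, "structure": 6, "relevance": 7},
]
print(score_interview(interview))  # 7.2
```

Scoring many such interviews would produce the bell-curve distribution he mentions.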

[00:14:06] Jens Lapinski: If you benchmark this against a human conducting an interview, which you must have done...

[00:14:07] Ingmar Klein: Yeah.

[00:14:09] Jens Lapinski: What is the difference between a human interviewing somebody and the machine doing it?

[00:14:14] Ingmar Klein: What we have done, and we're still constantly doing it, is compare the manual ratings with the evaluation of our own model. We released the AI interviewer already one and a half years ago, and in the beginning we tested it a lot with our internal recruiters. Back then we had three full-time recruiters in the company; today we have two. We used them to test whether the AI was ready to be rolled out to all the applicants, and relied heavily on their feedback to tell us, okay, now it is at least as good as we are in terms of the evaluation. That was one and a half years back. And now, as you can imagine, we take the proprietary data we have post-placement, so we know how long candidates stay at the company, and we even get performance data on a monthly basis from these companies. We also have the manual labeling of our two full-time recruiters, who are still in the loop and giving feedback. This data set makes the evaluation better and better over time.
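
A minimal sketch of the kind of comparison Ingmar describes, checking the model's scores against the recruiters' manual ratings for the same candidates. The scores below are invented; mean absolute difference and Pearson correlation are just two common ways to quantify agreement, not necessarily Huzzle's metrics:

```python
# Compare human recruiter ratings with the AI model's ratings
# for the same candidates (all values made up for illustration).

def pearson(xs: list[int], ys: list[int]) -> float:
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

human = [7, 5, 9, 6, 8]   # recruiter ratings, 1-10
model = [7, 6, 9, 5, 8]   # AI evaluation of the same candidates

mad = sum(abs(h - m) for h, m in zip(human, model)) / len(human)
print(f"mean abs difference: {mad:.1f}")      # mean abs difference: 0.4
print(f"pearson r: {pearson(human, model):.2f}")  # pearson r: 0.90
```

High agreement on a held-out set is what would justify the "at least as good as our recruiters" call he mentions.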

[00:15:34] Jens Lapinski: But basically what you're saying is that the AI was as good as humans one and a half years ago.

[00:15:40] Ingmar Klein: Yes.

[00:15:40] Jens Lapinski: So now the AI is better basically.

[00:15:44] Ingmar Klein: At least the evaluation part. Yeah.

[00:15:45] Jens Lapinski: Yeah. Okay. What does that mean? Is it basically more consistent, or is it better?

[00:15:49] Ingmar Klein: Yes, that is the key.

[00:15:52] Jens Lapinski: Yeah.

[00:15:54] Ingmar Klein: So the key is objectivity and consistency, I think, a lot of the time with interviewing. Yeah.

[00:16:00] Jens Lapinski: How do you catch people who are cheating? It's not them, it's their friend, it's somebody else. How does that work? How do you prevent that from happening?

[00:16:05] Ingmar Klein: Yeah, it's hard. We have vision built into the AI interviewer, where we can try to see if you're reading off your screen. To be honest, you catch these pretty easily: when someone's reading off the screen, they don't come across as natural. And after we've made matches, we still have someone quickly looking over the candidates we send to clients, and they can then immediately tell: okay, this person is reading off the screen, this person doesn't act naturally. So it's a combination of vision in the product and a human in the loop.

[00:16:43] Jens Lapinski: I mean, there's an unlimited number. How many interviews could this machine do per day? What's the limit? There is no limit, right? It could do tens of thousands, hundreds of thousands of interviews per day, right?

[00:16:50] Ingmar Klein: Yeah, there's basically no limit. Yeah.

[00:16:53] Jens Lapinski: I mean, if this is true, why should anybody not interview people by machine? If you're doing it at scale, what is the reason for not doing it with a machine?

[00:17:02] Ingmar Klein: Yeah, you should. I think it's going to happen in the future. I think culturally there's still hesitancy at some seniority levels about being interviewed by AI. I think the AI interview experience back then also just sucked, because they weren't built well. You quickly lost respect for the company if you had an experience like that. That's why we thought the experience was so important for the candidate in the first place. Our average NPS on the candidate side is around 88 after going through these interviews. So that was really important to us. I think the first wave is applying this to pre-screening interviews in high-turnover sectors and large-scale hiring projects, and we see this in the human data space as well. Then I think it's going to make its way into even more senior roles down the road, at least for the pre-screening interviews.

[00:17:57] Robin Harbort: Ingmar, being exposed to so many hiring processes and open job roles, did you see a change in which roles are more requested? Did AI change something there? Because of course AI has some impact on job posts.

[00:18:15] Ingmar Klein: Yeah, it's a good question. We definitely saw some new roles, for example this go-to-market engineering position that was coined by a lot of the tools, like Clay, that came out. But when it comes to SDRs specifically: of course you need to use AI everywhere in your day-to-day to still be a top-percentile SDR, but we actually haven't seen much of a slowdown in hiring people as SDRs, even though one and a half years ago we also thought, okay, there are going to be AI SDRs everywhere, and then people won't have those jobs anymore.

But it turns out that running AI cold calls is actually pretty hard. People can still distinguish them, and by law you sometimes need to disclose that you're an AI when cold calling people. So the businesses hiring with us still think humans are much more capable at that. So we didn't see huge changes there, apart from some smaller new things like go-to-market engineers. Where you do see a lot of change is when humans go into these annotation roles, which, at least at this scale, weren't there five years ago.

[00:19:34] Robin Harbort: Can you maybe explain the specific job role of annotating and what exactly you see there, which is now basically Huzzle Labs, right?

[00:19:46] Ingmar Klein: Yeah. I think it started back then with self-driving companies in automotive, where in order to build these self-driving systems you need a lot of manually labeled data. For example: is this a person crossing the crosswalk? Back then you were hiring a lot in emerging economies as well, from India. Scale AI was sort of the first big company to do this at scale for self-driving companies, and they're essentially just labeling data. They're giving human feedback on model outputs: you have an output, and then a human needs to verify whether that output is true, or rate it and give their feedback, and that feedback can then be used to further train and improve these models. Coming from this self-driving, application-layer origin, it has evolved, through these AI labs popping up and LLMs becoming really, really good, into a much more horizontal job role, where essentially, I think, everyone in the economy who is really good at their job will in some way be giving human feedback to improve models. They will be labeling something or judging traces to improve agents.

[00:21:17] Robin Harbort: How big will this job category be, of human data annotators, people providing AI with their intelligence? Is this an implicit role, so people rate within their normal job, or are they employed full-time and basically submitting their intelligence? How does this work?

[00:21:36] Ingmar Klein: Yeah, it's an interesting question, also in terms of what the commitment is going to look like, how exactly this is going to look. Of the three options you just mentioned, we actually see all three at the moment. We see contracting gigs; these are, I would say, the main category, where you contract lots of experts for a shorter-duration project in which they label data or generate data sets to improve models. You also see people using platforms to do this in their day-to-day work, just on the side; you probably see it right now in companies, with people observing AI systems and giving feedback on an ad hoc basis. And you also see people who actually work full-time doing this. Some of the AI labs have shifted strategies on this, to employ people full-time in-house, to become experts for certain domains. And to your first question, how big will this be, what will it look like: I like to compare it with the industrial revolution, where a new class of workers was created who were there to design and run the machines. I think we're going to have something similar with AI, where AI is that new machine, and somewhat counterintuitively, humans will be needed to design it and make sure the machine is running. I think you'll have lots more of these jobs in the future.

[00:23:10] Robin Harbort: Can you recall a specific moment in the last one or two years that made you expand Huzzle into the AI data space?

[00:23:18] Ingmar Klein: It wasn't one specific moment. I think it was a few calls with my very excited CTO, who really wanted to go in there on a deeper level, because he got so excited and passionate about it that we looked into it more seriously. We knew about it before last year, but we thought: we have a good business that is growing very fast, let's not look into this, let's focus on the core business. But then it came back, and we started digging a bit deeper and thought, oh, we're actually pretty well positioned to take this seriously. And here are the things we would actually change if we were running that type of business, compared to the current market.

[00:24:05] Jens Lapinski: I have a question about feedback loops. The computer is now interviewing humans to do work for computers, right? Which is a little absurd if you think about it. But then what happens is that the computer could actually get feedback from the computer about the performance of those humans, right? Which could presumably lead to an interesting feedback loop where the computer says: these humans didn't do this very well, find some other humans. If you think about it, that will definitely happen this year; I'm sure you'll close that loop. How do you think that will play out? Let's say these people don't have the stamina, or they make too many mistakes. Do you think that, looking through the transcripts of these interviews, you can figure out retrospectively what kind of people to screen for, and optimize for eventual performance three months down the line, or something like that?

[00:25:06] Ingmar Klein: Yeah, I've also asked myself that question. We thought about this really deeply. I think the information you get through an interview, 30 minutes, or a bit more with multi-stage interviews, is still very limited. Even myself: when I started Huzzle, I found my co-founder, my current CTO, by posting a role on LinkedIn. I didn't know him, never met the guy before. I had a 30-minute call with him, and I sort of decided after that call to get married for five-plus or ten years, however long we're going to build this business for. So, yeah, this question has been there for a long time, and there's still no really good solution to it, because the information you get through an interview is very limited. I'm not sure if, based on that, you could reliably say, okay, this person will really perform well. But you're probably a little better than a human, who tends to be biased toward their own experiences, etc. Of course models are also biased, but it turns out they're actually far less biased when evaluating lots of different candidates, and much more objective, which is proven to result in better hiring decisions; there's some empirical evidence for that.

[00:26:33] Robin Harbort: Now, with Huzzle Labs, could you give an example of what data or intelligence you're providing for, let's say, an AI system, and how end users might notice this?

[00:26:48] Ingmar Klein: Yeah. We're offering two things, mainly, on that side. One is building RL environments, reinforcement learning environments. The other is mostly RLHF data, reinforcement learning from human feedback, or you can also call it expert data, where we recruit subject-matter experts across different industries. For example, PhDs in psychology who also speak German. We identify capability gaps of current models, where they fail within the field of psychology, where the answers they give are inaccurate. We then fix these capability gaps by providing the gold standard. So in the end you sell data fixes, basically JSON files, on that side. On the RL environment side, you try to do the same thing I described with the psychology experts, but you collect that expert data within a realistic working environment. One example: let's say you want AI to be able to do the job of a customer success manager. Then it's not enough to just train an LLM to give better text answers to questions, because that customer success manager also has to complete real-world tasks in their day-to-day. One approach could be to model a CRM that this person uses, and then collect what they're doing within that CRM. You then have data you can use to train agents to accomplish these tasks within that CRM, and you come much closer to the actual performance of human beings, to actually solving real-world problems, as compared to just providing answers to questions.
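
An illustrative shape for the "data fix" JSON records Ingmar mentions: a model failure in a domain (here, German-language psychology), the expert's gold-standard answer, and a rating. All field names and values are hypothetical, not any lab's actual schema:

```python
import json

# One hypothetical RLHF / expert-data record: the prompt where the
# model failed, the expert's corrected answer, and the expert's rating.
record = {
    "domain": "psychology",
    "language": "de",
    "prompt": "Erklären Sie den Unterschied zwischen klassischer und operanter Konditionierung.",
    "model_output": "...",           # the inaccurate answer the model gave
    "expert_answer": "...",          # the PhD expert's gold-standard fix
    "expert_rating": 2,              # expert's 1-10 rating of the model output
    "failure_tags": ["inaccurate", "incomplete"],
}

# Serialized, this is the kind of JSON file that gets delivered.
payload = json.dumps(record, ensure_ascii=False, indent=2)
print(json.loads(payload)["domain"])  # psychology
```

A delivery would be many such records, which the lab can use for fine-tuning or preference training.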

[00:28:45] Robin Harbort: So you're basically teaching the AI to use some specific software.

[00:28:50] Ingmar Klein: Yes, it can be tool use, so software, and also computer use, using your computer, etc., and applying the reasoning of these experts within that environment. Yeah.
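
A hypothetical shape for the traces collected inside a modeled environment like the CRM example above: each step records one action the expert took, so agents can later be trained to reproduce the workflow. Action names and fields are invented for illustration:

```python
# One invented expert trace inside a modeled CRM: an ordered list of
# the actions the customer success manager performed.
trace = [
    {"step": 1, "action": "open_account", "args": {"account": "Acme Corp"}},
    {"step": 2, "action": "log_call", "args": {"outcome": "renewal at risk"}},
    {"step": 3, "action": "create_task", "args": {"title": "Send retention offer"}},
]

def is_valid_trace(steps: list[dict]) -> bool:
    """Steps must be consecutively numbered and each name an action."""
    return all(
        s["step"] == i + 1 and bool(s["action"]) for i, s in enumerate(steps)
    )

print(is_valid_trace(trace))  # True
```

Validated traces like this are what an agent would be trained or evaluated against inside the environment.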

[00:29:05] Robin Harbort: Okay. So if we're now teaching AI to use software, do you think the software will stay? Because right now there's the buzzword "SaaS apocalypse". Will this happen? Because if we train the AI to use the software, we shouldn't remove the software afterwards, kind of.

[00:29:24] Ingmar Klein: Yeah, that's a great question, and I've had some midnight sessions with my CTO just talking about this. We also talk to enterprises about this, by the way, and the reality is that a lot of these enterprises are still stuck in very old systems. If they want agents to complete work within their systems, you'll need computer-use capability, these agents actually logging into an employee's computer and getting the work done, because transitioning from the current legacy systems to something completely new that has all these APIs or MCP connections is harder than just having them use the computer. So I think what's going to happen is an in-between step where you combine tool use and computer use, until you maybe start bringing much more advanced systems into the enterprise overall and getting the legacy stuff out of there.

[00:30:19] Robin Harbort: In those late-night talks, did you also think about the far future? The system of record is now probably one of those old legacy systems. Will this all just be the context window of an AI?

[00:30:33] Ingmar Klein: Only if you can have context windows that are that large?

[00:30:40] Robin Harbort: Yes.

[00:30:42] Jens Lapinski: So basically what you're saying is it all depends on the size of the context window.

[00:30:44] Ingmar Klein: Yeah, it does. And I'm not sure. Currently the context windows are not large enough to support that, but there's some good research being done there. For instance, the company Magic is doing great work in creating much larger context windows and enabling reasoning over those much larger context windows.

[00:31:07] Jens Lapinski: That would be a capability jump, because that's not just a function of cost, that's a substantial capability increase. I agree with that, I think that's correct. Very cool. Basically what you're saying is: look, we can do the interviewing, we can do it at scale, 24/7, super consistently. We can now use humans to train the computers to become much better, we can do this consistently, we can ramp this up. We can do the training inside environments where there will be even more machine input, because there is scoring of the things that the humans do, and vice versa. And then, if all of that comes to pass, the context window size will determine the extent to which the machines can really replace what, and how quickly, and that's one of the limitations right now. Certainly, that I agree with. Okay. Very cool.

[00:32:04] Ingmar Klein: When we're thinking this through, this is all just on a technical level, right? What will take the most time, obviously, is people actually bringing this into organizations and making it useful, because when you talk to enterprises right now, the reality is a lot of them are still stuck in legacy systems, and AI agents taking over significant processes or workflows within the organization is far in the future.

[00:32:29] Jens Lapinski: So then it's just all about adoption.

[00:32:31] Ingmar Klein: Yeah, exactly. It's evolving so quickly that I would be confident that by 2030 you have enough capability to do very complicated processes. So, agreed. I think the major thing right now is just the adoption.

[00:32:47] Jens Lapinski: What's the number one limiting step on the adoption side? If you could wave your magic wand, what would have to go away? What's the number one thing that's blocking it, in the US, in Germany, or somewhere else? What's the biggest hindrance to adoption, do you think?

[00:33:06] Ingmar Klein: So, culturally there might be some hurdles, I think, also in Europe in terms of the openness to it. And then I think also the talent, even technical talent, that is able to deploy these systems. That's why you see very heavy pushes by these labs to use their forward deployed engineers to help companies adopt it. And on that note: all of this stuff, even RL environments, sounds very cool, like secretive stuff that's happening. But I think anyone who's an engineer, if they just sit down, will probably be able to quickly understand what RL environments are and what you can do with them. Then you get past these seemingly complicated things, which are actually not that complicated, at least on a basic level, to the actual problems, which come back to a lot of the operational things again. So I think just teaching people more of these things, up-skilling them faster to have this technical capability, would probably accelerate the deployment a lot more.

[00:34:19] Jens Lapinski: So it's almost like a new class of employee.

[00:34:20] Ingmar Klein: Yes.

[00:34:20] Jens Lapinski: So it's a type of new job role inside larger organizations. Interesting. The programmers of tomorrow...

[00:34:30] Ingmar Klein: Yeah. 

[00:34:32] Jens Lapinski: They program the machine. Cool.

[00:34:35] Ingmar Klein: Mhm.

[00:34:36] Jens Lapinski: Right. That is definitely the Cutting Edge. I mean, this is basically where organizations will fundamentally change.

[00:34:43] Ingmar Klein: I think so. Yeah.

[00:34:45] Robin Harbort: Thanks for being here. If you enjoyed this episode, support us by leaving a follow and sharing the Cutting Edge AI podcast. See you next time.