AIAW Podcast

E115 - Revolutionizing Hiring in an Age of AI - Alexandra Davies & Stefan Wendin

January 26, 2024 Hyperight, Season 8 Episode 1

Ever wondered what happens when the modern job market collides with the complexities of AI? Get set for an insightful ride with Alexandra Davies and Stefan Wendin as we unpack this fusion. From the quirky trend of parents attending job interviews with their college-grad kids to the cutting-edge role of AI in talent acquisition, this episode is a goldmine of thought-provoking discussions and expert insights. Laugh, learn, and possibly rethink your approach to technology, recruitment, and privacy, as we cover everything from generational divides to privacy concerns in an AI-driven world.
Tune in, and let's rethink the future of work together.

Follow us on YouTube: https://www.youtube.com/@aiawpodcast

Speaker 1:

and deep fakes and these things. So maybe that's actually part of that weird understanding of reality.

Speaker 2:

What was your thesis about, again? Did you have a title? Or what was the... oh?

Speaker 1:

That's long ago. Let's see, I'd have to find it. There was something with limitation and perception, something, something. Let me see if I can find it.

Speaker 3:

No, but we got into this topic based on Alex's view. Did you see, did you hear about the survey in the US on recruitment? You know, on how graduates coming straight out of university or college now act when entering the workforce? And to a Swede, the joke was... this is a joke.

Speaker 5:

Yes, it's like: have you heard this one? The college students...

Speaker 3:

They bring their parents to the recruitment interview.

Speaker 2:

And they're like, what? So what was this, Alex? Was it some kind of survey? Or how did you come across this news?

Speaker 4:

Well, it was actually a friend that sent it to me on Instagram.

Speaker 4:

It was like a clip, you know, where they make a mockery out of this, and I sort of picked up on it and I was like, is this true? And I started digging, and I found the survey and several articles about it. So I was like, OK, let's dive deeper down this rabbit hole. So I did. For me it's not only about this kind of behavior. It was one in every five college graduates that actually brought their parent into the interview; not as a support person outside the room, but actually bringing them in on the interview.

Speaker 5:

It's very weird. To a Swede, this is weird.

Speaker 4:

It's weird. But I started thinking: what implications does this have moving forward? This is Gen Z, the next generation of the workforce, and we're bringing them in and onboarding them into organizations. What are the consequences, looking at these individuals, for each and every organization that's looking to employ younger professionals? So that was very interesting for me, even though it was funny.

Speaker 3:

And one of the key outcomes you mentioned around the survey was: if you bring your parents to the interview, you're acting like a baby, and we don't hire babies. So one of the core impacts is that you're shooting yourself in the foot. And that is even in the weirdness of America; even there, they get that right.

Speaker 2:

The question is: is it the kids or the parents that are...?

Speaker 5:

That's a good one. Who is curling who?

Speaker 4:

Chicken or the egg.

Speaker 5:

Yeah, exactly.

Speaker 2:

So the parents should be the grownups, so to speak, and realize that this is potentially a bad thing, you would imagine.

Speaker 3:

If my kid asked me that, or if your kid asked you that, would you say "grow up", or would you say "of course"?

Speaker 4:

I would see it as a personal failure. I mean, exactly. If my child was like, "Mum, can you come with me to an interview?"

Speaker 1:

But what if? What if it's like this: mom, dad, whatever (I don't have any kids), you're so freaking tired of your kid not getting a job. You just want to help.

Speaker 5:

That's the only logical explanation for this. There's a movie about this. There is a great, funny comedy.

Speaker 3:

No, another one, where they hire Jennifer Aniston as the beautiful girlfriend in order to lure the 35-year-old bachelor living at home with mom to get the fuck out of the house. Yeah, it's beautiful. Who is it? Jennifer Aniston. And Matthew McConaughey is the bachelor, of course.

Speaker 5:

Maybe you cracked it. This is not about the kids. It's about the parents kicking their kids out.

Speaker 1:

I think that's the only logical explanation. But again, people aren't really logical or rational, right? So maybe we're just sitting here rationalizing.

Speaker 3:

But one of the things you said, Alex, is: OK, maybe this is a backdrop of the pandemic, a backdrop of Gen Z growing up now and having awkward social skills, you know, in their most sensitive, prime years, when they're supposed to graduate from being a kid to being an adult, and they're basically lacking some of these fundamental interactive skills and confidence, I guess.

Speaker 4:

Exactly.

Speaker 3:

So that that is. That is another angle on this right.

Speaker 4:

Yeah, definitely.

Speaker 3:

Weird, weird. And then, of course, we could take that into the bigger picture of weirdness that is sort of swirling around the US right now, from the vote, to the curling we're talking about here, to... I don't know.

Speaker 2:

And before we jump into all of those rabbit holes: welcome, Alexandra Davies and Stefan Wendin. It's a pleasure to have you here. It's actually the first episode this year, right?

Speaker 5:

Yeah, so this is the kick off, the kick off, the kick off.

Speaker 1:

It's going to be an extra party.

Speaker 3:

This is like, we need to start with a cracker. Let's talk about talent. A firecracker.

Speaker 2:

And let's start by just giving a quick background about who you are. If we start with you, Alexandra: let's see if I get the background correct, but you can describe yourself. You're the founder of Next Gen Talent.

Speaker 4:

Yes.

Speaker 2:

Right, and I guess an expert in recruitment? Yes. And also, as I saw, you wrote something about using AI to unlock talent beyond resumes.

Speaker 4:

Yes.

Speaker 2:

That is something I'm really looking forward to hearing more about.

Speaker 4:

Absolutely.

Speaker 2:

And before we jump into yours, I'd just like to quickly welcome Stefan too. I'm not sure what's the proper way to describe your background. AI innovation officer?

Speaker 5:

Not the case, not the case, that's it.

Speaker 2:

No, another way.

Speaker 5:

But actually, not the case: Neo4j prime evangelist. Is that the same?

Speaker 1:

No, but I think it's fun, though. It must have been a year ago when I was here. I was leaving Neo4j at the time, promised myself to stay away from work for a year and focus on reading up on everything new in AI, which I did, working with startups, thinking about what is actually going on instead of just rushing for the next thing. So I guess that's what I do: I help companies and organizations.

Speaker 2:

Could it be something like, here's a pretty name: AI-driven product innovation?

Speaker 1:

Yeah, something like that in that space.

Speaker 3:

Yeah, I'm going to put a little jingle on this. I want to test it on you.

Speaker 1:

Okay. AI... Is this whole thing about me getting a job, so everybody gets rid of me? Like the brand speaking for someone else.

Speaker 5:

This is the curling part. This is the curling parent in action, with the professional team.

Speaker 1:

Professional team.

Speaker 3:

Yeah, I leveled up. But there is a word someone sometimes uses: when we are succeeding with AI or data, and when we are scaling that up, it's a socio-technical problem. So it's both about the tech stack and all that, but ultimately it's about how you organize it, from the practices in the engineering all the way up to what you want to do with your innovation. And you are the kind of guy who grabs the tech but is really five-dimensional in the social part, you understand me? That's your real sort of superpower.

Speaker 1:

Yeah, no, I think, is that a. Yeah.

Speaker 2:

And before we jump into that: I think we have a great anchor here when we move more into your background, Stefan. But before we jump into that, let's hear more about who Alexandra Davies really is. How would you describe your background? What's your interest? What's your passion? How would you describe yourself?

Speaker 4:

Tricky questions. But yeah, I've been running my own business within the talent industry, so to speak, for the last two years. I was called a bit crazy, or a nutter, because I was three months pregnant when I sort of jumped ship; I didn't want to go back once I had my child. I've been working for the last 15 years with recruitment and talent in some sense. I've been building functions, recruitment companies, you name it, from scratch more or less, and that's what I love doing. I love building something that is more efficient than what they currently have, or, yeah, basically just doing something better, with a twist.

Speaker 2:

Yeah, yeah, I mean it's awesome, and we can just say also that, of course, the theme of today is more about the AI enabled future and specifically related to talent acquisition as well.

Speaker 3:

And I did a post on that: we're going to talk about the paradox today. Okay, the paradox, the chicken and the egg. You can start with: oh, we want AI-augmented talent and recruitment. Oh, what does that look like? And we can go into the rabbit hole of what those capabilities are.

Speaker 4:

Absolutely.

Speaker 3:

But it kind of starts with: well, first you need to know how to organize AI and recruit the right AI talent to build that for you, or organize that for you. That's the twist, the chicken and the egg: you need the normal talent process to get the right AI talent, in order to get to the AI-augmented talent process.

Speaker 2:

Correct, that's the paradox. Yes. Another awesome topic to add to the list.

Speaker 3:

There are several topics hidden in there. Topic machine. That's the topic machine.

Speaker 2:

Perhaps just give a bit more about the background. What's your educational or how did you come into the recruitment business?

Speaker 4:

Yeah, so this is a plot twist. I actually studied law.

Speaker 2:

Oh yeah.

Speaker 4:

I didn't like it at all. Good for you, it wasn't for me. And I sort of discovered this two years into my studies and I was just like, okay, just, I don't have a backup plan, what am I supposed to do? So I just carried on through it and then I came out and I sort of like, what do I want to do? What do I have a passion for? I was like, first of all, I like people and I want to work in an industry where you are able to solve problems and I want to be able to build my own brand.

Speaker 4:

And the recruitment industry back then (this was in 2008, 2009) was a bit underdeveloped. LinkedIn wasn't really around to start off with; that kind of gives you a sense of where we were at. And I kind of liked that. And I had a thought in my head that I wanted to set a gold standard in some sense. I wanted people to feel that if you come to me and work with me, you know it's a gold standard, and you're going to come away from that process, or from working with me, feeling as if you don't want to go to anybody else, or you remember me.

Speaker 4:

So that's been in the back of my head throughout these years.

Speaker 2:

You mentioned LinkedIn as well. I think, like many people (including myself for sure, because I've been recruiting for 15 years as well, or more), it's always been surprising to me how poor the tooling around recruitment is. And even with LinkedIn: LinkedIn is great for many purposes, but it's surprising how poor the support is for both the employee and the employer. And even if you take all the top branding tooling that you do have for recruitment, I think it's also lacking basic AI technology. I actually built some of my own because I was missing it at the time. I think it's surprising how poor the tooling is still, to this day. Would you agree?

Speaker 4:

100%.

Speaker 2:

Awesome, so I add that as a topic, like the tooling around recruitment.

Speaker 3:

Tooling around recruitment.

Speaker 4:

That's actually the sort of banter we went through the first time we met, and I was like, oh my God, he has so much knowledge that we could apply to this industry. Tell us about that anecdote, when you met and had that banter around data and recruitment.

Speaker 3:

And what was the setting? I assume at a party, with the... No, no, no.

Speaker 1:

There was no party. It was actually...

Speaker 4:

Party for two, party for two. Good tagline.

Speaker 1:

Maybe that's a new podcast we can run. Go ahead and produce it for us. Party for two.

Speaker 5:

Very good.

Speaker 1:

No, it actually came out of LinkedIn. I don't know what I commented, but a mutual friend commented on something, and then: oh, but you should talk with Alexandra. And I'm like, yeah, let's do it then. And then we booked a meeting and hit it off directly. And I think what I really appreciate with her is that super fast thinking, but also with the heart in exactly the right place: understanding what it is that drives people, what it is that drives talent, what is important. It's not the stack of keywords, it's not this crappy whatever BS that these systems, or persons even, focus very much on.

Speaker 3:

The CV machinery.

Speaker 1:

Exactly, yeah, yeah, that's another topic that we need to unpack.

Speaker 3:

And we need to kill the CV.

Speaker 2:

I had a topic about resumes in some way, both how to do it or how not to do it or something.

Speaker 3:

What is wrong with the CV type recruitment?

Speaker 1:

Yeah, yeah. No, but I think we just hit it off straight away. It felt like the time was flying, as it does when you meet someone and really connect on the topic, and it's also a topic that I'm super passionate about. I didn't really care about it before; I never applied for a single job. But then I started to look into this, and I was like: this is literally catastrophic.

Speaker 5:

This is way worse than the kids bringing their parents to interviews.

Speaker 1:

This is like the worst untransformed area ever. This is worse than finance. This is worse than governments. This is worse than anything.

Speaker 3:

HR is the last bastion of analog.

Speaker 1:

It is, and it's also at a time when it's most needed. We talk about supercomputers and blah, blah, blah, but then you can't even freaking recruit the right people to run them. Exactly. No, you can't even find them; you don't know how to find them.

Speaker 2:

Every company needs it. And the way to do the matchmaking between the employer and the employee and the tooling around it and the best practices is really bad.

Speaker 3:

And to put this in a fun context, this whole theme oh no.

Speaker 3:

No, no, no.

Speaker 3:

I stole a very simple 10-20-70 rule from Boston Consulting Group, and I think we even talked about it with Christopher O'Hogary when he was here from Telia, Division X. And there's a quote I've used at conferences where I was pushing my ideas to HR people, when I basically went up on stage and said, as a provocative quote: do you realize that for the AI revolution, the HR department is more important than the IT department?

Speaker 3:

And the 10, 20, 70 rule is that to succeed with AI is 10% about getting the algorithm right, 20% about getting the data and the whole product stack right and 70% around getting the organization practices, people and talent right. So it's a people problem and ways of working problem over a technology stack problem and an algorithm problem. And this was like a famous storytelling approach of BCG and I stole it and used that in an HR analytics conference and I think that is really telling what we are talking about right now. But how will we ever succeed with the AI and be on the right side of the AI divide and all that if we don't even know how to get the right people in the right way?

Speaker 4:

No, and the funny thing is, I actually read, before I came here, "Svenskarna och internet" (The Swedes and the Internet). I read through it quickly because I remembered it said something about AI and how we're adopting it in the future, and I started looking through the numbers on the skepticism about AI: a third of Swedes are very skeptical and think AI would have a negative impact on the future. And when they broke it down... I'm going to talk about gender here, because it's very relevant, because HR is a department which is predominantly female, and the female respondents were more skeptical towards AI than the male respondents. So imagine, then, people within HR departments being skeptical about using AI. Well, no wonder where we're at today. It's data backing what we're all feeling and sensing.

Speaker 3:

Interesting.

Speaker 1:

But it reminds me slightly of a topic not related to HR: this whole idea that the people that would benefit the most from adopting it are also the ones that will lose the most. Meaning, if you look back on communication in the early days of social media, the ones at agencies that would benefit the most from learning how to use data in their marketing (the creative director, etc., the one sitting in the corner office), that person would then have to give up the corner office. You're not the owner of what's good anymore. And I think this is equally true here. You are not the owner of that anymore. If you have a system that's doing it, who are they then? Who would they be?

Speaker 3:

So you think this is part of the core root challenges here that we need to address correctly.

Speaker 1:

No, I'm sort of thinking maybe this resistance, or this kind of skepticism... yeah, kind of skepticism. It's not really moving. Inertia.

Speaker 3:

It doesn't really happen.

Speaker 1:

Maybe this is because of this, because they actually have the most to lose on this.

Speaker 3:

And what do you mean, lose? In terms of the corner office? In terms of, I mean, being replaced? Being replaced.

Speaker 1:

I mean, if you're sitting there collecting CVs, scanning keywords, doing these things... I don't know, it's not like it's hard to replace or build that. And how much is... even Anders said it:

Speaker 3:

I built some of my own. But for me, there is a huge difference between that kind of HR and what is truly people and talent management, which is, in my opinion, something completely different. So if they have sort of backed into that administrative corner, the way out of it is, of course, to retake the seat at the C-level table around talent management, which is then super complex and super interesting, yeah.

Speaker 2:

So, I mean, they are not playing... Before we jump into any more rabbit holes, we need to do an introduction as well. Damn it.

Speaker 1:

Rabbit.

Speaker 2:

Of Stefan Wendin. So please, who is Stefan Wendin? How would you describe your background? I'm looking forward to hearing this.

Speaker 1:

I don't Stick or hide.

Speaker 5:

He can't do it, he can't.

Speaker 1:

I can't do it, that's the thing. Can somebody else do it for me? I think for me, the biggest thing during my entire career has been running in between technology and the business side, or the people side. For me, technology alone is just technology. It's fun for me to build, play and do these things. So in my early days, when we were producing music (we have a classic memory from when I helped produce The Latin Kings, Swedish hip hop, early on), half of the time I wanted to show them my new innovation that I'd built, which was an organ made of vacuum cleaners. They weren't impressed by it, and neither should I have been, but that's also me figuring out my nerdiness on these things, right? But then I learned: oh, it needs to sort out some problem for someone.

Speaker 3:

And you still have usefulness and value.

Speaker 1:

Exactly.

Speaker 3:

Not just building shit for building shit.

Speaker 5:

Vacuum cleaner organ! Ooh, this is Chindōgu.

Speaker 1:

Yeah, have you heard about Chindōgu? No, tell me more about it. He's going to freak out now.

Speaker 5:

This introduction is going... this is so risk-free. Chindōgu is unusual inventions. Oh yeah, and this is a big thing in Japan.

Speaker 3:

So if you Google Chindōgu, you get all the craziest inventions, which obviously are completely useless.

Speaker 1:

I saw one really good one. It was a walking tomato feeder: as you walk, it feeds you tomatoes.

Speaker 5:

I mean, it's a pretty awesome thing if you like tomatoes. If that's the situation.

Speaker 1:

If that's your thing.

Speaker 5:

Oh, sorry, sorry.

Speaker 1:

I think, coming back to the introduction then. So I learned early on Bring it back, bring it back, hold it in, hold it in. Come here.

Speaker 5:

What was in this Coke that you gave me? No.

Speaker 1:

Coke! Coke, and we have to do a break. No, please, no. But I think I learned early on that it needs to be applied. It needs to solve something, right? And I think that's what I spend the majority of my time doing: understanding how I could use technology to solve things for people. And in doing so, I learned that every single freaking one of us is super lazy. So if I can solve it easier and faster and give you more value, you're going to give me money, or something of value that I can then exchange for money. Meaning that, ah, there was kind of an aha moment here. Because early on, every single recruiter, the talent partners, told me: Stefan, you need to either stay in the technology lane or go into the business strategy lane. You cannot be in both. It was very confusing for them, man, because it didn't fit the box, right? It didn't fit the table. Which table should I store you in?

Speaker 3:

But this is another parking on the list, Anders. Now we're getting to the point where, you know, the old-school fitting of him into boxes is completely one of the bigger problems.

Speaker 1:

Data structure actually matters for the future of AI. This is a...

Speaker 3:

To think like that when you recruit is problematic. Or even if you have those boxes and now you're trying to recruit, they won't give you the answers you need.

Speaker 1:

But let's park that, let's park.

Speaker 3:

that. Anders, sorry.

Speaker 2:

Don't. I can't even summarize it.

Speaker 1:

I don't even know what this is. Okay.

Speaker 2:

Yeah, okay.

Speaker 1:

Oh my God, please carry on. Yeah, carry on. So, without further ado, let's carry on, back from the break. No, but what I really do is figure out how to use things, right, but also, equally, figure out what happens to us when we use things. So I build technology for people to use to solve problems, but that in turn changes the way we act. And I think on this topic, for example, there's the kind of ChatGPT madness of writing, right? We've all seen it: "In the realm of blah, blah, blah." That's the classical opening, and you know: oh yeah, here comes the content from GPT. We can totally see that now. But I'm pretty sure we will also start acting on it; maybe not on that cheesy line, but we will pick up on these things, because we are pattern machines as humans, right? We mimic what we see around us, and if that's what we see, that's what we do. Still, I would try to put you in a box.

Speaker 1:

Oh yeah, he's had enough now. Red lamp.

Speaker 2:

I'm just trying to find a way to categorize you, to tag you, to put you in the... I don't know. No, but is it more change management, would you say? Is it more business innovation, product innovation?

Speaker 4:

Make it understandable, oh God.

Speaker 1:

So, very simply, I solve problems using technology, making money for people.

Speaker 2:

Making money solve problems.

Speaker 1:

At the end, that's what I do, right? I take things and I say: oh, but this, with this, with this, solves this. In the aggregate it's all this, and in the aggregate of that you create a new product. So I'm equally at home at the kind of frog-perspective ground level, but equally confident going up to 10,000 feet or whatever. So I think my greatest skill is actually that kind of untangling of spaghetti.

Speaker 2:

So if one would say... sometimes the role is "the untangling-spaghetti guy". I mean, I think it's awesome to have these people that actually do connect different areas, and I think you are a perfect example of that.

Speaker 3:

So the challenge with the guys who are connecting things is that it's hard to put them in the different boxes that they are connecting. So, you see: connector.

Speaker 2:

Yeah, you're a connector.

Speaker 1:

Oh yeah, very good. Actually, when I did one of those personality-test craps, "connector" came up. You've done one? I hate them. And I need to do them first, so I can see what I hate.

Speaker 3:

But then, if we put you down as the connector, your time at Neo4j is perfect, as far as connecting the dots. Connecting the dots: that's a meta view of why you should work at a company, connecting dots. He's the... what do you use Neo4j for? The human incarnation of it. Yeah, no, but it's a good one. And the core connected nodes are technology and the problems of business and people. Right? Cool.

Speaker 2:

Should we move to a serious topic?

Speaker 1:

I love how you also introduced a serious topic. I'm really up for a serious one.

Speaker 5:

We'll take you and you can see you're just making this.

Speaker 1:

Hey, welcome to the list machine.

Speaker 2:

Okay. But thinking about recruitment, and you've both been in the field, you for 15 years and I guess you even longer, Stefan: if you were to try to characterize a bit what has happened in the last 15, 20 years in recruitment, can you see some kind of trends? Some kind of evolution, some kind of change that has happened over the last decades? I don't know.

Speaker 1:

I can start with the last year, but I never really applied for a job, so it's very hard for me, because I wouldn't fit the box. But I think it's Alex's question, because you were sort of reflecting on when you started, with no LinkedIn.

Speaker 4:

Yeah.

Speaker 3:

And we talked before about how we could be sort of compartmentalized, you know, "you are the tech person", and I think you put it really well upstairs: some of those dogmatic views and models of how you do recruitment are simply out the window today, because we are living in the intersect, or something like that. Yeah, so if you could take us through that thinking, that maturing of a point of view on this. I think you have it.

Speaker 4:

I mean looking at this, there's nothing that's really changed, which I find really interesting throughout these years.

Speaker 2:

That's an observation in itself right, what do you say?

Speaker 3:

What do you mean, the way the industry works?

Speaker 4:

The way the industry works has looked the same throughout the years. Interesting. What's different? At least, this is my perspective. What's different is that you can do the same things we've always done, faster and bigger. Like, AI is enabling us to do keyword search, but you have a computer or a robot that does it for you. So it's still the same traditional way of doing things, on steroids.

Speaker 2:

So it's kind of a lift-and-shift moment here, where we have the same kind of processes that we had in the past; we're just doing them a bit more efficiently using technology.

Speaker 3:

And here we have an internal joke: it's digitization versus digitalization. So we have an analog way of doing things that we've always done, and we put it on steroids. Data and AI sugar-coated the analog process. And in reality now: is that the right process, when we are moving into where we are today? Exactly. Is that a summary?

Speaker 1:

Yes, I would say so. I think it's a very good summary.

Speaker 4:

Yeah, and I mean, we talked about this as well: how do we solve the problem? And I think it's got to do with actually educating people on what you're supposed to be doing with all of these tools. I'm not saying that we're manually going through CVs like I did way back when; today there's an AI doing it for you. But what it does is the analog way.

Speaker 3:

It's mimicking the analog way, faster and at scale.

Speaker 4:

Exactly, so I don't really know if that's the case.

Speaker 2:

I guess that's one of the core parts of the recruitment process.

Speaker 4:

You still have the resumes and you still have it to this day, and it's not really that different in terms of how you submit an application and then you say that you have predictive validity by having all of these psychometric tests or GMA testing and stuff like that, and don't get me started on that.

Speaker 1:

But that's for the list. That's for the list. Yes, I'll add that to the list.

Speaker 4:

Because those evaluation methods today, they are dated, or, I would say, very subjective.

Speaker 2:

You're speaking of, like, psychometric testing.

Speaker 4:

Yes, okay.

Speaker 2:

I'm not adding to the list. I'm adding to the list.

Speaker 1:

I'm also going to tap into what you're thinking, and the changing frequency of things. We're hiring more people, faster, and the need for more advanced tooling is getting greater. So one of the problems here is, I think we're still stuck. I mean, take whatever they're called, Workday, SuccessFactors, whatever (sorry, SAP, and whoever built these), but they can't even manage to parse a freaking CSV file. I actually tried with one. I tried to figure out what the actual format is for this crap to be able to read it. It wasn't CSV, it wasn't any single format; nothing could make it read it. Which is like: how can you even build a system that is this bad? How can you not, in 2024, figure out how to parse a document, even if it's just a comma-separated file?
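
To underline how low the bar is here: reading a comma-separated file is a solved problem in every mainstream language. A minimal Python sketch, with a made-up file name and made-up columns purely for illustration:

```python
import csv

# Hypothetical candidate export; the file name and column names are
# invented for illustration, not taken from any specific ATS.
with open("candidates.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)  # the stdlib handles quoting, embedded commas, etc.
    for row in reader:
        print(row["name"], "-", row["current_title"])
```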

Speaker 2:

This is how we ingest resumes in some way, or how we integrate the systems.

Speaker 4:

I mean, you still have systems that are like: oh no, apply via LinkedIn. And then, when you do that, they ask you: can you fill in your CV? And it's just like, really? I just don't get it. It's beyond me.

Speaker 3:

But let's unpack, let's stay a little bit in the problem discovery phase now.

Speaker 1:

Yes.

Speaker 3:

Because I think there are... I mean, let me be provocative: why is this a problem? Because, okay, there are technological problems where we're simply working with stupid protocols and standards that we could have moved away from, ones that would have made the data of the CV much more machine-readable, so to speak.

Speaker 3:

This is one problem; this is just poor technology. But let's dig into the core problem: is it a problem that we have the old way of thinking about recruitment and are putting it on steroids? What is wrong with the underlying recruitment process? Or is there anything wrong with it at all? Maybe it's perfect. A little bit like you said: maybe it's perfect.

Speaker 1:

Are you drunk? No, not yet.

Speaker 3:

You tell me why it's not perfect.

Speaker 1:

Yeah, no. But I think the problem here is a thousandfold, right? So I think that we cannot separate the technology from the actual problem, because they're intertwined. We create systems, systems create us. That's how we think, that's how we act.

Speaker 1:

So one of the problems here is: if you have a system where I can't even input a CSV that I spent four hours formatting for you, if it's that bad, most likely it's going to be equally bad in the mental model of the person on the other side. Because if it's hard for me to put shit in, imagine receiving that shit on the other side. Imagine what type of person stays in that situation and thinks this is a good thing. It's not going to be the one I want to talk to, that's for sure. But there's equally a problem now, because then we can add smart technologies. And I did a fun thing: I scraped a little bit of LinkedIn and then I looked for prompt parts in job descriptions. Oh my freaking God, they didn't even remove the part of their kind of recruitment prompt thing. So they have a prompt structure which is even in the ad, which is like, come on.

Speaker 3:

So you mean, it's so obvious they used ChatGPT to write it, and the prompt is still there.

Speaker 1:

This is nothing new. This is the same thing that happened before, if you think about it. So how is it usually, when you need to recruit people, and you have to recruit shitloads of people, maybe you're in a hyper-growth company?

Speaker 3:

The prompt is a template that you fill in.

Speaker 1:

Yeah, and nobody does it the same. Like: oh, we need to hire one, who should we...? Who is that? And then you scrape some shit together and put it there. And then that poor recruiter tries to write it in a nice, neatly structured way, which then goes out with a list where you check for keywords. But nobody really cared about that list; they just copy-pasted an old one. And now we do that, but we do it with a computer, so it goes even faster. So we've scaled up...

Speaker 3:

shit, yeah, shit in, shit out.

Speaker 1:

And now we heated it up and put it in a fan like this. Put that on the list, if you want.

Speaker 4:

I'm not going to, but let's assume that, for sure, there is a problem.

Speaker 2:

We have not seen innovation in recruitment for a long time. What can we do, then? If we move on: can AI be of some assistance here? Can we find a way to improve the system?

Speaker 1:

Oh, this is fun.

Speaker 2:

This is all you.

Speaker 1:

Yeah, well, we can talk about it together; I guess that's why we're here. But I think one part is that people should be able to do this better, and then nothing happened. And then Alexandra shared some stuff with me that I could check out. I'm not going to put out any names, but there's a family of GPT recruiters. Again, the problem is that we do not understand.

Speaker 2:

When you say GPT recruiters, what do you mean?

Speaker 1:

A kind of ChatGPT equivalent. So it's the classical chat interface: upload whatever job description.

Speaker 2:

So it's a service that you have and it has a language model in it.

Speaker 1:

Yes. So it looks extremely good on the surface. You get it back, it's scored and everything; it looks super impressive. But I'm like: if this is built on a transformer architecture, how can it take care of the problem of in- and out-of-distribution data? Because if you're looking for a job, what is the first thing you would likely do? You will go to your LinkedIn and you will update your profile, meaning that the training data set is not the one representing reality.

Speaker 1:

Right. I even tried this with four of them. I took persons from the company, took their LinkedIn profiles, and we created exact job descriptions, down to details like: needs to be working at X, having a podcast called the AI After Work podcast, and needs to know these people that run a conference over here. It's that detailed. Which is the problem of using such a technology without, for example, a RAG pipeline or whatever, to at least have some sort of accuracy. So again, we believe we have a solution to something, but we're using the technology, I would argue, in the wrong way.
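
For readers wondering what the RAG remark refers to, here is a minimal sketch of the retrieval idea: ground the model's answer in actual documents instead of its (possibly stale) training data. The toy bag-of-words "embedding" below stands in for a real embedding model, and all names and profiles are invented; this is not how any of the products mentioned actually work.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real pipeline would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented grounding corpus: up-to-date candidate profiles.
profiles = {
    "anna": "runs a podcast about AI after work and organizes a data conference",
    "bjorn": "senior SQL developer who maintains ETL scripts",
}

job = "must run a podcast about AI and know the conference scene"
job_vec = embed(job)

# Retrieval step: rank profiles by similarity to the job description.
ranked = sorted(profiles, key=lambda p: cosine(job_vec, embed(profiles[p])), reverse=True)

# Put the retrieved facts into the prompt, so the model scores against
# real documents rather than whatever it memorized during training.
context = profiles[ranked[0]]
prompt = f"Job ad:\n{job}\n\nRetrieved profile:\n{context}\n\nHow well does this candidate match?"
print(prompt)
```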

Speaker 2:

In the wrong way. So it's lift and shift again: you just use AI, in this case, to scale up the old process.

Speaker 1:

Yeah, the old kind of thinking. And it looks super good, it looks super neat, but at the end it's just that, with golden dust on top.

Speaker 4:

And, to make it a bit more understandable and relatable: as a recruiter, what you basically do when you search for candidates is a keyword search, on LinkedIn, or GitHub, or any other platform where you can potentially find candidates, right? And then you get a robot, AI, whatever you call it, doing that for you. So that means it's limited to my knowledge of which keywords to put in. Yeah, that's the crucial part in all of this, and that's how the recruitment industry has looked for the last 15 years.

Speaker 3:

So the recruitment industry actually boils down, to a large degree, to this fundamental dogmatic mechanism. Yes. And why is that a problem? Because (you said it) it puts all the effort of this keyword search on the most junior job in the whole firm. Because the person who just starts in this job is told: OK, you're not a recruiter yet, you need to do the garbage shit. You will be the keyword person.

Speaker 3:

Then you will do this and regardless if you use a computer or not, it's the least experienced person putting in the keywords. Yeah, that is sort of hardcore.

Speaker 4:

That can be a problem as well, but due to the fact that we don't know anything else, people are like hooray, I can do my job much faster, I can get more results and I can give this to my hiring manager and they will have 10 CVs. Instead of waiting two weeks or two months for me to find something, I could find something in a day.

Speaker 3:

Yeah. So instead of going for effectiveness, we are going for efficiency, without thinking about are we solving the right problem?

Speaker 1:

Exactly. And I think we actually believe we're solving the right problem; I think that's where the problem lies. Instead, look at how you would think. Every single one of us knows: if I say "a talented person within X", you will think of a person, right? It's very simple; we know this as humans. If we start unpacking what the foundation of that knowledge is, why I came to that conclusion, it's going to be a super tight network of different things. It will be the persons, the roles of our time, the influences: how they talk, how they write, how they think, what they have built, all of them, their friends, their friends' friends.

Speaker 1:

To quote the classical study by Dr. Fowler: the data about your friends' friends is more predictive than any of the data about you. It's one of my favorite studies. I think it's actually 1.7 steps out, so not really two persons, but I can't cut you in half.

Speaker 3:

And this is at one and a half persons out: if I look at that data, I get more about you than you yourself would have given me.

Speaker 1:

Yeah, which is kind of bizarre, and that's also how I guess we ended up here, because that's how you connect with people. But if we think of this, a system would rather benefit from lots of weak signals. Pretty much how I would solve, for example, a fraud investigation for a bank in my time at Neo4j. It's going to be very easy to find the highly, 100% fraudulent ones, right? That's the very bad criminal.

Speaker 3:

These are the lazy, stupid criminals.

Speaker 1:

Me using your social security number: obviously fraud, you cannot have two of them. But instead, think of a person who shows up very close to bad activity several times. Not necessarily doing anything, but they keep showing up. It's the compounding effect of weak signals that you're looking at. That's what I'm thinking, and I think we could do this with talent. What are you thinking?
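
One way to make "compounding weak signals" concrete (our own illustration of the general idea, not the actual fraud model described) is a noisy-or combination: each signal alone is weak, but several of them together become hard to ignore.

```python
def compound(signals: list[float]) -> float:
    """Noisy-or: probability that at least one weak signal is real,
    assuming the signals are independent (a strong assumption)."""
    p_none = 1.0
    for p in signals:
        p_none *= 1.0 - p
    return 1.0 - p_none

# Three 10% signals already compound to about 27%.
print(compound([0.1, 0.1, 0.1]))  # 0.271
```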

Speaker 2:

I'm thinking about the lovely quote you had, that the people around you are more predictive than the data you actually have about yourself. And when I think about that, I think directly about song recommendations. When you make a song recommendation, the normal approach is collaborative filtering, which doesn't look at the song itself at all. It doesn't look at the content, it doesn't look at the artist, it doesn't look at the title of the song. The only thing it looks at is what other people, similar to yourself, have listened to. So that's the collaborative filtering part, and I'm kind of thinking: why don't we have more of these kinds of recommender systems for recruitment purposes?
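
A minimal sketch of that collaborative-filtering idea carried over to recruitment, with candidates in place of listeners and jobs in place of songs; the data and the Jaccard scoring are toy assumptions for illustration only.

```python
# Which candidates turned out to fit which jobs (toy history).
history = {
    "cand_a": {"job1", "job2"},
    "cand_b": {"job1", "job2", "job3"},
    "cand_c": {"job4"},
}

def similarity(u: str, v: str) -> float:
    # Jaccard overlap of job histories; note we never look at the job
    # ad's content, only at co-occurrence, exactly as with songs.
    a, b = history[u], history[v]
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(u: str) -> list[str]:
    scores: dict[str, float] = {}
    for v in history:
        if v == u:
            continue
        w = similarity(u, v)
        if w == 0.0:
            continue
        # Jobs that similar candidates fit, weighted by how similar they are.
        for job in history[v] - history[u]:
            scores[job] = scores.get(job, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("cand_a"))  # ['job3']: cand_b overlaps with cand_a and fits job3
```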

Speaker 4:

They think they do. That's the problem with these rankings and stuff that's going on. But I haven't seen any.

Speaker 3:

But the interesting part, then, is you need to unpack it and you need to think about different techniques. And you need to really think about solving the real problem and then you need to really unlearn, to relearn around the topic. You need to go down to first principles, to quote Elon Musk.

Speaker 2:

Got it in there First time this year, oh no.

Speaker 1:

The Elon count Should have brought it down here, but.

Speaker 2:

I think there are, as you say, so many old types of techniques that they still use to find people. And I guess, if we start to break it down into a couple of problems (since recruitment is actually very important for me personally, and I've been thinking a bit about it): for one, it's the matching problem, right? You have a job ad and you have people, and you need to match them and find out: is this the person that fits this job or not?

Speaker 2:

So it's a matching problem. It's like a Tinder problem between the employee and the employer. And the second problem is really the quality problem: even when you have a set of potential candidates, how can you rank them in some way? And then you have a lot of things to do there; you have interview techniques or other ways to do that. But at least these two, I think, are two core issues: for one, match it, and secondly, rank it. Would you say these are at least two of the major problems that we need to solve somehow?

Speaker 1:

Yeah. I think so.

Speaker 4:

I mean, the matching part for me is where we need to start. Because, given the situation today, we don't know which roles are going to be out there, needed by an organization in, like, two years' time or so, and that's the big issue with all of this as well. How do you predict future capabilities for a role that doesn't exist?

Speaker 2:

Right.

Speaker 4:

And that's where this predictive validity can come in and actually give you some kind of insight: this person, given their competencies today, will probably perform well in whatever role.

Speaker 3:

So the matching problem is a massive one, because it's about matching in the short term, medium term and long term. And then, of course, there's basic CV recruitment on keywords. You know, we joked about it: you need to recruit an attitude.

Speaker 5:

Or a talent. Whatever the hell that is.

Speaker 3:

But this is the core problem: how do you go about that thinking? How do you think about that even today? Because you need to solve this today.

Speaker 4:

Yeah.

Speaker 3:

The matching problem.

Speaker 4:

Yeah, I'm looking at this guy. No, no, no, I want to hear yours.

Speaker 3:

You know, how do you tackle that today? Because you need to tackle this on a daily basis.

Speaker 4:

Yes, we kind of do. I mean, looking at today, we use a lot of this GMA testing, or psychometrics.

Speaker 2:

Should we move there to this topic?

Speaker 4:

Shall we go there? Yeah.

Speaker 1:

Can I go to the bathroom then? Yeah, oh sure. No, I'm joking, I just wasn't interested in the topic. Sorry for my bad sense of humor, everyone.

Speaker 2:

Sorry for my bad sense of humor. I didn't understand it.

Speaker 5:

The lack of humor, bad sense of humor.

Speaker 2:

But I guess that moves more into the ranking issue, you know, how to rate people. Absolutely. And what's your...? There are a number of different tests; you can do standardized tests and whatnot. Can you just give a quick description of your thoughts about them? I mean, with psychometric tests, is there no value at all?

Speaker 4:

Okay, I can tell you about my experience and why I have a sort of negative feeling, or thought, about them. Throughout my years doing all of this psychometric testing, there are predominantly three things that pop out. The first one is, a lot of people feel: oh, I thought you were looking for these characteristics, so that's what I answered.

Speaker 3:

I went into a role and tried to imagine how I should answer.

Speaker 4:

And that's problem number one. Problem number two is basically looking at the logical part, because that's also a part in all of this. You can train yourself to get better at these tests.

Speaker 3:

Like the IQ test. The IQ test part yes.

Speaker 4:

I've done it several times.

Speaker 2:

So, there's a rabbit hole there, but I'll bite my tongue for now.

Speaker 4:

Yeah, you can. But it's based on my experience, because I've done a lot of these tests, and I was like: I wonder if I can train myself to become better at these. And I did.

Speaker 1:

I was like, top 1%. I mean, 99% of these companies selling these tests also offer training so you can get better at the test. Which means, literally, if you look at what you're actually validating, you're validating how desperate a person is to pay to learn to do a test to get a job. They have some merit, but maybe what they're measuring is actually submissiveness, and that's what they're looking for. And now I get why I never get any job.

Speaker 2:

I think there are, you know, different types of tests.

Speaker 4:

Some of them are horribly bad and you can train for them, but there are some... I mean, I'm talking about the general mass here. And the third part is sort of what I think you're talking about: I need to evaluate the candidate, I have to go through the results, which means I'm putting all of my bias into this. I mean, there are 188 cognitive biases. To think that I can evaluate a candidate and not be biased, because I'm trained in the system, is just nonsense. So that's the third issue, and then there are a couple of other issues as well. But I mean, we can start with those three.

Speaker 3:

But, but, but: of course these approaches have evolved over time because we couldn't find better practical ways to do matching, right? So it's a bit of a plague-or-cholera choice, because the alternative is not doing any matching at all, or doing something even more stupid.

Speaker 2:

So they've had usefulness and merit, and we've been challenged to figure out a better way to do these matching problems, or ranking, or whatever. So, ranking, for one: these kinds of standardized tests test some kind of g factor, as it's now called. And that's... yeah, it's like the underlying intelligence, the g factor.

Speaker 5:

I never heard that before, by the way. You're thinking of something else. Let's not put that on the list. Oh no, don't. Anyway.

Speaker 1:

Anyway, we're going to go to a short break and we apologize for any inconvenience that may be caused.

Speaker 2:

IQ scores, you know, that's one estimate of the g factor, the general intelligence factor. And then, of course...

Speaker 2:

It doesn't really measure what you're actually hiring for, potentially. So it's very general, it's not specific, and it may be completely off or wrong for the job you're hiring for; but at least it's general, I mean, that could be some baseline. But then there's other stuff you can do. You can do interviews, you can give them take-home exams or homework, or you can do this kind of standing-at-the-whiteboard, live testing. What is the best way, then? If psychometric tests are not good, how should people rank people?

Speaker 4:

Over to you. How should?

Speaker 1:

people rank people.

Speaker 4:

Because we've been discussing this, and I haven't yet seen a good way to do this for the general mass. Because you tend to want to do these tests when it comes to large-scale recruitment, right? Or you do them in general, like for the final candidates, or at whichever stage of the process, and it's not tailor-made for the position, just like you said. And I haven't found another way of doing this, and that's actually where we started talking about this as well, because there needs to be another way, and I think it needs to be a very easy thing to do.

Speaker 2:

I remember, we spoke so much about saying: okay, you have one week to do this kind of take-home exam, you have to submit it in one week, and then we will review it. But it was very biased. Some people have kids at home; you know, they have so much to do. Others are completely free, have whole weekends, perhaps aren't even working at the time, and they have so much more time than other people. So it's not really a fair comparison. No. So I guess what you want to have is some kind of way to measure.

Speaker 2:

Yeah, I mean, it's hard to say but you need to measure people still right? Yeah, of course you do, but you need to have it in a way that takes as little time and effort as possible, but gives as high value as possible. And what that is, I have no idea.

Speaker 4:

No, that's why we're here. Yeah, but maybe it's also about understanding, or coming to terms with, the fact that you're just insecure, and that's why you're measuring. But you still need to rank them, right?

Speaker 2:

I don't know. How would you otherwise choose whom to hire?

Speaker 1:

No, but it depends a little bit. I think if you were to hire for something that you know about, you would immediately know if a person would work, if you hire someone you know. OK, if I take some experience from Spotify: say we have 2,000 applications, and we are going to hire 10 of them.

Speaker 2:

Yeah, what do you do? But that's one problem, right. And then you have the for those.

Speaker 1:

Actually, those tests really work in the screening-out. Give me 50, and then I will do the rest. There, I would argue, they might work. There are even scientific studies showing that they are actually good for that job; not for the final choice between two persons. And I think the problem is not in using them for the 2,000.

Speaker 1:

It's an early screener, OK. But the problematic part is that a lot of people are using it for the final round, with maybe three candidates, where it hasn't even been scientifically proven; it's actually suggested not to use it there. Yet they still use it, and I think the reason is that they're too cheap to give the 2,000 the test. So they just screen out, and then they use it almost as: no, but I did the test, therefore, if this person fails, I cannot get fired, because I have evidence. And I think that's actually a lot of how we use it, rather than to find the right person. And then the question would be: wouldn't it be better to educate the people looking for people, so they actually understand what the company is looking for, and spend more time on that? I think that would be better.

Speaker 2:

Still, some people are actually better at the job than others, right? Absolutely. And that's the future of recruitment.

Speaker 4:

There's going to be a seismic change, but the industry doesn't understand that yet: what recruiters do today is not going to look the same in a couple of years. And what it's going to be, that's the million-dollar question here. No, this is the unicorn question.

Speaker 3:

Yes, ten thousand dollar question.

Speaker 4:

No, I mean, as far as I can see it, I think we're going to shift towards needing people with the intelligence to know data, know how to handle AI, and understand it. So people within HR and TA need to upskill themselves. Because I think part of the problem in all of this is also the fact that HR, or TA (I say TA and HR together, because usually talent acquisition is a part of HR)... the underlying problem is: you're supposed to upskill entire organizations and focus on that. But what about HR? When are they upskilled, and when do you actually prioritize upskilling and reskilling them? Because, if I follow the logic right now, they're forgotten.

Speaker 3:

No, because if you're upskilling their domain competence with capabilities in data and AI, they will reinvent themselves based on their new know-how of AI, and they will have a more balanced understanding of how the data of the psychometric tests is working and not working. Yes, working and not working. Versus the Fowler example: there are other methods that you could use in order to build sharper matching. Absolutely.

Speaker 3:

But what you're saying now is: it starts with data literacy, or AI literacy, in order to figure out how to innovate.

Speaker 4:

Yes, and that's forgotten.

Speaker 2:

I guess we can make an analogy to education here, and to how people actually do homework these days, in an AI-based future where everyone has access to... Oh, this is fun. It is, because it really changed how people can do homework. And I guess the same can be argued for how you do your job in the future.

Speaker 1:

That's definitely.

Speaker 2:

So the people that have access to AI, if they know how to use it, can do their job so much better than the people that cannot.

Speaker 1:

Yes. And I mean, we tried with Copilot at a big consultancy with 20K people. In the first two weeks we saw a 60 to 65% uplift in productivity, which is literally, like, mind-blowing.

Speaker 1:

No, it's like: what happened? I looked at the data over and over, and the quality is there. The thing is, this is a very old, traditional, SQL-heavy environment. So it's a lot of scripts that write scripts that write scripts, a lot of those long, long lists of code, right? So the fact that they didn't even have to type it actually accounted for the majority of this. Now they can focus on the quality of it instead.

Speaker 3:

You can focus on the logic instead of typing the lines of code.

Speaker 1:

And I think this is so interesting, and I think that's what's going to really change. On that topic, I actually did a test on learning with Hyper Island. I usually bash Hyper Island on one thing; they're amazing on the creativity side.

Speaker 2:

For the record, you've been working there for 13 years or something.

Speaker 1:

Yeah, so I'm a fan, but the more you love something, the more feedback you also have about it. At least that's how it is with me. No, but one of the things that I say is that we shouldn't be afraid of actually doing coding, because they're primarily focused on non-technical skills. They also have technical skills, but not like... if I go to KTH and teach about graph neural networks, that will be a completely different lecture. But then I was like: no, it doesn't have to be like this. So this year I had a stupid experiment, or a stupid idea, as I always have. I said: okay, back to your idea of Spotify. We're going to build a recommendation engine running on a graph neural network, in one week, without any prior knowledge of coding. We're going to build it on Neo4j. Here's how you download it, here's the data set. I took a data set that I got from Spotify, and I also f'd it up, so it has the classical data engineering problems.

Speaker 1:

So they had to clean it. There are duplicates, things in the wrong format, all different letters scrambled, the classical things. They had to figure out how to normalize things and so on, and import it. What is very funny about this is that this was before the update of GPT-4, meaning it was trained on Neo4j 4. They had to use Neo4j 5 to be able to run the vector stuff we were going to use later on. So they can't actually ask it; if they ask it, it's going to give them the completely wrong answer. So what they have to do is some sort of memory prompting: they have to go to the documentation, take the syntax out, put it in: "I'm trying to do this, how would I do it for this example?" Meaning they actually learn how to use the technology. And I was thinking, there are 50 people in this class, and I was like: will this ever work?
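
For a flavor of that cleaning step, here is a minimal pandas sketch. The columns and values are made up to mirror the problems described (duplicates, scrambled casing, inconsistent number formats); the actual data set and exercise were different.

```python
import pandas as pd

# Toy track list with deliberately classic problems.
raw = pd.DataFrame({
    "track":  ["Shake It Off", "shake it off", "Bad Blood "],
    "artist": ["Taylor Swift", "TAYLOR SWIFT", "taylor swift"],
    "plays":  ["1,200", "1200", "950"],
})

clean = raw.assign(
    track=raw["track"].str.strip().str.title(),    # normalize casing and whitespace
    artist=raw["artist"].str.strip().str.title(),
    plays=raw["plays"].str.replace(",", "", regex=False).astype(int),
).drop_duplicates(subset=["track", "artist"])      # collapse the duplicate row

print(clean)
```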

Speaker 1:

I was like, why did I come up with this idea? It's completely stupid. But what blew my mind: at the end of that week, three and a half days in, eight of the groups showed up presenting a recommendation engine. All of them tackled the problem of a good recommendation as not recommending songs that are alike, because if I like Taylor Swift, I don't need 10 more Taylor Swift songs; I need something that I didn't know that I liked. That's where I get the epiphany moment, the this-is-when-I-tell-my-friends moment. And they all did it, which literally shook me to the core: which part of knowledge is it that we are actually measuring, and what should we be measuring? So what I look at when I try to evaluate people is how they think about these things: how they think about creativity, how they think about grit, how they think about persistence. And I'd rather have a discussion about whether the person worked in a bar, as a photography assistant, or any other slightly humbling job which is just grinding, right? For me, that's actually more valuable.

Speaker 2:

But perhaps that's a good segue into a question about testing. One approach, of course, is to test what kind of facts you know: how does C++ programming work, details like that.

Speaker 4:

I mean, that's what you're looking for in the CV. That's your comfort blanket. So that's what we're doing today.

Speaker 2:

It is to a large extent, but then it could also be more personality testing, saying you're more of a creative person. Perhaps you have the Big Five OCEAN model, where you can measure openness, conscientiousness, agreeableness and so on. Is that something we are lacking a bit today? Should we focus more on personality types rather than the fact-checking type of tests, do you think?

Speaker 1:

There are these tests, but the majority of them work like this: you get a questionnaire with a made-up scenario, how would you react? And then you have four options that you need to pick from. Right, you'd have to be very stupid not to know already which one to pick.

Speaker 1:

But what is equally interesting: I actually did this with an interview process I was in, because I refused the test. I said, explain to me why it has value and then I will do it. They haven't passed that bar yet. But what was equally fun is that I pulled the CEO of the company into it, because that's who I'd had the discussion with, and we had this bouncing back and forth, explaining why it didn't work, and so on. So now we have a very long conversation log, a real conversation. I took that, I primed a little model with the framework that they actually use for the evaluation, and I said, here we have a real scenario, let's see how we compare on this. And I ran the data for me, the CEO, the recruiter and the recruiting manager through it. They weren't super happy about that.

Speaker 3:

No one got the job.

Speaker 1:

They didn't score high on the things they were looking for, which is also interesting. So one thing that is actually interesting is the ability to use, for example, a language model to structure these conversations, look for things and then do the analysis on that. You can do this on interviews as a backdrop for how people handle these situations, and then we're starting to get an interesting, quantified gut feeling about these things and can segment them out.
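
A minimal sketch of that idea, having a language model score a real conversation against a trait framework. The trait list, model name and file name are illustrative assumptions, not the framework from the story:

```python
# Hedged sketch: ask a chat model to score an interview transcript per trait.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TRAITS = ["grit", "creativity", "persistence", "adaptability"]  # hypothetical

def score_transcript(transcript: str) -> str:
    prompt = (
        "You are given a real interview conversation. For each of these traits: "
        f"{', '.join(TRAITS)}, give a 1-5 score with a one-sentence justification "
        "that quotes the transcript.\n\nTranscript:\n" + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(score_transcript(open("interview_log.txt").read()))  # hypothetical file
```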

Speaker 1:

So I think that's what I'm looking forward to, seeing more of those kinds of things, because now we are again mimicking how we actually take decisions.

Speaker 3:

And it's not so easy to hack it in the same way. Because, Alexandra, back to the whole thing we got into this with: I'm actually looking for grit, or something else, like how sharp you are or how willing you are to learn. So one hypothesis is that we're going away from the economies of scale to the economies of learning. Fundamental adaptability is the number one game; the way we get efficiency is that we constantly adapt things. Then the CV is completely irrelevant, because that CV will be outdated every two years. So it's our skill rate, our adaptability to change, that I want to measure. But what are we looking at then? Where does that lead us in terms of what you're matching on? And you said it right: you're trying to find the attitude, or something like that. There are tests for this, I guess, as well. But I think that's the big shift in the matching game: not matching on the competence requirements, but matching on grit, ability to learn, ability to adapt. 100%.

Speaker 4:

That's what we need to do.

Speaker 3:

And what are the current ways to match on those things, compared to the CV things? Okay, some of these tests, right. Is there anything else? What do we use?

Speaker 2:

We can get back to that shortly, because in some way you also need to know that the person has enough skill for the job, but then it's more about the balance between personality, general intelligence and knowledge, which is a bit hard to figure out. But perhaps, Goran, we should have a middle break here. Yeah.

Speaker 1:

Ta-da, ta-da, ta-da, ta-da, ta-da. Is it dancing time? It's time for AI News. It's dancing time Brought to you by AI.

Speaker 2:

AW Podcast. So we have added this kind of section in the middle of the podcast where we take a short break to speak about some personal favorites, news items that happened in the last couple of weeks. Each one of us can choose to bring up a topic. If you have one, try to summarize it in three, four minutes if you can; if you don't, feel free to pass. Anyone want to go first?

Speaker 1:

Holy crap, I can go.

Speaker 2:

Yes, let's go. Awesome Stefan.

Speaker 4:

Let's go Super excited.

Speaker 1:

No, I just came back from South Korea and I met with a lot of really interesting people, primarily around two things. One is the ability to train models on visual input, video and such, which is super cool. But the thing that really blew my mind is the small language models.

Speaker 2:

Yeah, awesome, I know that oh my.

Speaker 1:

I had never tried it and I couldn't believe it. This is like the equivalent of me connecting to the internet for the first time. I was shocked. I downloaded it. I can't get it to run here in Europe, but I ran it in Korea for a couple of weeks. It just helped me with my writing, in real time, on anything within my operating system.

Speaker 2:

Do you remember the name of it?

Speaker 3:

It's from the Upstage team, a little one. So the same team that built the

Speaker 1:

Solar Model and.

Speaker 3:

So the idea with this small transformer is actually I can put it inside my laptop and I can write in real time.

Speaker 1:

It runs all the time and whenever you write something, it's there.

Speaker 2:

In real time, more or less, yeah, and this is for me how it should work. No but.

Speaker 3:

It's flipping back and forth.

Speaker 1:

Yeah, I didn't want to believe it. I even had to go turn off my Wi-Fi, to check that it didn't sneak out over the Wi-Fi in some weird way. I didn't want to believe it, because it was so magical. It was insane.

Speaker 2:

Was it like auto-completing, or was it really like writing so?

Speaker 1:

I used it in different ways, but the classical one: you start writing something, then that little icon comes up, and boof, you have options like formal, informal; you can auto-prompt it in some way: shorter, longer, use emojis, don't use emojis. Meaning it takes whatever you write and gives you a suggestion for how you can actually write it better.

Speaker 2:

Of the existing text.

Speaker 1:

Of the text that I started to write. So it has a little bit of input, and on that input it auto-completes, sort of, and styles it up into formal, informal and these kinds of things. So think of it like Grammarly, but real time, in everything, everywhere. And I'm like, I don't know. You know, I wrote in my predictions for 2024 about small models and how excited I was.
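
A minimal sketch of that local restyling loop, assuming a small open instruct model served through Hugging Face transformers; the model name is an illustrative stand-in, not the unnamed Korean model from the episode:

```python
# Hedged sketch: a small on-device model rewrites whatever you just typed.
from transformers import pipeline

rewriter = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumption: any small instruct model
)

def restyle(text: str, style: str = "formal") -> str:
    messages = [{"role": "user",
                 "content": f"Rewrite this in a {style} style: {text}"}]
    out = rewriter(messages, max_new_tokens=80)
    return out[0]["generated_text"][-1]["content"]  # last turn is the reply

print(restyle("hey, meeting moved to 3, sry!", style="formal"))
```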

Speaker 1:

And then I was trying it, and I couldn't even have predicted how excited I'd get. Sorry, I just get all like a kid here. The organ is going to come back. Vacuum organ.

Speaker 2:

I think that's one of the big trends for 2024: we will have more efficient models, and that is going to democratize, if I can abuse that word a bit, basically everything. And this is why Gemini did the storyline with the huge, the normal and the small model.

Speaker 3:

Yeah, the nano one, they call it Nano. And it's also running on my old MacBook.

Speaker 5:

Yeah.

Speaker 1:

I run it on my old M1 MacBook, so it's not one of the super new ones. I mean, it's several years old, I don't even know how old, but it works like a charm. Meaning that's going to be in every single thing, more or less.

Speaker 3:

All of a sudden now you get to embed a language model inside a product or something.

Speaker 1:

Equally as big as the internet, I would argue. Yeah, I'm trying to be nice here, because there are people out here on the internet watching us.

Speaker 3:

Okay, so that's good news.

Speaker 2:

Alex, do you have anything you want to bring up?

Speaker 1:

No, okay, I was thinking we're going to go back to that. Bring your parents.

Speaker 5:

Actually, that was the news. That was there.

Speaker 3:

Exactly, that was the news topic that we can now bring into this. Because, if we're cutting it: there was an Instagram clip, there was a storyline around recruitment in the US. Tell us the news, because this is news again.

Speaker 1:

It's enough now, all right.

Speaker 4:

He has forgotten.

Speaker 1:

All right.

Speaker 3:

Go on, we'll go back in. Do you want to go next?

Speaker 2:

Yeah, yeah, sure. There's a lot of stuff happening, and it was actually a number of weeks since we had the last podcast, over the Christmas holidays and whatnot, so it's hard to choose what to pick up on. But the one I chose is actually something that I have mixed feelings about. It's called AlphaGeometry.

Speaker 4:

It's from DeepMind.

Speaker 2:

And it combines traditional rule engines, or symbolic engines, with language models. In this case it's in a math competition, the International Math Olympiad, and it basically became as good as, or between, the silver and gold medalists among the best humans ever competing in this kind of math Olympiad. And it's on geometry problems specifically, but of course super impressive.

Speaker 2:

What's really new in this one is not the combination of having rules or symbolic systems combined with neural network solutions; that's been done a lot in these kinds of hybrid systems. What's new is that they use language models to do the creative part. When they describe a bit what humans do: okay, they can use deductive rules to try to see, how can I prove that this geometric figure has X and Y property, and they know the rules. The problem is that the thing humans are really good at is coming up with rabbits, as they call them. The rabbit is this kind of weird thing: what if we add this variable to the equation? And it comes from nothing.

Speaker 2:

I mean, it doesn't really make sense; you just have some kind of creative inspiration: what happens if we do this? And that is a really hard thing to do, especially for a computer, because it doesn't have any sense of why you should add this specific thing. So some kind of intuition guides humans in finding solutions for these kinds of math problems. Now, what they were able to do better than anyone before was use large language models to do the creative part of pulling the rabbit out of the hat. I would say they basically took the problem a lot of people see with large language models, which is hallucination, that they just make up things that look good, and used it as a feature. So suddenly the ability of large language models to make up stuff becomes useful. It's based on some kind of intuition that you can't really describe.

Speaker 2:

Okay, let's add this, in the language of math in this case. They first try to do it just with rules, and they may fail to find a solution. Then they ask the language model: please add a construct, a new rabbit. And then they try again. It may not work, and they try a second and a third time, and then, by adding a number of constructs, suddenly it finds a solution, it solves the problem, it finds the proof. And this combination of using neural networks, large language models, to be the creative, hallucinating part of humans, combined with the very deductive, logical rule engine, is very fascinating. So this creative part, being able to come up with strange things that you don't really have a reason for, intuition in some way, and you can call it hallucination if you want, is turning out to be really useful.
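
A minimal sketch of the loop as described: the deductive engine tries to close the proof, and when it stalls, a model proposes a new auxiliary construct and the engine tries again. Both helper functions here are toy stubs, not DeepMind's actual components:

```python
# Hedged sketch of an AlphaGeometry-style propose-and-deduce loop.
from typing import Optional

def symbolic_prove(problem: str, constructs: list[str]) -> Optional[str]:
    """Stub for the rule engine; returns a proof string or None."""
    # Pretend the proof only closes once a midpoint construct exists.
    return "proof found" if "midpoint M of AB" in constructs else None

def propose_construct(problem: str, constructs: list[str]) -> str:
    """Stub for the language model suggesting the next 'rabbit'."""
    candidates = ["circle through A, B, C", "midpoint M of AB",
                  "line through M parallel to BC"]
    return candidates[len(constructs) % len(candidates)]

def solve(problem: str, max_attempts: int = 10) -> Optional[str]:
    constructs: list[str] = []
    for _ in range(max_attempts):
        proof = symbolic_prove(problem, constructs)
        if proof is not None:
            return proof                    # deduction succeeded
        constructs.append(propose_construct(problem, constructs))  # add a rabbit
    return None

print(solve("Show that the diagonals of a parallelogram bisect each other."))
```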

Speaker 3:

And what is the usefulness? Is it creating new ideas on this? Why is this useful?

Speaker 2:

They just use it to show that the AI model is as good as humans, more or less, in creative thought, in finding proofs for math problems. So in the future, if you have another problem, let's say you want to solve one of the biggest problems of all, how to combine quantum mechanics with relativity. We haven't been able to do so, but perhaps if we have a sufficiently intelligent AI system that can start to add constructs and rabbits, perhaps we will find a solution.

Speaker 3:

Because the point is that you're not going to solve a new problem with old thinking, so you need to be creative in some sense, to combine things in new ways.

Speaker 2:

My take is that humans are not logical. I heard someone, I think it was my PhD supervisor, say that 90% of human reasoning is abductive. Abductive meaning it's not deductive, where something is certain from a logical point of view; abductive means basically that you haven't proven it's wrong, you just make an assumption. I think a car will fly into this room: prove me wrong. Or, I think God exists: prove me wrong. That's abductive reasoning, and this is really what they are doing in this model, abductive reasoning, not just deductive. So they're combining abductive and deductive reasoning, and the marriage between the two makes it possible to find mathematical proofs in a way we've never seen before.

Speaker 3:

So a marriage of abductive and deductive, and this is one of the ways forward for next-generation mathematical problem solving.

Speaker 2:

Any problems we have with the physical world, with the economic world, with the computational world.

Speaker 3:

But those are complex problems where you need to find new ways of thinking about how to solve them, because humans have not solved them yet.

Speaker 2:

Yeah, we were just too stupid to solve some problems. No, I think it's maybe the structure.

Speaker 1:

I remember we talked about it last time I was here, because I had just written a piece on creativity in that sense and how I tried to formulate it as a differential equation. Meaning, if I can understand it well enough to write it as some sort of equation or formula, then I understand it on a scalable level. One of the things I mentioned, an old thing that really blew my mind, was Foldit. I don't know if you remember it; it's a protein folding game, fold.it. It was for super nerds. This was before we had cloud computing, meaning the biggest problem you had to solve, if you had a hard computing problem, was where to compute it. We had local clusters at universities and these things.

Speaker 1:

Some of us are nodding, having good memories, or painful memories maybe. But one thing I found interesting with this case is that you could sign up for it: if you had a university account, you got a protein folding sequence. They were trying to solve an HIV-related protein, right? The scientific community had tried for 10 years to solve it and got 35% of the way. All of a sudden you get this small package to fold. Somebody from another department has a stupid idea, maybe in this case the mathematics department: for heaven's sake, why aren't you trying this? "This is not how we do protein folding." We all know how that would sound, right?

Speaker 2:

That's being more creative. By the way, Alex, you have to leave at half past, right? What was

Speaker 4:

it? No, go on.

Speaker 2:

Okay. Otherwise we need to focus a bit more, before you have to leave, on your favorite topics. No, it's all right, okay.

Speaker 1:

The list is long.

Speaker 3:

Awesome, Henrik. I'm going to go with the Exphormer topic we mentioned to you. I'm completely out of my depth here, but I want to talk about it. Completely out of my depth, yeah, because I'm moving into his territory.

Speaker 1:

Okay now I get it. I was like what?

Speaker 3:

So I stumbled upon a paper by Google Research. It came out, I think, Tuesday, January 23rd this week, and it's about the Exphormer: sparse transformers for graphs. It's a paper that gives design patterns for how you build transformer architectures that are more sparse. And what does that mean? It means, roughly, how do you run the transformer in a way that you don't light up the whole neural network?

Speaker 2:

The whole network, sort of thing.

Speaker 3:

So it uses expander graphs, and it's about being way more efficient with compute and with energy and everything like that, in order to find the right results. So I think the theme here, which I think is a trend, is how do we build things more efficiently, and what are we talking about when we say sparse transformers and things like that? I'm going to lean a little bit on you here now, but I think the trend is interesting. We started with the Mixtral, mixture-of-experts type.

Speaker 2:

I want to read about it quickly. Yeah, I haven't read the article, so I'm reading right now.

Speaker 5:

I feel like I'm reading it as we speak. Do I need to be an expert?

Speaker 3:

You don't need to be an expert, but I just find the trend interesting. The bottom line is that we are trying to figure out how not to light up the whole neural network.

Speaker 2:

And I haven't read the article, so I can't say anything smart about it, but I can at least give some kind of context, perhaps. We know transformers have really transformed, pun intended, so much of the AI that we have, being able to use sequences like words or even images in a way we've never seen before. With the attention mechanism they have for knowing what to focus on, they have really revolutionized how we can use AI. Now, using it on sequences or images is one thing, but the most general kind of data structure you have is a graph: a graph with nodes and edges that connect them to each other. And how can you use a transformer network on graphs? That's a hard problem. When we say sparse graphs, it basically means you have very few connections. In a social network, you have a few friends, but if you take three steps out from yourself, very few of those people are connected to each other, so it's very sparse, and that becomes hard to represent.

Speaker 3:

So they need a way to represent these kinds of sparse networks and graphs in a good way, and I'm guessing, without having read the paper, that that's what this does. The bottom line of the paper is that when you have a sparse network, there are actually not that many nodes involved in reality, but because the transformer doesn't know that, it needs to go over the whole of Facebook before it can realize, oh, it was just three nodes. And now, with the Exphormer, they're trying to figure out how to circumvent that. Basically, not going through the whole Facebook graph in order to find something that was quite easy to find, because your network was in reality quite sparse. Which is, if you think about it, like lighting up every single person on Facebook in order to check: do you know Henrik?
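
A minimal sketch of the sparsity intuition being described: restrict attention to a graph's actual edges instead of all node pairs. This shows the masking idea only, not the Exphormer architecture itself (which also adds expander-graph edges and virtual nodes):

```python
# Hedged sketch: attention masked down to a sparse graph's edges.
import numpy as np

def sparse_graph_attention(x: np.ndarray, edges: list[tuple[int, int]]) -> np.ndarray:
    """x: (n_nodes, dim) features; edges: allowed (src, dst) attention pairs."""
    n = x.shape[0]
    scores = x @ x.T / np.sqrt(x.shape[1])   # raw attention scores
    mask = np.full((n, n), -np.inf)
    mask[np.arange(n), np.arange(n)] = 0.0   # every node attends to itself...
    for src, dst in edges:                   # ...and to its graph neighbours
        mask[src, dst] = 0.0
    weights = np.exp(scores + mask)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ x                       # aggregate neighbour features

# Tiny example: 4 nodes in a chain 0-1-2-3, so 6 allowed pairs instead of 16.
x = np.random.default_rng(0).normal(size=(4, 8))
out = sparse_graph_attention(x, [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)])
print(out.shape)  # (4, 8)
```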

Speaker 3:

Rather than doing it another way. It's about efficiency here. I picked something outside my own competence, but I think it's a trend: the mixture-of-experts approaches, the nano approaches, the sparse network approaches.

Speaker 3:

This is a major trend for 2024, around efficiency and smartness. So we're going to see: what is GPT-5, what's it going to be? Is it just going to be bigger, la la la? Is it going to be a mixture-of-experts? What are the driving forces? One of the driving trends is efficiency, so we can use it in different ways, embedded and things like this.

Speaker 2:

I think it's a way to use transformers efficiently for a new structure, which is graphs. There have been other attempts. You know Robert Luciani loves graphs. Yeah, he had this story; he's been using transformers for that.

Speaker 3:

Actually, I was going to ask Robert, because he used graphs and transformers on the travelling salesman NP problem, and he had an idea and claimed he'd been proving it, but no one understood what he did two years ago, and I think now we're getting there.

Speaker 2:

Proving it? I'm not sure. I still haven't seen any proof.

Speaker 3:

Mathematically, he was claiming he could prove it. He didn't prove it, I don't know, but he claimed he could.

Speaker 1:

That's not proving it, no.

Speaker 3:

You know, I was in a business meeting together with Robert with the guys at AI Research at Volkswagen, talking about an optimization problem in the transport ecosystem. They didn't get it, but in reality, what he did was thinking about transformers and a GNN and combining them smartly, and I think this is kind of going in that direction. I guess, I don't know.

Speaker 2:

I haven't read the article, so I can't say.

Speaker 1:

A GNN and a transformer, or whatever you like. Anyhow.

Speaker 2:

Yeah, oran, do you have any topics as well, or should we move back to I don't know? Yeah, yeah, I know.

Speaker 1:

IPA topic.

Speaker 2:

So, okay, let's keep it. Awesome, let's get back to the topics and you can stay for some more time, right?

Speaker 4:

Yes, seven-ish.

Speaker 2:

Seven-ish minutes? Okay. No, not seven-ish minutes, until seven. Okay, cool, 42-ish minutes then. Otherwise I was thinking, you know, you can choose the set of topics that you most prefer, etc.

Speaker 1:

That's right.

Speaker 3:

Okay, I think we should go there anyway. She can choose, I don't know.

Speaker 4:

Bring out the magic list.

Speaker 1:

Where is it?

Speaker 2:

Can we move to the resume part perhaps? And just speak about resumes, because that's one of your.

Speaker 4:

I think you've been thinking, or writing, about how to move beyond resumes.

Speaker 2:

So what are you thinking there? Can we move beyond resumes in the future,

Speaker 4:

do you think? Absolutely, without a doubt. And I think once we do, it's going to disrupt the entire recruitment industry, because we're going to take one step further away from having a middleman between hiring employers and the candidates. Once that technology, and I think it's a matter of when, not if, is in place, the recruitment industry as it is today is going to change, and they're going to have a tough time.

Speaker 3:

But do you have any thoughts on what comes next, in terms of what resumes evolve into?

Speaker 1:

Can we think like this? Maybe I've got a stupid idea. The resume is literally a snapshot of something. What if it were real time? What if it were multi-dimensional? All of a sudden, it's not just a curated paper, but rather takes into consideration whatever is happening at the time, because the CV gets outdated very quickly.

Speaker 2:

How would it?

Speaker 3:

work in practice. What is your vision?

Speaker 1:

No, no, I haven't thought of any architecture yet; right now it's just a stupid idea about what the difference could be. I mean, one part is having a non-static one that actually interacts. So maybe, now that we're here talking, that would update some score of something. For example, finding similar people doing the same thing would be a part of that. I'm just having stupid thoughts here; it's not a solution in any way. But I'm thinking: if it's static now, dynamic would be much better, if we can read it dynamically over time, of course.

Speaker 3:

But what is the driving force? Why does it need to change? Why are you so certain that we will evolve beyond the resume? What is the driving logic for that?

Speaker 4:

I mean, I think we touched on the topic a little bit earlier. It's the fact that we don't know, we can't predict what tomorrow looks like. And how do you hire for that?

Speaker 4:

Because, like you say, CVs are very static. It's a snapshot of your current state, what you actually bring to the table right here, right now. And I'm the author of my CV, and I put in the things that I think are necessary. But it's guesswork. When you look at it as a recruiter or hiring manager on the other side, you have your own idea of what the competencies need to look like. That's static too, you know. And the difficulty is merging those two pictures together and getting the entire story about the background of the candidate, because there's much more to it than what it says on the paper.

Speaker 5:

Maybe, that's.

Speaker 3:

Maybe that's a key word you just used there. We're using the resume as a proxy. Yes, for: I want to get your story. You want to get my story for the job at hand, and I want to get your story for the job at hand. At some point we decided that the resume was the ideal proxy for that storytelling. What if I say, fuck that proxy, let's go back to the storytelling? What would that look like?

Speaker 1:

Yeah, because I think, as soon as it becomes dynamic, the story becomes so much more vivid in that sense. And if we go back to talking about predictions of GPT-5 and such, most likely we will be trying to both train on and output video. But how does it

Speaker 2:

work in practice? I mean, how do you get to a dynamic one? I'm trying to find some kind of metaphor or analogy to other areas of matching, and I'm thinking of Tinder.

Speaker 4:

But Tinder is also very profile-based, kind of like the resume. And I mean, I don't have the clear answer right now. Based on our conversation, I just think it's possible, but nobody's actually doing it.

Speaker 1:

Yeah, but think of this. Maybe we're trying to find a person who is outside of the normal scope. So we're looking for someone who has a known drive for thinking differently, has strong grit for building or going to market with early-stage apps. Of course, you can measure this on a CV by looking at company X, scaled up fast; company Y, scaled up fast. So it can be ish-captured, but very limitedly. However, this can also be captured if we're allowed to ask for that specific thing. So instead of looking at the limited two pages of a senior CV, which would be the limit, I guess, instead saying: I'm very interested in these three things, now give me a condensed version of that person. Because the value of reading my LinkedIn and the articles I've written is likely gazillions of times higher than my CV.

Speaker 3:

And that's why I want this interview, because when you're talking now, I'm starting to imagine the product.

Speaker 1:

Yeah.

Speaker 3:

I can imagine a product. So, if the starting point of the product is: forget about the resume, we don't want the resume, we want a story.

Speaker 4:

Exactly.

Speaker 3:

And who can tell stories? ChatGPT. You can prompt it to tell a story. And what is the RAG now? Oh, I want you to scrape LinkedIn for all the articles, posts and CV that Henrik has done, and I want you to write me a compelling, you know, a story that puts Henrik in this perspective: who is Henrik?

Speaker 5:

So you can do this today.

Speaker 3:

You can do a RAG with all the data that you get from my LinkedIn, everything I've done, and then you can load that in. Henrik, maybe just explain a bit what we're describing here.

Speaker 2:

So, I think it's an awesome thing to say: instead of having candidates themselves write a static resume, we have an AI that writes the resume for the job ad I'm posting as an employer. Exactly. So I'm saying, I want to hire a frontend engineer now. Now, for a given person, you give some link to the LinkedIn profile, GitHub repository or whatnot.

Speaker 3:

That's the data, that's the data.

Speaker 2:

And the RAG that you mentioned is retrieval-augmented generation. So basically, you are adding data, or using data from LinkedIn or whatever kind of source you have, and then using that to try to produce a good summary. A story about me and how I match this kind of job role that you're offering. I think this is actually a good idea.
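
A minimal sketch of the resume-as-story RAG idea being discussed: retrieve the candidate's own public writing that is most relevant to a job ad, then ask a model to write the story. The sample texts, sources and model names are illustrative assumptions:

```python
# Hedged sketch: retrieval-augmented "story" resume for a given job ad.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def embed(texts: list[str]) -> np.ndarray:
    res = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in res.data])

documents = [  # stand-ins for scraped LinkedIn posts, articles, READMEs
    "Blog post: lessons from rewriting our dashboard in React and TypeScript.",
    "Talk notes: growing a design system across three product teams.",
    "LinkedIn post: why I pair-program with junior engineers every Friday.",
]
job_ad = "Frontend engineer for an early-stage product team."

doc_vecs, query_vec = embed(documents), embed([job_ad])[0]
scores = doc_vecs @ query_vec              # dot product ~ cosine (unit vectors)
top_docs = [documents[i] for i in np.argsort(scores)[-2:]]  # retrieval step

story = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[{"role": "user", "content":
        "Write a short, honest story of how this candidate matches the job.\n"
        f"Job ad:\n{job_ad}\n\nCandidate material:\n" + "\n---\n".join(top_docs)}],
).choices[0].message.content
print(story)
```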

Speaker 3:

This is not stupid, right? Because if we know we need this storytelling for this job and its future, you can also do storytelling about the job and the projection of the job.

Speaker 1:

And, by the way, we're looking for investments. If you have any money, please send it to me directly, or Alex, or any of these gentlemen. It's so hard to do, though.

Speaker 3:

No, no, that's the annoying part. It is not so hard to do.

Speaker 4:

No, and that's what I mean. Personally, I don't know how to build this, but when we had this conversation I started sort of putting your expertise together with my expertise from the recruitment industry.

Speaker 3:

I was just like, we have literally had the guests on this podcast to build this unicorn. Yes, period.

Speaker 2:

I think you can even use the GPT builder. OpenAI has this thing where you can go in without any programming skills, just add the data, tell it what to do, and it will actually do the work for you. So I think you could do it as well.

Speaker 1:

But I think the key here is the RAG part, right, to have something there. Yeah, but actually the key is not the RAG part; the RAG is only the technical key, maybe.

Speaker 3:

Yeah, but the key was what you said: fuck the resume, I want your life story, and I want to match your story to the story of the job.

Speaker 4:

Yes.

Speaker 3:

So why? The resume is just a proxy. So why are we stuck on the proxy when we can go straight for the jugular? You know how you do this sort of Jeff Bezos thing: write me the press release of this product, as a way to think about something. Write me the press release about the brilliance of this job and this career. That's one prompt. And then match that with the other one. So it's very much storytelling.

Speaker 1:

Yes, it's actually storytelling. And it actually does a pretty awesome job. I tried putting my name into the GPT and it came out ish-good. It was very marketing-BS though.

Speaker 3:

We were just letting it feed on him from the internet.

Speaker 1:

Yeah, no RAG, no. It went searching with Bing, and we all know how much I love Bing.

Speaker 2:

But it's ragging through Bing. This is very, yeah.

Speaker 3:

Yeah, but it's alright. So we cracked it. Any VC friends out there, you know.

Speaker 5:

That was cool. But is it

Speaker 2:

okay? Because this is potentially one part of the problem: finding the match, seeing how relevant a person is to a given job that we're offering. I think also, as an employer, sorry, writing the job ad is really hard as well.

Speaker 4:

Yes, 100%.

Speaker 2:

We need some help with that as well. So perhaps we could even have, absolutely, yeah, go on, sorry. No, no, no.

Speaker 4:

I was just like I don't know why I thought I could finish your sentence.

Speaker 5:

Please do, please do.

Speaker 4:

I urge you to do it. I mean, seriously, I think a lot of the time companies don't actually know what they're looking for.

Speaker 2:

Yes, and they write horrible job ads.

Speaker 1:

I think 100%. And they even use the same generic company prompt for the job ad.

Speaker 2:

They even lie a lot in job ads, you know. So, you're going to work as a data scientist. No, you're not, you're going to work in the basement.

Speaker 1:

They're not even going to do that.

Speaker 4:

Which also brings me back to: who owns the job descriptions?

Speaker 2:

HR yes, right.

Speaker 4:

Yeah.

Speaker 1:

But what is actually cool with this, when I think of your idea now and this dynamic thing: the problem with the job description and the one applying for the job is that you only see a very small part of the actual market. Isn't it rather interesting to go the other way around? I have this problem; who within my network of networks would be the best person to solve it? Because all of a sudden, you're not looking for someone looking for a job, you're actually looking for the best person for the job, the best recommendation.
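
A minimal sketch of that inversion, searching your network of networks for the best person rather than waiting for applicants. The graph, skill tags and scoring are made up for illustration:

```python
# Hedged sketch: rank everyone within two hops by skill overlap with the problem.
import networkx as nx

g = nx.Graph()
g.add_edges_from([("me", "anna"), ("me", "björn"),
                  ("anna", "carlos"), ("björn", "dana")])
skills = {"anna": {"python"}, "björn": {"sales"},
          "carlos": {"python", "graphs"}, "dana": {"graphs", "go-to-market"}}
needed = {"python", "graphs"}

# Everyone reachable within two hops, with their distance from me.
candidates = nx.single_source_shortest_path_length(g, "me", cutoff=2)
ranking = sorted(
    ((len(needed & skills.get(p, set())), -dist, p)
     for p, dist in candidates.items() if p != "me"),
    reverse=True,  # most matching skills first; closer people break ties
)
for overlap, neg_dist, person in ranking:
    print(person, "matches", overlap, "skills at distance", -neg_dist)
```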

Speaker 4:

Like, there is this system.

Speaker 4:

It's called Horsefly Analytics, which is basically labor market analytics. What they do, and I'm assuming, I'm not a technical person, but I think it's this RAG thing that you're talking about. They take all this information from LinkedIn, from GitHub, all other sources, put the data together, and they tell you: here in Stockholm, for instance, looking at Java developers, the demand is 100 and the candidates available are 10. There's a discrepancy there. You need to look for these candidates somewhere else, where the demand is zero but there are a lot of candidates. So that capability is already around, but they don't have any clients in the Nordics, and I'm just like, nobody knows about it.

Speaker 3:

So now you've lifted the game to an aggregated problem, a macro problem, and it's equally interesting. Because you were thinking about matchmaking, actually demand and supply, on a more macro level. Exactly. Equally interesting, of course.

Speaker 2:

Okay, so in the interest of time, trying to catch up.

Speaker 4:

We have so many other topics here. I know you also have.

Speaker 2:

But okay, let's say that we found a way to potentially solve the matching problem by having appropriate, dynamic resumes, and also job ads done in a good way, and we match some people. Now we need to rank them somehow. You can rank them using hard skills: you know, I know how to program Python, blah blah blah, and have three years of experience.

Speaker 1:

I know English. That's the new programming language.

Speaker 2:

Awesome. But then there are the soft skills. So thinking about the ranking problem, how to try to measure the hard and soft skills: can we move beyond the current interviewing techniques, or testing techniques, or doing case studies on the whiteboard? What's your thinking about the future of ranking people?

Speaker 1:

Isn't that equally dynamic? It's the same thing, right? What if we can analyze things: videos, talks? I mean, it's possible.

Speaker 4:

You have Metaview, for instance. I think it's just a matter of time before they add that kind of capability on top of their current platform, because what they do is summarize your interviews with the candidates. They give you suggestions on interview questions, and it pulls data, so I can type in: give me whatever they said about their weaknesses, for instance, and it prompts back a summary of what they actually said. But it puts a lot of responsibility onto the recruiter sitting in there: you need to ask the right questions, and how do you ask the right questions? You need to know what you're looking for, which takes you back to the original problem.

Speaker 2:

Perhaps you can have some kind of AI-enabled interview technique that actually suggests questions, which they say they do, but I know, it's the chicken and the egg again.

Speaker 4:

It's like, if you don't know that you've written the right job description, then when it gives you suggestions on the questions, they're going to be based on the wrong job ad anyway.

Speaker 1:

So, yes, most people are all over the freaking internet, so why do we need to limit ourselves to the interview as the only input? I'm not against the interview itself, but there will be a lot of footprints that tell a lot, especially about the soft skills, most likely also the hard skills, because those are also inputs here. That's how I think about it. Why would we limit it to a test? Why limit it to an interview? What if I can have several interviews and mix that with several public appearances, and not be as time-constrained as well?

Speaker 2:

I mean, if it's a machine doing it you get my point.

Speaker 1:

So if a machine does it, it's not that hard. It's already there, right? This is what one of the companies I work with in Korea does: analyzing how you change over time, and then combining that with how I feel now while talking about these things; you will get all of these traits out. I mean, this is what I did with the text prompting in the conversation earlier. We can easily do this already now. The only part is that we then get into some sort of privacy concern, I would argue, which is also being handled in other parts of the world that have privacy problems.

Speaker 5:

But what?

Speaker 3:

you're saying now. So you're saying that right now, when we're trying to do this ranking, in the end we're narrowing it down to ranking in relation to one test. The ranking is the equivalent proxy, the same way the CV is a proxy for my story.

Speaker 2:

It's the exact same thing.

Speaker 1:

For me, this is the same problem.

Speaker 5:

Why do we limit it to this?

Speaker 3:

In reality, the logic of trying to understand the dynamics of the storytelling will give us the better approach to the ranking.

Speaker 2:

Yeah, yeah. But if we focus on the interview techniques, you mentioned something about potentially having robot interviews or something, or whatever, like analyzing photos or videos. We also had another person here, you know, from Furhat, right?

Speaker 3:

Yeah, that's right. But are they still around? Yeah, I think so. Furhat, what is Furhat?

Speaker 1:

I don't know. Furhat Robotics? Yes, oh, yeah, yeah.

Speaker 2:

It's like a head that you can project an arbitrary face onto, and then it can speak. It's basically a physical robot, and they actually do specialize in recruiting as well. So it's used for recruitment purposes, apparently. But I'm a bit reluctant, you know. Is it really the right way, to have an interview with a robot?

Speaker 1:

No, for me it's not about an interview with a robot. I'm completely against the robot part. I mean, I would act very strangely if I talked to a robot. But think of this: if I analyzed all the episodes of this podcast and then said, using this psychological framework, give me the characteristics of you as a psychological profile, then you would get a pretty good understanding of yourself, most likely better than I would get in an interview, because in the interview you will pretend to be someone. Right?

Speaker 2:

That's why you'd ask people to submit some videos of themselves. Yeah, or

Speaker 1:

you go scrape their LinkedIn. You can even have normal interviews that you analyze over time as well. I mean, it doesn't have to be a binary thing. And this is the way I think, because if we can use moving image and sound and analyze characteristics and skills and whatnot from those, that would be part of that ranking score, and it would also take into consideration a much longer time span, which is most likely closer to the story of me.

Speaker 3:

But it's interesting, because with the way we do it now, the flaws unfortunately only come out at the end, and when you're stressed as well. I mean, at least when I do interviews.

Speaker 2:

I want to stress a person.

Speaker 1:

Sorry, but that's one of my techniques. And also your smile while telling this. Can we just agree, here in this space, that that's the wrong thing to do? But you want to understand how people are beneath the skin, so to say. Yeah, totally.

Speaker 2:

Right, yeah, and how do you actually achieve that? It's not easy.

Speaker 4:

Yeah, I wouldn't use intimidation to do that.

Speaker 1:

This is personal feedback to live with three. That your face Sorry.

Speaker 4:

No, I mean, when I perform interviews it's more about having a conversation. I focus on that, because when people trust you, they tend to open up more and be more genuine as well. Maybe I share personal stories, because then they share personal stories back, and, without them knowing it, I've gathered some information that they don't think matters, but for me it's like, ah, there you go.

Speaker 3:

Because they thought that we're going to talk about the expertise and you wanted to figure out their attitudes and personality.

Speaker 4:

Yes, and hire for skill.

Speaker 3:

Hire for skill, yeah.

Speaker 4:

No, hire for attitude, sorry. Yeah.

Speaker 5:

But. But.

Speaker 1:

Hire me for my English skills. I know programming in English.

Speaker 3:

But still, fun. I like the way you're thinking, but you're also on a slippery slope.

Speaker 1:

Of course, I am.

Speaker 3:

Both of us are thinking: why limit ourselves to the interview when we have all decided to be public on LinkedIn? So, when you have decided to be public on LinkedIn, why wouldn't I, as an employer, look at what I can find on LinkedIn? And now you can find this on YouTube, yeah, podcasts, on YouTube and LinkedIn.

Speaker 2:

It's no secret. What about privacy then?

Speaker 3:

I mean, the way you act in private is one thing. But are we private? Are we private? So we are not private.

Speaker 2:

Some people argue that they are private. No, no, no.

Speaker 3:

Bullshit. I think we're done being private. The reality is, we decided to publish this. And still, I mean, some people want to share. What if someone wants to scrape all our podcast videos to give a personality profile of you?

Speaker 2:

I don't know about this. What if an insurance company came to you and said: we have analyzed your Instagram pictures and we're going to increase your insurance rate. How would you react to that?

Speaker 1:

Also, they are equally the same thing. Stefan, we noticed you have stopped drinking alcohol and going to crazy parties, so we're going to decrease your insurance fee.

Speaker 3:

But it's a good point right.

Speaker 2:

Privacy is still important.

Speaker 3:

I think it's because, as you're saying now, in theory we are not public in the sense that we didn't agree to them scraping it for an insurance purpose, or for a recruitment purpose.

Speaker 2:

No, that's the GDPR topic in here, right?

Speaker 3:

Yes, it is. Do you think that would hold, Goran?

Speaker 1:

No, I don't think it will.

Speaker 3:

You know what, when we put this out, when you are public in some sense: morally, ethically, I'm fully, 100% with you. Legally, I'm not sure it will hold. I really don't know.

Speaker 1:

Everything which is public domain knowledge.

Speaker 2:

If an employer forces you to share your closed Instagram or closed Facebook. No, no, no, no.

Speaker 3:

Closed? Time out. Not closed, not closed, never closed. Only if it's a public thing.

Speaker 1:

And I think this is the interesting part, and I mean, coming from South Korea: one of the companies I met is called Deeping Source. There's a bunch of ex-Intel engineers, and what they have built is literally a way of scrambling video so it can't be unscrambled back. You can't identify anything about the actual person. However, what they can do is pick out any change in your behavior from that, which looks like, I don't know what it's called in English, like that old TV test screen. But it's interesting: they can do this in real time on any camera, on any type of moving image, because in Korea they have, I guess, as many cameras as in the UK or anywhere.

Speaker 3:

What is it called CCTV?

Speaker 1:

It's all over the place, right. So they are very strong on the privacy part. They have zero problem saying angry person, happy person; that's not an issue. They just cannot tell who that happy person is. They can say it's a male of a certain age, and so on.

Speaker 2:

I think this is a very sensitive topic.

Speaker 1:

It is, but it's a fun topic.

Speaker 2:

We risk finding statistical correlations that are not causation. I mean, we have the classic Amazon story, where they actually used AI to try to decide who they should hire, and it turned out it had gender biases in it. That was inappropriate; that was a correlation but not a causation. They got so much backlash, and it turned out to be horribly wrong. So I think we really need to think through how we do this, so we don't mistake correlation for causation here.

Speaker 1:

That's what we equally do as humans as well. What?

Speaker 2:

It is, but AI can make it worse. AI can actually make biases worse.

Speaker 1:

Yeah, it can also make it better. Maybe, maybe. I mean, it can; nobody knows the answer, I would argue. That's my point.

Speaker 2:

As long as you think it through, I'm happy.

Speaker 4:

Absolutely. It's all about, like we talked about previously, knowledge about the data that you put in. Otherwise it just doesn't make sense.

Speaker 3:

I don't know, I want to steer now. We were joking that this is the chicken-and-the-egg problem, and now, in my opinion, we have to ask: is it the chicken or the egg?

Speaker 1:

We have an omelet now.

Speaker 3:

We have an omelet, with chicken feathers. We talked now about the chicken problem, building the AI system. Now let's go to the egg problem: how do we recruit now? How do we get the right talent in to start working on these problems, where we are today, in order to build these systems? I mean, either you go down the startup route, we build a startup, and then who do we need to hire for this startup to make it successful? Or, if I go into the enterprise world, who should the enterprise people hire into HR or into IT to start being better at HR, in order to take an AI-driven approach? We think we're going to move into an AI-driven approach, and we almost took the startup angle now; we were cooking up startups on the fly.

Speaker 1:

We had a fallout also.

Speaker 3:

That was the technology, the setup; that was the egg. Now the chicken: if we want to build a startup, if I want to think about recruiting data and AI people next to HR domain-expert people in order to start working better, where do you start? What is the competence setup that we are trying to line up in order to improve on this topic?

Speaker 2:

I'm trying also to write a summary of the question here.

Speaker 4:

And I'm struggling a bit.

Speaker 2:

Are you asking whether we should go centralized with AI, or should it be decentralized into HR?

Speaker 3:

We are starting now. We were brainstorming about what the technology would look like. Now I'm asking: who are the people that need to come together in order to realize that technology, as a startup or, if you want, to improve on it in the enterprise? Who are the people we need in order to fix the problem that we've been brainstorming around?

Speaker 1:

But I think, if we pause and don't bash HR so much anymore, they can have a break now.

Speaker 1:

Let them be for once.

Speaker 1:

But the way I think, because I work a lot with these kinds of big-scale enterprise programs within AI now, they need to get AI-ready, or whatever they want to call it.

Speaker 1:

The main problem here is not the talent. I mean, I can find you the talent that can build and train your model; they're not going to stay. The reason they're not going to stay is that you don't have a data pipeline, you don't have any data engineers, you don't have business people who understand how to build data products, meaning literally every single thing connected to that talent is a nightmare. So, coming back to your 70-20-10 quote, this is not only an HR problem, because HR is just a proxy again for the CEO or CIO saying, oh, we need an AI project now, with ChatGPT. So I think the actual problem is that we need to maybe rethink the organization, or create a space for them to interact. And anyone who has built any sort of AI thing, or worked even remotely close to some sort of Excel-level crappy data science, knows that if you have crap in, you're going to get crap out.

Speaker 2:

I'm trying to write down a good question here, like a topic summary of some kind. Would the topic be, more or less: what skills do you need in an organization in an AI-enabled future? Is that what the question is?

Speaker 1:

Maybe, Maybe, like mindset or like way of acting, or something.

Speaker 3:

I'm with you here because there are some core competencies, yes, but the problem right now is not the core competencies, it's the way we have decided to chop up and organize work and steer work. I think that is the core problem.

Speaker 3:

So, based on economies of scale and Tayloristic thinking and scientific management out of the '20s and '30s, we work with division of labor: we have a value chain, we have our primary activities and our supporting activities, and we decided it was more efficient that you work over here with technology and you work over there with the primary process. HR or marketing, they are actually supporting processes compared to the core, primary processes of producing or selling something; it doesn't matter which. So we have decided that efficiency comes from putting people of the same kind in different boxes.

Speaker 3:

So we have organized our Lego according to red boxes, blue boxes, you know. And all of a sudden now we want to organize our Lego in teams, with a couple of different colors in the same team, working in some way when they are close to the customer, and the Lego box has a slightly different composition when it's some platform, and we are not building processes anymore. What is the question, Henrik? Is it more like, should we

Speaker 2:

have cross-functional teams.

Speaker 5:

There is no question, it's a rant. Yes, I know, that's why I'm asking. Oh my god.

Speaker 2:

If I try to summarize a bit what you're trying to say here: there are different ways to spread understanding of technology, and AI specifically, throughout an organization. Either you can do it completely decentralized, where every part of the organization has some AI skills in it, whether it's upskilling that's needed or people that have to be recruited.

Speaker 2:

I don't know, but there are different ways. And I guess in a utopian version of the company, you have AI understanding throughout the company; that's at least the goal. The question is really: how do you move from a non-AI company to becoming increasingly AI-enabled?

Speaker 3:

I can put the question now; I like the way you framed it. The first question is: what talent do we need? Actually, flip that question, and the core question becomes: how do we organize the talent in relation to the organization? Not just what talent, but how do you organize it? That's actually the core question. Maybe, how would you frame it?

Speaker 2:

I think the only.

Speaker 1:

Why do you want to have a question? If you have one

Speaker 3:

hour, I would spend 55 minutes framing the problem first.

Speaker 1:

No, but I think coming back to we need.

Speaker 2:

ChatGPT here.

Speaker 1:

If you think about how we would want an organization, or how we believe an organization should structure itself, to be successful

Speaker 3:

With being AI-ready, to tackle and bring this on. One part is about the 1% incremental readiness.

Speaker 1:

Everybody needs to understand at least the fundamentals of how to use it, across the board. Secondly, you will have another problem, because the majority of the jobs today will be somewhat replaced, not necessarily by a machine, but by a combination.

Speaker 3:

Let's use that one.

Speaker 1:

Then the dilemma for a lot of these is that they're stuck in their existing mental model and existing way of measuring. Maybe you have a consultancy; you measure on hours. If you get smarter, you're going to do the job faster, meaning you have fewer hours paid, so you can't really sell hours anymore. You need to start selling something else. Maybe it's insights. You'd say that's going to work, but that's also going to slowly decline, because an insight on its own doesn't really have a value. What you need to do is aggregate that insight, combine it into new data products, which is the thing you sell. All of a sudden, that's very far from the existing business.

Speaker 1:

I think maybe upskilling is one part, and then rethinking into smaller teams and figuring these things out, because the tricky part is to innovate within that organization; if you do, you're going to have A and B teams, and you're also going to have a lot of distractions. I often use 70-20-10 when I do things. Take whatever you do now, 100%. Optimize that down to 70. You have 30 left. Take 10 of those for trying to figure shit out, going for the 20x, 30x instead of the 1x.
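
As a toy illustration, here is one reading of that 70-20-10 heuristic applied to an arbitrary 1000 hours of capacity; the split of the remaining 30 into two buckets is my assumption, since the heuristic as stated leaves it open:

```python
# Hedged sketch: one interpretation of the 70-20-10 allocation above.
capacity = 1000                # arbitrary hours of total capacity
core = 0.70 * capacity         # run and optimize today's business
adjacent = 0.20 * capacity     # nearer-term 1x improvements
moonshots = 0.10 * capacity    # "figure shit out": the 20x-30x bets
print(core, adjacent, moonshots)  # 700.0 200.0 100.0
```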

Speaker 1:

Most of them will fail; then take the ones that work and scale those. That's how you could do it.

Speaker 2:

I've written down a question.

Speaker 1:

You know your question.

Speaker 3:

Really, ChatGPT? Actually, be honest now. Ladies and gentlemen.

Speaker 1:

Woo Everybody.

Speaker 2:

It's not good, but still: how to move towards an AI-ready company. What I mean with this is, it's rather easy to imagine a vision, a place where you want to be, where all people throughout the organization have an understanding of AI and some people have expertise in it.

Speaker 4:

That's easy, I think.

Speaker 2:

You want to have management understanding the potential, you want everyone to be working with it in a good way and being empowered by it, and productivity tenfolding and whatnot. But I think the hard question is really how to move from a non-AI-ready company to an AI-ready company, because you can't really get all the people in at once, and you have to have some kind of incremental steps to get there. That, I think, is the tricky part.

Speaker 1:

This is what we talked about already. I don't get this guy. Do you know this guy? I don't.

Speaker 5:

But he needs to feel as if he's controlling this. No, but I think it's a good point.

Speaker 1:

Sorry for making a bad joke, inappropriate of me. No, but I think one part is getting that understanding up. But the tricky part here is that, and I'm trying to think of one company that wasn't built recently, every single pipeline, every single measurement of people, individuals, anything, is built and optimized for something else. Literally, it's almost impossible. I think one part is to start thinking, you know, looking back at people like me: if you asked me, or anyone, five or ten years ago, what would we say? Give me more data. Now people say, take your crappy data and get the hell out of here. Right? We don't want more, we want clean, we want working, we want a nice pipeline. So it's not even about the 99.9% cleaning or figuring things out, because that's the problem with most non-data-driven companies: they're not even at the cleaning phase of the data anymore; it's, where the freaking F is this data? I think that's the problem. So, thinking of those pipelines, that's the difficult thing.

Speaker 3:

It's a problem. And if I go back to your core question, which I liked the framing of, now you're getting into the core business of what Dairdx is all about. The problem here is that people think we are, in some ways, an AI adoption company. But in reality, what we are is a company that helps with the pivot from the type of organization that fits when the productivity frontier moves slowly — where you can work with economies of scale because your core process is fairly stable — to the situation where the productivity frontier moves really, really fast. Ultimately, that's the difference between these two states. When the productivity frontier moves slowly — as opposed to Kurzweil's law of accelerating returns on the innovation side — you can work with efficiencies within the game. You can improve your process, and if you can improve your process 30%, 40% — wow.
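
(A toy illustration of that contrast: a one-off 35% efficiency gain inside the game versus a frontier that compounds, Kurzweil-style. The rates are made-up numbers for illustration, not figures from the episode.)

```python
# One-off process improvement vs. a compounding productivity frontier.
one_off = 1.35  # improve the existing process once by 35%, then plateau
frontier = [1.35 ** year for year in range(1, 6)]  # compounds 35% per year

print(f"one-off gain: {one_off:.2f}x, and it stays there")
for year, level in enumerate(frontier, start=1):
    print(f"year {year}: frontier at {level:.2f}x")
# By year 5 the frontier sits at ~4.48x, which is why within-the-game
# efficiencies stop being enough once the frontier moves fast.
```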

Speaker 2:

The time is flying by here, but let me finish that because this is key.

Speaker 5:

You have to leave, you have to leave. Yeah, in a couple of minutes.

Speaker 4:

No, but I don't think it's a one-size-fits-all solution either. Every organization out there will need to adopt AI, and AI awareness, in their own specific way. I think a lot of boardrooms — remember, we talked about this as well — a lot of companies, from the top down, need to actually understand the benefits of data. You show them, and then they make a decision from there: what do we need to do in order to move to this end result over here, to what we want to achieve and where we want to be in two or five years' time?

Speaker 2:

So, yeah, good. Before we have to leave, I have one last question for you. A super easy one — or not. What would a normal day look like for a recruiter in 2030?

Speaker 4:

2030. Six years from now.

Speaker 2:

Or 10 years from now.

Speaker 4:

Really tough question.

Speaker 2:

I know. I thought I'd end with an easy one.

Speaker 4:

Thank you. No, but honestly, I can't give you a complete answer. Hopefully something without static resumes, and perhaps a bit more.

Speaker 4:

My sort of dream view is that we will have moved away significantly from static CVs. I'm hoping, at least, that the recruitment industry doesn't look like it does today. I think that candidates and employers will have moved closer to each other, so that this middle segment of recruitment agencies is much smaller than it is today, downsized and more specialized in other aspects of recruitment, because I think the tools and toolsets that you're going to be able to put in the hands of companies will replace much of what recruitment agencies do today.

Speaker 3:

Let me ask the same question, exactly the same question, but I will frame it completely differently. With your passion and skills — that superpower, as I see it — in five or ten years' time, what are the things you will spend your effort and work on? What is the core contribution of what you will do in ten years' time, given that all the rest is automated or simplified? What is the core superpower — what should the human part of recruitment be all about?

Speaker 4:

I would say people — behavioral analytics. However you look at recruitment, at the end of the day we're all human, and you are ultimately there to match the right human with the right company, with your human ability of sensing, perception and conscientiousness — consciousness — to get it right.

Speaker 4:

My nerdiness right now is looking at everything from — you suggested Blueprint, and I've deep-dived into that book — behavioral genetics, in that sense. For me, that's what I'm focusing on moving forward, because I think we're transitioning into that. Beyond looking at AI, I think it's going to be about coaching companies, or helping to review candidates on the behavioral level, to assess them in the best way possible. Because, like I said, at the end of the day, it's human.

Speaker 5:

Because it's the entire.

Speaker 2:

That's a good rounding-off.

Speaker 1:

And I think, to be honest, what you will do — what people will do — in six years is what you're already doing: looking at and talking to the human.

Speaker 3:

Everything else is secondary. And you're doing an extremely awesome job — that's why I'm here with you talking about this — but ultimately, you want to unlock more time for the human perspective, and all the rest should just be automated.

Speaker 4:

Absolutely, and reduce the biases, so that you can come into a meeting with the candidate and focus on that individual, with all the rest of it as white noise or whatever. Exactly.

Speaker 3:

So you have that handled over there, and now we can be humans and focus on this. Yes, awesome.

Speaker 2:

If — and assuming that — we do get artificial general intelligence, we can imagine at least two types of future, two extremes. One extreme would be the dystopian nightmare of The Matrix or The Terminator, where machines are killing us all. The other extreme would be the utopian future where AI creates a world of abundance, as some people call it — a paradise, in some way — where humans are free to pursue whatever happiness, passion and creativity they want, where perhaps you don't have to work 40 hours a week anymore and we are free to live our lives without the burdens that we have today. Given these two extremes, where do you think we will end up? Let's start with you, Alex. Do you think we'll be closer to the dystopian or the utopian future?

Speaker 4:

Well, even though I'm from the North, I tend to have a positive outlook. So I'm going to be all rainbows and butterflies here and say utopian, because, at the end of the day, I'm at least putting my hope in humanity and saying that we'll be smart about all of this and won't go bananas. But hey, yeah.

Speaker 1:

On that topic: hopefully, by then, we will have stopped pretending to be computers, meaning stopped doing repetitive shit over and over again, and all of that. Whether it's dystopia or paradise or whatever, I don't know. I think technology is the least of our worries; the human ego is the one I'm more worried about.

Speaker 4:

Yeah, we're all on the same page.

Speaker 2:

Yeah, I actually agree as well. It's not really the technology that is the danger; it's the people potentially abusing the technology — that is the real danger.

Speaker 4:

Do you agree? I do.

Speaker 2:

And it's surprising that almost everyone is saying it's more of a utopian probability.

Speaker 3:

So I will disagree now — just for the fun of it.

Speaker 3:

And I will kind of steal an argument from a couple of our guests, one that I've heard more and more — and you can also hear how attitudes and communication have changed over the last couple of months. The argument is, literally, that as humans we will always find the relevant problems and the relevant pains, regardless of whether we're in the 16th century, the 19th century or the year 3000. So the theory is: when we actually reach AGI, where will this utopia happen?

Speaker 3:

We will not experience it as a paradigm shift for the human race. It will be incremental, even if it goes faster and faster and faster — it's boiling the frog. It will always be the problem at hand, of the times at hand, that we experience. So the argument is that it will be the same, but on an accelerated scale, with other types of problems. Maybe it's utopia compared to now, the way we now compare ourselves to medieval times, but we won't think of it as utopia at that point in time; we will simply think about it in a different way. So I think we'll just get on with it.

Speaker 3:

I don't know, I don't know, I don't know.

Speaker 2:

Yes, and the reasoning is that even when we do have AGI, in his view, it will not really change things that much.

Speaker 3:

No, because it becomes this bottom-line definition. I mean, Yann LeCun thinks even AGI as a definition is fundamentally flawed. What is human intelligence? What is artificial general intelligence? The best thing we have seen is probably the paper that came out defining levels one, two, three, four, five — simple stuff like that. That becomes a little bit more concrete. But ultimately, the bottom line is that we have a hard time understanding the innovation that is happening around us as it is happening. Then we can look back, even if it's five or ten years later — wow, what happened here?

Speaker 4:

But right in the moment, we are not very good at understanding it as a paradigm shift, I'd say. That's my view.

Speaker 2:

Cheers, and thanks again. To an awesome future with AGI on our side.

Implications of Parents Attending Job Interviews
Discussion on AI in Talent Acquisition
The Paradox of AI in Recruitment
The Evolution of Recruitment Trends
The Problems With Recruitment and AI
Recruitment
Data and AI Transform Job Skills
Real-Time Math Problem Solving With Models
Future Resumes and Efficiency in AI
Exploring Dynamic Resumes and Ranking Techniques
Privacy and Skills in AI World
Moving Towards an AI-Ready Company
Understanding Artificial General Intelligence Progress