Career Club Live with Bob Goodwin

Author of "The Algorithm" - Hilke Schellmann - Career Club Live PART 2

March 19, 2024 · Bob Goodwin (Career Club)

Part 2 of 2

In this episode of Career Club Live, Bob Goodwin interviews Emmy Award-winning journalist Hilke Schellmann about her new book "The Algorithm" and how AI is transforming hiring and the workplace. They discuss how AI is used in applicant tracking systems, resume screening, and video interviews, as well as the potential for bias. Hilke also shares insights into how companies are using AI to monitor employees and what regulations may be needed to ensure fairness and transparency. This thought-provoking conversation provides valuable perspectives on both the opportunities and risks of AI in our working lives.

Bob Goodwin:

I know you're going to find it. You've got to keep at it. So let's move past the candidate side. Now say I work at Company X, which is using AI in all kinds of ways I didn't even dream a company might. What are some of the things going on behind the scenes that people probably don't know about?

Hilke Schellmann:

Yeah. So in hiring we see the one-way video interviews that can be analyzed by AI. We've seen gamified assessments, where you play a sort of game that's supposed to reveal your personality, and it's usually calibrated on people already in the job playing the game. So if the current accountants all turn out to be risk-takers, and I'm a job seeker who also likes risky behavior, I might land on the yes pile. But there are a lot of questions about that: maybe I'm a maverick when I'm playing video games, but that doesn't mean it carries over to real life on the job.
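
A minimal sketch of the incumbent-calibration idea Hilke describes, with invented trait names and numbers (not any vendor's actual method): the candidate is scored by similarity to the average gameplay profile of current employees, which is exactly how a "maverick" gameplay style can land someone on the yes pile regardless of real-world job performance.

    # Hypothetical sketch: a gamified assessment calibrated on incumbents.
    # All trait names and values are invented for illustration.
    import numpy as np

    # Gameplay-derived trait vectors: [risk_taking, speed, persistence]
    incumbent_accountants = np.array([
        [0.8, 0.6, 0.7],
        [0.9, 0.5, 0.6],
        [0.7, 0.7, 0.8],
    ])
    benchmark = incumbent_accountants.mean(axis=0)  # "what our people look like"

    def score(candidate: np.ndarray) -> float:
        """Cosine similarity to the incumbent benchmark: higher = yes pile."""
        return float(candidate @ benchmark
                     / (np.linalg.norm(candidate) * np.linalg.norm(benchmark)))

    risky_gamer = np.array([0.9, 0.6, 0.7])         # plays like the incumbents
    cautious_candidate = np.array([0.2, 0.6, 0.9])  # may be great at the job
    print(score(risky_gamer), score(cautious_candidate))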

Hilke Schellmann:

But what we also see at work: eight out of the ten largest companies in the US surveil their workers. So we see keystroke logging that records everything you type, everything you put into shared apps. We see sentiment analysis of Slack messages, emails, Zoom calls, right. All of that can generate transcripts or text that can then be analyzed, and they can look for certain signals of bullying, toxic work environments, who speaks the most, who's a bully in Zoom meetings.

Hilke Schellmann:

Maybe they tie it to non-compliance or inappropriate behavior, and some tools that I looked at even said they could find instances of self-harm in social media. So some companies also do what's called continuous employee background checking: they just continuously scan your social media and make these inferences. We don't know a whole lot about how good it is, and AI usually isn't very good yet with sarcasm or humor, and often doesn't know the context. We've seen that with the social media scans on folks who have applied for a role, because that lawfully becomes part of an official background check.

Hilke Schellmann:

So candidates can actually get access to the data on them, because once it's part of an official background check, you can ask the employer or the potential employer for the data. Some people have done that, and when they looked at it they found that maybe they had a tweet that mentioned alcohol, and that was suddenly a red flag: alcohol, bad, right? Or they quoted a song lyric, and that was interpreted as somebody who might be suicidal or prone to self-harm, when we don't actually know that. But the problem is these signals keep coming up, and we see companies use these very vast, broad AI recording and checking tools because they see employees as potential leakers, as a threat to their business. So we've only seen the very, very beginning of this.
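
A minimal sketch of the naive keyword flagging Hilke describes (the word list and posts are invented): a matcher with no notion of context treats a quoted song lyric or a mention of a beer festival the same as a genuine warning sign.

    # Hypothetical sketch: keyword-based social media "red flag" scanning.
    # The flagger has no understanding of sarcasm, quotation, or context.
    RED_FLAG_TERMS = {
        "alcohol": "substance use",
        "beer": "substance use",
        "want to die": "possible self-harm",
    }

    def flag(post: str) -> list[str]:
        text = post.lower()
        return [label for term, label in RED_FLAG_TERMS.items() if term in text]

    posts = [
        "Great beer festival this weekend!",          # a hobby, not a problem
        '"I just want to die in your arms tonight"',  # a quoted song lyric
    ]
    for post in posts:
        print(flag(post), "<-", post)  # both get flagged despite being benign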

Bob Goodwin:

Well, okay, so gosh, you kind of alluded to this without saying it. On the continuous background check thing: background checks are covered by the Fair Credit Reporting Act, which is what requires the transparency. But you talked earlier about regulation and some kind of oversight of all this stuff, and it seems like a "fair information reporting act," you know, would be appropriate. Like, I get it if you suspect somebody of embezzling, that's a different topic, as an example, or receiving bribes, or something that's just patently illegal. But maybe we're using this to create the layoff list, right?

Hilke Schellmann:

Yeah, once you start pulling this data together, maybe for one purpose, it's also easy to use it in another way. I talked to an analyst at Gartner who talks to a lot of employers, and he shared that, you know, this was pre-pandemic, an employer wanted to promote people, and the way they did that was checking key card entries: who was longest at their desk or in the office. That's already kind of problematic, right, because we all know I can sit at my desk for ten hours and not actually be successful at my job. I don't know if that's a good indicator of performance, but they used it.

Bob Goodwin:

Some people work at home and seem to do okay. I don't know.

Hilke Schellmann:

Yeah, and it's not about checking the hours that you work. We usually suggest that you check the results, right, because somebody can sit there for ten hours and do nothing.

Hilke Schellmann:

But the problem came when, in the pandemic, the company had to do layoffs. They wanted to look at that data because they felt, well, we know who the most productive employees are, and the unproductive ones are the ones with the longer absentee times. Which, of course, we know from the pandemic: people have families, they have caregiving obligations, and that often has nothing to do with whether they're committed and productive employees. That data wouldn't take any of it into consideration, right? So this becomes really problematic really quickly. We also see flight-risk measurements at work that purport to predict who's going to leave in the next year by looking at certain signals: are you updating your LinkedIn very often, do you move data around, do you put USB sticks in your computer, do you print a lot? And that's usually based on the behavior of past employees. So we don't actually know exactly.

Bob Goodwin:

Or just a hypothesis.

Hilke Schellmann:

Yes, it's really just a prediction. It's often not actually accurate that people leave within a year. The question is, what does an employer then do with that information? Maybe they're not so thrilled about you and they just feel, okay, the person may leave, and that's fine. But if an HR manager knows that, are they going to give you a raise while other people won't get one, just because there's an indication you might leave? We don't even know if it's true; it's just a prediction. Or if there's leadership training, maybe they're not going to put you forward because you may be leaving the company.

Hilke Schellmann:

Once you have that information, it's incredibly difficult not to make decisions based on it. Even when I did some of these tests of these tools myself, even though I knew, okay, there's literally no science in facial expression analysis, when you see that number it just feels like objective math, and it's really hard to ignore. I think that's actually hard for companies too. They see, oh, this person was flagged as a flight risk, but we don't actually know if that means anything. It's hard for HR managers, or hiring managers, or people in charge, to just ignore that information.
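
A minimal sketch of the flight-risk scoring described above (all signal names and data are invented): a model trained on past employees' behavior emits a percentage that then reads as "objective math," even though it is only a pattern match.

    # Hypothetical sketch: a "flight risk" score trained on past employees.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented signals: LinkedIn updates/month, USB insertions/month,
    # pages printed/week, for 200 former employees.
    past_behavior = rng.poisson(lam=[2.0, 1.0, 30.0], size=(200, 3)).astype(float)
    left_within_a_year = rng.integers(0, 2, size=200)  # noisy historical labels

    model = LogisticRegression().fit(past_behavior, left_within_a_year)

    current_employee = np.array([[8.0, 4.0, 90.0]])  # unusually "active" signals
    risk = model.predict_proba(current_employee)[0, 1]
    print(f"Predicted flight risk: {risk:.0%}")  # reads like objective math,
    # but it is only a correlation with past employees' behavior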

Bob Goodwin:

Based on any discussions you've had with policymakers, or other people that might influence policy, do you think a day is coming when companies will basically be forced to provide more transparency into what they're doing in terms of data collection on employees?

Hilke Schellmann:

Yeah, I mean, we hope so. I think there should be a little more transparency. We see a little bit of that in the European Union. They have a new AI Act in the works, and AI in hiring is actually classified as a high-risk or high-stakes use, which triggers a bunch of safeguards that you have to follow as a company or an organization when you use it, and I think that's actually really smart. We haven't really seen that in the US at all. But my hope is, sometimes you have this effect: some states in the United States, for example California, have a very high regulatory burden for car emissions, and not every company is going to build 50 different cars for 50 different states.

Hilke Schellmann:

They build one car that passes the stricter emission standards in California and sell it in the other 49 states. So we hope maybe the same will happen with some of this regulation: if one state or one city makes it harder and requires some of these safeguards, the off-the-shelf tools will follow that one regulation, and then we may have a little more insight in other jurisdictions too. But we don't know that, and there isn't a whole lot of appetite in the US to genuinely regulate. We see a little bit, with Biden pushing into this, and the National Labor Relations Board has come out saying that sort of broad surveillance of workers could be really problematic, because workers in the United States have the right to organize and form a union, or at least to have organizing communication, and those could easily be swept up in some of this broad surveillance. So we see ideas for how to regulate it. Have we seen any actual regulation? No, but more and more, at least, people see the problem, right?

Hilke Schellmann:

And that's what I felt with the book: I wanted to show how AI is already being used, in the good ways and the bad ways, because a lot of people don't know that. So now we can act, and now we can build regulation based on that, regulation that is actually specific to how we use these tools. Because the problem is, broad regulation written without understanding AI is not really helpful. We have to know how it's being used and where it's going, what we know, and then we can build guardrails for that, and not just one broad, big regulation. That's my take on it.

Bob Goodwin:

Let's do one more topic and then we'll start to put a bow on this. As a worker, and this is very broad: should I be freaked out that AI is going to take my job?

Hilke Schellmann:

I mean, look, the AI that's mostly used for monitoring and surveillance is not going to take your job, because it's predictive AI, right, it looks at and analyzes signals inside the company. I think we see a little bit that maybe AI is going to become kind of your manager or your boss, in a way, checking: do you send enough emails compared to other workers? There are people who are very successful at X, and we can check their behavior, so maybe the AI will remind you: hey, you should send as many emails as that person. Have you talked to Tracy recently? She's really successful. So we can see some sort of nudges and reports.

Hilke Schellmann:

And then there's generative AI, which I think will have a profound impact on a lot of our jobs. It will take away a little of the mundane, everyday stuff that we all hate to do, and my hope is that jobs will then evolve, that I get to do more of the creative work that AI is not going to be able to do. So my hope is that our jobs will be better. I'm sure some jobs, kind of like the elevator attendants who had to push the buttons maybe 80 or 100 years ago, will go by the wayside, but hopefully those people find new jobs, and hopefully that will happen for us as well. But that is ways out. That is not tomorrow.

Bob Goodwin:

And you've talked about the eight out of ten biggest companies a couple of times. I don't know what the right scale is, but say one is not even paying attention and ten is fully deployed. Where do you think we are on an AI-utilization scale right now?

Hilke Schellmann:

I think it's deployed in hiring a lot, because it's a pretty easy use case, right? You have a lot of people applying, you need a technological solution, and you don't necessarily want humans reading the resumes because they have all this unconscious bias. It feels like, okay, this is a good use case. I'm not saying it actually is; I think the deployment is the problem.

Bob Goodwin:

It's a good use case.

Hilke Schellmann:

Right, and the deployment is what we have to work on: what tools we actually build and use here. But we see a pretty high penetration there. For the surveillance, I wish I had better numbers, but we don't know, and we don't know because companies don't have to tell their employees most of the time. And case law has been in their favor: whatever happens on a work computer in the United States, there's usually no expectation of privacy. It belongs to the employer, and they can do whatever they want.

Hilke Schellmann:

And I think this is another forced-consent sort of point. I remember when I started my first job, maybe on the first day, when you log into their Gmail or whatever infrastructure, maybe there's a little note: we might monitor this space, and you have to consent to that. First of all, maybe you never read it, and you may forget about it. And are you really going to say no and walk off the job on the first day?

Hilke Schellmann:

So some folks are very diligent and use a second computer at all times. They never use the computer issued by the employer to send even personal emails; they don't want their personal email on that machine because it can all be sucked in. We don't know exactly where the limits are. Some people feel very strongly that they don't want any apps from their employer on their phone, because those could vacuum up other data, and we don't know the limits. But we also see some companies now pushing into wellness and health care. There's one that I looked at that seems very benign and wants to personalize benefits. But when I listened to the demo, they were like, oh, here's Aiden, and this is a synthetic example, but they said: we see from his benefits data that he dropped his spouse from his medical plan, so the computer will infer that this person is now going through a divorce and suggest therapists to him. Here are some therapists in your network, you should call them. It seems very benevolent, and maybe it will help Aiden, who knows.

Hilke Schellmann:

But I also feel like, for a lot of employees, it's wait a second: why does a third-party company have access to this very personal benefits data, and why can it make inferences on that and know all of this? I think we will see more and more of that, and there is no law requiring transparency here. So the transparency will come from, maybe, journalists like me looking at what these companies do and who they're working with, or maybe from individual people who are very smart and can track this kind of stuff because they have a computer science background. But we don't know a whole lot. I'll tell you what's actually kind of helpful, though.

Hilke Schellmann:

I watched a whole lot of webinars from software companies, the videos they make for the people who buy these tools for companies. They're very IT-heavy and jargony and technical, so nobody ever watches them. I watched them, not all, but a lot of them, and they talk very openly about the capabilities of these tools: that they can find all kinds of signals, and that the signals then lead to a report that maybe goes to IT, maybe goes to HR, and starts an investigation. So we know this is often built into these tools by very large tech companies. Do I know which companies turn them on? I don't. So if people have tips on that, I would love to know more and talk about this more, because I think we will see it more and more.

Bob Goodwin:

One last thing, because it was in the news. I saw it today, and I'll say, anyway, today is February the 21st, there was a deepfake. And Sora is out now, right?

Hilke Schellmann:

Yeah, yeah, you can build anything in video now.

Bob Goodwin:

Yes, yeah. So it was somebody in the finance department of, apparently, not a small company, and the deepfake was of executives at this company, via video, and it sounded like it was interactive, directing the person to wire $25 million to some bank account.

Hilke Schellmann:

Yeah.

Bob Goodwin:

It looked like the executives, sounded like the executives, even mimicked their office backgrounds. It was very believable, apparently, and this person actually executed the wire, $25 million. I know, I know.

Hilke Schellmann:

We've seen this before with audio deepfakes, right? Somebody got a call from what sounded like the CEO, because their audio is often out there and you can train a tool on it, and now you can do that with video. So there's a question about the authenticity of factual content, right? But it also speaks to what I've done. A couple of years ago I tricked some of these one-way video interviews: I wasn't actually on screen, I was sitting next to the screen typing in my answers, and an early deepfake said the words I was typing. So I wasn't actually speaking, and I still got results. None of the vendors that deployed these tools had any kind of security protocol. We see this in facial recognition technology: at least there you have to move or something, to indicate that you're a moving, breathing human. You can't just hold up a photo to open a door, most of the time. So there's a whole lack of security processes in any of the ways this works.

Hilke Schellmann:

The FBI put out a notice six months or a year ago saying, hey, employers really need to be aware of this. There's candidate-swapping in interviews, but it could also be a real hacking problem for companies, because if you employ somebody remote who had somebody else stand in, or who used a deepfake to do their employment videos, they may get access to your computer systems on day one. So it's not only that somebody lied and the wrong employee shows up, and maybe they're not as good a coder as you thought they'd be. You might actually give them access to very sensitive data, and they start vacuuming it out. So there's a whole new world there that we haven't fully figured out, either as employers or as employees and job seekers.

Bob Goodwin:

Yeah, it's weird, Hilke, because it sounds like science fiction, right? It sounds like you're watching a movie, and the title is, you know, 2205, and you're like, well, that's a long time from now. But no, this is kind of happening right now, and it seems so bizarre that it can't be real. But no, it is happening.

Hilke Schellmann:

You know, it sometimes feels very Orwellian and dark, but the reality is a lot of these tools are not very good. The problem is they still make decisions, and in this case they also make flawed decisions, which gets even more problematic. We don't know a lot about these tools. In science fiction, the tools are always right, and they create this over-the-top surveillance. Yes, you may be surveilled, but who knows if the sentiment analysis is any good? The problem is, if those kinds of analyses are then used for employment decisions, that can be really difficult. We've seen from surveys that company leaders do want to use results from AI tools, among other inputs, in layoff decisions. If you have no data, or one data point that comes from a maybe-flawed productivity algorithm, some companies may want to use that anyway.

Bob Goodwin:

Well, in fact, they would even call it a data-driven decision, right? So, Hilke, this is a fascinating topic. I could keep you on this call for another two hours, easy. Is there anything we didn't talk about, quickly, before I let you go?

Hilke Schellmann:

No, I think we've covered a whole lot, and I want to encourage people to understand the world we live in so we can act on it. Now is the time to act, and maybe push back a little and demand that there are some guardrails, that this is more transparent, that there is explainable AI for the things we know can be used for high-stakes decision-making. Maybe there should be an appeals process, kind of like we have with our credit score. I have the right to check once a year what my credit score is, I roughly know the criteria behind why my score is what it is, and I can appeal it and say, actually, this is the wrong Hilke, that's not me. I'm sure that system is full of flaws, but there is a system.

Hilke Schellmann:

For some of this hiring stuff, we don't know enough, we don't actually have a right to know, and we absolutely cannot push back, except when it comes to your background screening and social media screening, if it's part of an official background check process. But if a company monitors your social media once you're employed, maybe you said yes to that, or I don't even know if they have to get consent, because some of what you post on social media is technically public. It's not private data; they're not taking your Facebook password and looking at that. Whatever you post on X and LinkedIn is usually publicly available. But do these tools work? I've examined tools that try to infer personality traits from your social media, and I can authoritatively say they do not work. I tested them on myself and others, and we built a larger sample at NYU with a computer scientist and a sociologist to test more people, and they do not work. And it's kind of hard to know that some companies use these tools.

Bob Goodwin:

So, as we wrap this up, I want to encourage people to get Hilke's book, The Algorithm. We're going to put a link in post-production for how to go buy the book. Also, follow her on LinkedIn, because you write about this and post on this pretty frequently. Yes, yes.

Hilke Schellmann:

And LinkedIn is a perfect place to talk about this, right? So many job seekers, HR managers, people who care about the world of work congregate there. I've more or less abandoned Twitter, as many journalists have, for obvious reasons, but I also feel this conversation about the future of our work needs to happen on LinkedIn. So I encourage people to get in touch with me, to share their experiences, or if they have any comments or feedback, or if they think I missed big things. I do want to know about that.

Bob Goodwin:

What I love about the book is you're driving awareness, you're starting to pull the shroud back a little bit, so it's not this unfathomable thing that only somebody in Mountain View understands. I think it's great because you lay it out in a way that real people can understand.

Bob Goodwin:

And you're an Emmy Award-winning investigative journalist, so it's not just one person's rant of "I hate AI" or "AI, yay."

Hilke Schellmann:

I mean, in fact, I started it because I think AI is a transformative technology, and I wanted to understand it. And maybe that's the good thing about journalism: I found people who have been affected by these technologies in profound ways. Talking to real people and understanding what is happening to them has really been eye-opening for me. It let me pull out some of the science behind it, which I think we should talk about, but at digestible levels, while also understanding how it affects people.

Hilke Schellmann:

It's not just AI philosophy or theoretical examples. No, there are hundreds of thousands or millions of people who are looking for jobs, and they have to apply to hundreds of jobs. And when in doubt, if you have done the things that leading folks in the space recommend, made your resume machine-readable, yada yada yada, it's not you. We know from the data that it often takes hundreds, sometimes thousands, of applications to find a job. It's not you. It's probably that tool in the middle between employers and employees, and who knows what it's acting on, but at some point it's just a numbers game.

Bob Goodwin:

I really appreciate that last point, because it gets into my "why" on Career Club: the emotional and psychological toll the search takes, and how much people internalize it, "it must be me, I must be the problem." Thank you for saying it's probably not you. Actually, it's the process. The system isn't efficient at all, and it drives all this inefficiency that comes through as rejection.

Bob Goodwin:

It's not really you, it's just a messed-up system that, hopefully, to your point, because we want to be optimistic and hopeful, technology can ultimately help solve. But right now, as a book like yours does such a great job of illuminating, we're not all the way to bright yet, that's for sure.

Hilke Schellmann:

Yeah, yeah. I think we rushed to digitize processes that were actually not really working to begin with.

Hilke Schellmann:

So let's actually change all of this. What are ways to hire that are not built on old processes that we just digitize or bolt AI onto? How can we actually improve this whole system?

Hilke Schellmann:

So I think there's a lot to be done, and it doesn't always have to be AI tools. In fact, some of the more traditional tools, like a regression analysis that uses just a few variables versus everything that's on a resume, are often actually just as predictive. And you don't have the model keying on first names or locations or hobbies, like baseball or your school, things that say more about your socioeconomic status and your background than about whether you're qualified for the job. And I know HR folks don't want to use that either. They want tools that actually find people based on merit, on their skills and their experience, and not on their background and who our parents are or where we come from. So I think we all have to push into this space and make it better.
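
A rough illustration of the few-variables regression point (data and feature choices invented for the sketch): a simple model fit only on job-related inputs, deliberately leaving out proxies like names, zip codes, schools, or hobbies.

    # Hypothetical sketch: predicting on-the-job success from a few
    # job-related variables only; proxy features are deliberately excluded.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Invented training data for 500 past hires:
    years_experience = rng.uniform(0, 15, 500)
    skills_test_score = rng.uniform(0, 100, 500)
    X = np.column_stack([years_experience, skills_test_score])

    # Invented outcome: whether the hire was later rated successful.
    y = (0.1 * years_experience + 0.02 * skills_test_score
         + rng.normal(0, 0.5, 500) > 1.5).astype(int)

    model = LogisticRegression().fit(X, y)
    candidate = np.array([[6.0, 82.0]])  # 6 years' experience, test score 82
    print(f"Predicted success: {model.predict_proba(candidate)[0, 1]:.0%}")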

Bob Goodwin:

Well, you're doing a very big piece of making it better, so thank you for that, and thank you for taking a few minutes today. Everybody, thank you all so much for listening today. Again, please go buy The Algorithm by Hilke Schellmann, follow her on LinkedIn and comment, as she said, and also please check out the resources we have for both employers and job seekers. We're here to help you by bringing you high-quality content with award-winning people like Hilke. So thank you again so much. Hope everybody has a great day, and I'll see you soon. Okay, thank you.

Chapter Markers:

AI's Impact on Employee Monitoring
Regulating AI Use and Impact
Privacy and Security in the Workplace
Encouraging Awareness of AI in Work