Sourcing School by RecruitingDaily

Reshaping Workplace Safety Using Artificial Intelligence with Micole and Brendten of Fama

October 27, 2023 Brian Fink, Ryan Leary, and Shally Steckerl

Imagine having the power to expose gaps in the workplace misconduct screening process and make your company a safer and more inclusive place. That's exactly what you'll be able to do after this engaging chat with Micole Garatti and Brendten Eickstaedt of Fama. We dig deep into how Fama is revolutionizing the background screening process, using AI to track misconduct indicators that often slip through the cracks of the criminal justice system. Understand the importance of pinpointing a vendor's definition of AI, the potential risks of mislabeling, and the intricacy of using markers in this process.

Delve into a robust conversation that dissects the complexities of AI regulation, keeping the recent New York case at the forefront.  Strap in for a stimulating discussion that promises to equip you with the knowledge to navigate the ever-evolving landscape of AI and workplace safety.

Special mini series recorded with Oleeo at HR Tech 2023 with hosts Ryan Leary, Brian Fink, and Shally Steckerl



Speaker 1:

We are back at the HR Tech Expo on the conference room floor, live at the Oleeo booth. As Ryan would say, we are powered by Oleeo. Today it's the second day. It's nearing, well, it's past lunch, so things are starting to slow down a little bit. However, it might be getting a little more fun now, since some of the more, let's just say, serious people have already made their way to the airport. So here we are. As our guests we have Fama, which is Latin for fame, also Spanish for fame. I have Micole Garatti and, how about you try saying it, Brendten Eickstaedt. Okay, I'm the chief technology officer of Fama, and Micole is the director of product marketing. Not the director of product, but the director of product marketing. And what is Fama?

Speaker 3:

Yeah, so we do online screening for misconduct, and we do that because background screening technology is the oldest technology in the market.

Speaker 1:

Background screening yeah.

Speaker 3:

It's the oldest technology in the market and it's gotten really, really good at doing checks quickly and efficiently, but it's also missing some key signals of misconduct, because most misconduct is never recorded in the criminal justice system. But just because something isn't recorded in the criminal justice system doesn't mean that it's appropriate for work. And so 95% of global background screening vendors actually partner with us to help close the gaps and make workplaces safer and more inclusive, and improve quality of hire and retention.

Speaker 1:

Okay, so it is a technology that covers the gap between what you are supposed to legally check for and what might make a difference, that you might be missing.

Speaker 3:

Right. For example, most harassment is never recorded as a crime.

Speaker 1:

Right, yeah, a lot of it.

Speaker 3:

Unfortunately, most intolerance is never recorded as a crime, and even most violence is never recorded as a crime. But that doesn't mean we want employees coming to work and being violent toward others.

Speaker 1:

Being borderline violent but not violent enough to have gotten caught Right Right or threats.

Speaker 3:

Like, most people making threats are not going to jail for it or being prosecuted. Right, and so...

Speaker 1:

Bigotry and otherwise nefarious behavior.

Speaker 3:

Right.

Speaker 1:

Racism and things that. Yeah, wow, that's kind of heavy. I'm depressed now.

Speaker 2:

Sorry about that.

Speaker 1:

Please don't run my name.

Speaker 3:

The good thing is that we help prevent companies from hiring workers that are doing that. Right, right. And so it's more of, like, the absence of those things.

Speaker 1:

But the companies aren't hiring you. They're hiring some kind of background check company and you are their power for the most part on the online screening front. Right, oh, okay, so what other front is there?

Speaker 3:

Well, I mean, there's identity checks and verification.

Speaker 1:

Oh, okay.

Speaker 3:

You know what I mean.

Speaker 1:

There are other components that we don't do, but we complete the... Yeah, gotcha. So there's also the case where somebody has to run to the local office or whatever.

Speaker 3:

Yeah, that's not happening anymore, hopefully.

Speaker 1:

Yeah, I've heard it has.

Speaker 3:

Still.

Speaker 1:

Yeah, because there are some jurisdictions that don't have their records online. So yeah, which really delays the process, because, as you said, a lot of the vendors nowadays offer 24-hour turnaround.

Speaker 3:

Right.

Speaker 1:

Except for when you have to literally send a runner to the local office, and they have to hire a PI that has the license to go in and say, hey, no, we really got this person's permission to run their background check. We're not just here snooping around.

Speaker 3:

Yeah, the most running we do is from a treadmill desk while we, like... Basically, yeah.

Speaker 1:

I want a treadmill desk, all right. So HR technology, have you had a chance to kind of look around and see what's going on around?

Speaker 2:

here. Oh, yeah, a little bit.

Speaker 1:

Yeah, when you walk around, what do you see that is, in your opinion, really truly innovation in technology? Have you seen anything that's like, wow, that's new and I like it and it's...

Speaker 3:

That's really interesting.

Speaker 2:

I think from my perspective I've seen a few things that I think are innovative, but I'm distinctly interested in helping companies find and hire good people.

Speaker 1:

And so that's partly what we do Find and hire okay.

Speaker 2:

So looking at other companies that are doing similar things not exactly the way we're doing them, but different components of really finding quality people, quality of hire so I'm seeing some technologies out there that are doing that and I think that's really interesting.

Speaker 1:

And that's good. Okay, what have you found is the kind of I don't know the common denominator across what you're seeing here today.

Speaker 2:

I mean, everyone's talking about.

Speaker 3:

AI right.

Speaker 1:

So what does that really mean?

Speaker 2:

I mean it could be as simple as just a bunch of if then statements.

Speaker 1:

Right, that's a decision tree. That's how I feel about it.

Speaker 2:

Or it can be as complex as LLMs and things like that, where they really truly are sort of quasi-thinking on their own. We're seeing, I see, all kinds of things going on on the floor here that run the gamut of that.

Speaker 1:

I wish, yeah, if I could tell every attendee not the vendors, but every attendee to do one thing. It would be to actually ask the question what exactly is AI to this company?

Speaker 1:

Because it's just kind of a blanket term, and it's really not a common denominator. It's just a common word, but it could be completely different things that they're just calling it, and it used to be other things. Semantic search was huge a few years back and essentially, in some ways, this is also that. You need an ontology for some of the models to work, or, if you don't have one, you trained it, and when you trained it, you created an ad hoc ontology. The CTO is nodding his head, so I must have nailed that one. In the industry that you're in, part of what Fama does is look for indicators, in a way, so there must be some markers that become these indicators, right? Is there an AI application there? Or is that too dangerous, because the AI might, like, mislabel or misinterpret the indicators?

Speaker 2:

So I think there is an AI component there and we do use it.

Speaker 1:

The categorization of it.

Speaker 2:

Yeah absolutely to categorize and classify the content, and a lot of it is to weed out the vast majority of things that aren't problematic.

Speaker 1:

Which, let's face it, is hopefully most of it. The majority should just be regular, non-risk content.

Speaker 2:

Absolutely.

Speaker 1:

I mean, we've filtered out 95-plus percent of everything, because you're just looking for the risks, not the non-risks. Exactly. And then once you get into the classification of things that are problematic, that's when you start talking about...

Speaker 2:

what are you concerned about in terms of bias in the AI or things like that, and to me, a lot of it comes down to the fact that there's still a person involved.

Speaker 1:

There's a fact checker.

Speaker 2:

There's a fact checker. Yeah, I was going to ask you about that.

Speaker 1:

Yeah, yeah. So you're using, let's call it AI, but really you're using the technology, the large data sets, and the ability for machines to do something that they can do very well that humans can't, which is to process a lot of information. You're using that as an assistive technology. It's helping bring to the surface the things that you should look at, and then someone looks at that and makes a decision, because that's not quite where AI is yet. It's not decision-capable yet. Our AI does not make decisions, doesn't decide.

Speaker 2:

Right, it really just says this is something you might want to look at.

Speaker 1:

And that is a huge factor with the New York case.

Speaker 3:

Right, yes.

Speaker 1:

And what has transpired there is probably going to continue to have repercussions throughout the whole ecosystem, because, just generally speaking, it's hard to get legislation through, and governing bodies tend to be very, I don't want to say lazy, but if something has already been done, they tend to sort of carry that on. Right. So they're going to take that case and move it forward and apply it in other ways. Exactly. And the real essence, in my experience, of that case, correct me if I'm wrong, is essentially letting the automation make the decision. Absolutely. That's exactly what it is.

Speaker 1:

Which is what you're not supposed to do. Correct. So that regulation I'm glad for. Yes, yeah. But then if they over-regulate, they're going to make it so that you cannot apply the technology. So it will be regulated, or illegal, or prohibited, to actually use the assistive technology for the things that it's good for. Exactly. So that's the other side: if we over-legislate the protections, totally, yeah, and don't take advantage of the machine's power.

Speaker 2:

I love the idea of it being an assistive technology.

Speaker 1:

Yeah.

Speaker 3:

That's really what it is.

Speaker 1:

That's what it is.

Speaker 2:

That's how we should be using it, because there's just nuances of human judgment that you can't encode yet. Maybe someday we will be able to. Yeah, I don't know.

Speaker 1:

I mean, I have a background in non-verbal communication and intercultural communication, and I've just always been a huge proponent, or advocate, of the fact that you have differences, and not just nuances but outright differences, in semiotics and the meaning of things, and that you just can't... A machine is not going to be able to determine something that's new, because it hasn't had experience with it. Correct. Whereas humans can make that logical leap. In the future, AI that really is able to create cognition may be able to, but right now we don't have anything even remotely close to that. Exactly right.

Speaker 1:

What about the case of mistaken identity? Michael Smith, right. How is AI handling that?

Speaker 2:

Or not? Is it poorly handled? Again, it's assistive. So we do have AI that helps us identify the people, using, basically, algorithms that triangulate based on data about a person their name, their address, their general location those types of things.

Speaker 2:

So it helps us weed in and weed out again, but it ultimately comes down to a person again making the decision about: is this actually the person that we're looking at, or is this a different person? Because we're Fair Credit Reporting Act compliant, we have to have at least three markers for each profile that we identify. So, you know, is one of them visual? Yes, it can be.

Speaker 1:

It absolutely can be, so we can look at a picture. We can look at a picture Address.

Speaker 2:

Address phone number email picture.

Speaker 1:

Yes, correct that correlate.

Speaker 2:

In order to say this is that person, against the data we already know is true about them. Got it, yeah.

Speaker 1:

Do you need their permission?

Speaker 2:

We do yes.

Speaker 1:

For this consent-based.

Speaker 2:

Our case is consent-based. Again, back to the Fair Credit Reporting Act stuff, we do have to have consent. Absolutely.

Speaker 1:

What about Americans with Disabilities Act?

Speaker 2:

That's a good question. In what way?

Speaker 1:

Well, utilizing sort of the aggregation of information may potentially and you may not pass this to the employer, which would protect me from that, but may potentially reveal that perhaps I am diabetic.

Speaker 2:

Yes, I see what you're saying. So, yeah, we weed that out. That's part of how we help companies stay compliant, because instead of them going on... exactly.

Speaker 1:

No, that's right.

Speaker 2:

Instead of them going on and looking at it themselves and seeing protected class information.

Speaker 1:

We actually... look, Shally's wearing a Jewish star. Exactly.

Speaker 2:

We actually filter that out. We only present the things that are relevant, and so if it's relevant to misconduct, misconduct behavior, or whatever, if it has anything to do with... Acts, things they've done. Yeah, exactly, exactly. So we're never going to show their religion. We're never going to show that type of thing.

Speaker 1:

And what about weed? It's kind of a thing, right? It gets a pass.

Speaker 2:

We do flag for cannabis, but only if the company wants to know, if the company requires it. Yeah, yeah, right. And, you know, in places like California it doesn't matter, right?

Speaker 1:

Some companies don't ask and won't, and won't care, exactly. And then there's the conflict of, like, well, in the state in which they reside it's legal for recreational use, and in the state where the company is the employer of record it isn't. So, you know, which one has jurisdiction? And so that's why some companies simply say, we're not going to ask, because it's getting cloudy. Yeah. But then there's also, before all of this, the fact that there might be a picture of what could potentially be considered misconduct, in that I'm, like, smoking a doobie at a party, and that may be a lack of judgment, you know, poor judgment, but it's also medical.

Speaker 1:

So you've got the, like, well, you know, how do you know Shally doesn't have a, whatever, license? You know, right, right. So that's where the human comes in and goes...

Speaker 2:

Okay, this is not relevant, because... exactly. And it's not only our humans at Fama that are reviewing it, you know, because we don't score or say this person is good or bad. We just surface the information. Okay, the hiring team can then use that information to make a decision. Got it.

Speaker 1:

So it's like, hey, you need to look at this and determine... Okay, you talk to Shally and say, hey, Shally, do you...

Speaker 2:

you know, are you a medical marijuana patient?

Speaker 1:

I would be in a business again, but maybe it is certain yeah right.

Speaker 1:

So that's interesting. So that's an application... in my opinion, that's an application of technology that people haven't really talked a lot about, right? It's this... I don't... like a classifier; classification is not a good example. Aggregation, aggregation of information, right. When you look at the models like ChatGPT, for example, there's a lot of information that was fed into it that is very non-homogeneous. It's law, medicine, you know, books, fiction, nonfiction, whatever. But when you look at the kind of information you guys are looking at, it's homogeneous in that it's all social content. So you can more tightly define the railings that you can operate in, and I don't see enough of that. Have you seen other organizations out there that are really tightly defining the rails? I don't see enough of that. I see a lot of generally talking about AI, and it seems like a lot of it is just connecting to ChatGPT or...

Speaker 1:

whatever, right, and that's that non-homogeneous information. Do you see the other side a lot?

Speaker 2:

Not really. No, not so much, I think. I think it's something we're going to see. You know, obviously a general model of intelligence is something everyone's going after. Yeah. And it has applications, absolutely. But I think that, you know, the application of more specific models that are much simpler than LLMs...

Speaker 1:

But that do a much simpler job very, very well for a particular domain.

Speaker 2:

You're going to see that continue, I think, and hopefully become more used, because I think it's more accurate in a lot of cases.

Speaker 1:

That's right, yeah, in particular for a particular use case. One of my mentors in the whole artificial intelligence space, and automation particularly, before it was called AI, you know, I mean before this whole new AI thing came out, so, let's just say, prior to ChatGPT, we used to call it automation, and I was really big on writing about it and using it for practical reasons. But the point is that I had this mentor who explained this in a way that really stuck with me. He said the robots, the machines, the program, the technology can do something very, very well, in a way that is faster, better, more accurate than a human could, if it sticks to that one thing. So, for example, he used the example of a robot hammering a nail. If you have a nail-hammering robot, that nail-hammering robot is going to hammer nails faster, more efficiently, more accurately than a human ever possibly could. But don't ask it to make you a cup of tea, right?

Speaker 1:

Right, exactly. And so that's what I'm seeing: when you need to hammer nails and use a nail-hammering robot, you've got an advantage. But when you need to create a compelling story and use a nail-hammering robot, now you're really just applying AI for the sake of applying it.

Speaker 2:

You're not taking advantage. Yeah, absolutely, absolutely. I think it's natural that everyone gravitates towards what's the newest, coolest thing, but there's plenty of AI out there, there is, yeah, that's been around for much longer than ChatGPT, that does really good, very specific stuff.

Speaker 1:

And so... We really don't talk about social media much anymore, because it's just media, like it's now everywhere, right. But when you guys are out there looking for evidence of behavior, social media is a big component of it, right? Because, let's face it, if I intentionally wrote an article on LinkedIn, I meant for that to be seen. I'm not going to be somehow leaking. You know, there have been other companies before. There was one called Crystal, you might have heard of it, sure, that was, like, interpreting all of your profile information, trying to make some sort of prediction about your personality and stuff like that. Yep. How close is this to that?

Speaker 2:

It's actually something that we're working on, and we're definitely looking to say, how do we get fit for a position, right? Right now...

Speaker 1:

It's flagging potential risk, correct. But there's also this finding potential matches, which is not the opposite, but it's another part of the spectrum. Now you're not, like, trying to protect someone from risk.

Speaker 2:

Now you're saying, let's find someone that actually might be a really good match based on this. Totally, and it's sort of the positive and negative being put together, in a way. Yeah, you can say, okay, this person isn't partaking in any misconduct, and their personality fits this; the conduct they are partaking in is a good match. Right, absolutely, yeah. So that's...

Speaker 1:

That's something we're working on. That's the future.

Speaker 2:

Yeah, okay. Probably in six months or so you'll see something like it.

Speaker 1:

Micole, what do you have to say about that? You've been very quiet.

Speaker 3:

Yeah, I've been. Just I've been enjoying the ride of this podcast and the turns.

Speaker 1:

Yeah, yeah, what he said yeah, no, I mean.

Speaker 3:

Brendten is always... he's one of the smartest people, but he's very... he doesn't talk a lot.

Speaker 1:

So me getting him to talk is a big win.

Speaker 2:

Absolutely, oh right. I don't think he's ever talked enough to lose his voice in his life.

Speaker 3:

So I've just been happy listening to him talk about AI and tech. That's what he loves and he's really, really, really good at. It definitely comes across, and I appreciate your perspective on it, especially what, you know...

Speaker 1:

We have that to look forward to. That's exciting, and it's a lot less boring than all the, like, general...

Speaker 3:

Yeah, to me it's...

Speaker 1:

It's got a real application, so, exactly. Well, thanks so much for being here today. We are live at the HR Tech Expo in the Oleeo booth with the folks from Fama. Thank you very much. Thank you.

Screening Technology for Workplace Misconduct
Challenges and Possibilities of AI Regulation
Podcast Discussion on AI and Tech