Offer Accepted

Scaling Values Interviews with Maggie Landers, Harvey

Ashby

How do you keep hiring quality high while the company scales globally at speed? 

In this episode, Shannon sits down with Maggie Landers, VP of Talent at Harvey, to walk through the values interview program that keeps a fast-growing AI company aligned. Maggie shares how Harvey turned three simple values (simplicity, decisiveness, and "Job's Not Finished") into a global, standardized interview that any people manager can run without adding friction to the process.

Discover how her team leveraged tools like ChatGPT and Ashby to compress the design of interview questions, scoring rubrics, and enablement materials from months to mere weeks. Crucially, this efficiency was achieved while prioritizing human judgment, active listening, and an excellent candidate experience.

Whether your team is hiring dozens or hundreds of roles, this conversation will help you simplify your values work, prove its impact, and scale it across regions and functions.

Key takeaways:

  • Simple values guide hiring: Focusing interviews on a few clear values improves cultural alignment without adding complexity.
  • Train managers as interviewers: Empowering all people managers to conduct values interviews ensures culture ownership and scalable hiring across all teams and regions.
  • Ship, learn, refine: Treat values interviews as an evolving product, quickly updating interview questions and rubrics with AI, informed by pilot-group feedback.

Timestamps: 

(00:00) Introduction

(00:24) Meet Maggie Landers

(02:08) Why Harvey scaled values interviews for quality

(04:39) Turning three simple values into a global hiring lens

(07:47) Building scoring rubrics that interviewers actually use

(10:29) Designing a low-risk pilot for values interviews

(13:33) Getting exec and board support for experimentation

(17:35) Standardization, simplicity, and training people leaders at scale

(22:59) Using AI to maintain values programs without burning out

(27:10) Why great interviewers are listeners, not checkbox operators

(30:00) Where to connect with Maggie


Maggie Landers (00:00):

I tend to correlate excellence with impact over perfection. As I mentioned, one of our Harvey values is Job's Not Finished, and so I actually don't think that the goal is to be perfect. I think that what we're trying to do is have a massive impact and do it really, really fast.


Shannon Ogborn (00:24):

Welcome to Offer Accepted, the podcast that elevates your recruiting game. I'm your host, Shannon Ogborn. Join us for conversations with talent leaders, executives, and more to uncover the secrets to building and leading successful talent acquisition teams. Gain valuable insights and actionable advice, from analyzing cutting-edge metrics to confidently claiming your seat at the table. Let's get started.


(00:50):

Hello and welcome to another episode of Offer Accepted. I'm Shannon Ogborn, your host, and this episode is brought to you by Ashby, the all-in-one recruiting platform empowering ambitious teams from seed to IPO and beyond. I am super excited to be here today with Maggie Landers, the Vice President of Talent at Harvey AI. She came on board at Harvey AI in July, just several months ago, to help scale the company from 350 employees to 500 now, and to over a thousand in the very near term. She's leading efforts to build high-performing teams and develop a really strong global talent strategy that supports Harvey's rapid growth in the US and internationally.


(01:27):

Before joining Harvey, Maggie spent more than eight years at Intercom, a company that lots of folks are familiar with, where she led the global talent function and helped grow the organization from 200 to 1,200, which is incredible growth. She brings deep expertise in scaling teams, which we're going to get into today, and in building talent programs that keep pace with ambitious company growth. Maggie, thank you so much for joining us today. I'm so excited


Maggie Landers (01:49):

To be here. Thank you for having me.


Shannon Ogborn (01:51):

Before we get into the how of what you did, what we're going to be talking about today is values interviews and scaling them. Let's first start with the importance of values and values interviews.


Maggie Landers (02:08):

Yes, absolutely. So I came into Harvey, and the task in front of me was really about accelerating the pace of hiring without lowering the bar, ensuring that we were bringing in folks who were as talented and as aligned to our ways of working as we had in the past. Harvey's a place where there's a lot of growth internally, so you have a lot of first-time managers and hiring managers, and it's also a place that, because we're scaling aggressively, was bringing a lot of new leaders in.


(02:43):

Ultimately, what we wanted to be able to do was streamline the process of evaluating whether or not somebody would be really successful at Harvey, and one of the ways we decided to do that was to make sure that we didn't just test technical and functional capabilities, but that we were also testing alignment to the way that we work and what we believe is important to us.


(03:07):

And so instead of leaving it to each team to build out their own culture contribution session, or allowing each region to dictate what things would look like, we felt that having a global, consistent, and aligned way of assessing would really help us not only move quickly, but also make sure that as we expand to new geos, which we're doing incredibly quickly, and bring in a bunch of new interviewers, we could maintain and potentially even increase the alignment and quality of the hires we had made to date.


Shannon Ogborn (03:42):

Alignment is really the name of the game here, because the more aligned everyone is, the more they're working towards the same mission and working in ways that are constructive with each other. Not every place of work is going to be the right place for somebody, and I think it means getting really clear on what that looks like, not just for people from the outside looking in, but for people internally too. It definitely exemplifies the sentiment of going slow to move fast, because you're taking a step back. There were a lot of different things that went into building this program, the first of which was codifying the values, which I'd imagine is a lot more difficult than it probably sounds. Tell me a little bit about how you went about codifying them and the process for designing the pilot that you all ran.


Maggie Landers (04:39):

Yeah, so the values were rolled out before I came on board. Three values: simplicity, decisiveness, and Job's Not Finished. Because the values had already been rolled out, the average employee could actually recite them easily. It's very much indoctrinated into the culture. People know what they are, they're living and breathing them, which I think sets a great foundation, because it's hard to assess when either the values don't resonate or when people aren't familiar with them.


Shannon Ogborn (05:10):

And that's a huge step that cannot be overstated.


Maggie Landers (05:13):

Exactly. I think if your values are overly aspirational, assessing against those values isn't really going to help you find the right talent, because if you're living in la-la-land of "here's how we operate and here's what our team values," but that's not actually what you're living and breathing, then you're going to have this dichotomy between the interview experience and what life is actually like when you come on board. But because the values had already been rolled out, what we did, and this is where AI has played such a big role in all of this, was leverage ChatGPT to create questions that we believed would help us assess alignment against those three values, for both individual contributors and for managers.


(06:01):

And we basically had this pool of questions that we ultimately piloted. We leveraged our 25 participants across different regions, functions, and teams to trial them, and ultimately learned which questions gave the best signal, which felt the most natural, and which felt overlapping. But we absolutely started with ChatGPT: what are great interview questions to assess whether somebody is aligned with the value of simplicity or decisiveness?


(06:35):

And what we were also able to do was create some probes around authenticity and ownership. So not only were we looking at whether you're aligned with this way of working, but can we get a sense that you're going to own your actions, whether they work out or not, and approach these conversations in an authentic way? That let us get the full picture of these candidates and whether or not we think they align.


Shannon Ogborn (07:11):

A hundred percent. One of the most difficult things with interviewing is scoring rubrics. You can have the best questions in the world and think, we are so freaking aligned on these questions, but you also need that scoring rubric to say what is very bad, what is not great, what is good, and what is "yes, this is what we want to see from candidates." How did you go about building those rubrics for the questions that you came up with?


Maggie Landers (07:47):

And again, this is where AI was super helpful, but it was the combination of leveraging AI, testing it, putting it out into the wild, getting feedback, and then putting that feedback back into the system.


(07:59):

We developed a scoring rubric with ChatGPT. It was like, hey, come up with a one-through-four rating and describe, for each of these values, what a one-through-four answer would look like. We ended up getting feedback from the participants that it was an overly complex scale; there wasn't enough difference between a three and a four, or a two and a three. There was a lot of feedback that we should move toward a one-through-three scale, and also that having that scale be "very aligned," "mostly aligned," or "misaligned" was actually going to be a lot more effective than the one-through-four rating. Participants also wanted not just descriptions of those rubrics but examples. So again, another very, very easy thing to plug in: for each question, provide an example of an answer that should fit into each of these ratings, so that folks are calibrated out of the gate.


(09:00):

And what was nice about doing that is that in Ashby, when folks go to score, they can actually see examples of how somebody would answer that question, along with a suggested rating that we would qualify as very aligned or misaligned. Even if everyone's conducting the same interview, if their interpretation of what good or great or misaligned looks like is different, then you're not going to get the output that you're looking for. So I actually think that the proof is in the pudding in terms of trialing these things, rolling them out, and ultimately being able to see consistency amongst interviewers and ratings to move this all along.


Shannon Ogborn (09:43):

I feel like there's a certain analysis paralysis with talent teams, especially today. We know that there's a lot of focus on quality of hire, and a lot of teams, even if they're scaling a lot, are still leaner than they would've been: companies that are 300 people now might have been 600 at this stage five years ago. How do you balance the difference between analysis paralysis and just saying, we've thought about this, we're going to trial it, and then we're going to get the feedback? Because I think a lot of people, and I understand it, feel that no matter whether it's an employer or employee market, it's a hard time to hire and get it right. We want to get it right. How do people balance that?


Maggie Landers (10:29):

We did a few things to reduce anxiety around this kind of pilot and interview. First, we ran a pilot instead of just going out of the gate with this interview. We also didn't eliminate the other culture contribution sessions while we were running the pilot. So if we got feedback that, hey, our old session gave us a lot more signal and the new session didn't give us as much signal as we needed, we still had the original signal from the original session.


Shannon Ogborn (10:56):

Oh, interesting. Okay.


Maggie Landers (10:57):

We did not give the pilot values session veto power, meaning if somebody didn't pass that session, they weren't going to get eliminated from the process. The feedback would be taken into consideration with everything else in the interview, and I think that until you have data and until you've been doing this long enough to actually say, Hey, there is a true correlation between how folks score in the values interview and how they then perform, whether that be in later performance reviews or in onboarding surveys, you don't want to give it too much power out of the gate. So I think the idea was let's roll this out, let's learn from it, and then when we get to the place where we know there's a lot of signal here, it will carry more weight and we probably will feel more comfortable saying, Hey, if we have concerns on the values interview, we're going to pass.


(11:47):

I would say right now you kind of have to go in an experimental, lightweight way, or else you get that whiplash. Also, we very purposefully put folks who would've potentially been our biggest critics into the pilot. I think this enabled us to leverage their feedback and bring them into the fold. If we knew there was reluctance to get rid of sessions that had been run in the past, I sat down with some of those folks. Everybody submitted surveys, but there were a few folks where sitting down with them and asking, what question are you most worried about losing, or what can we do to ease your anxiety with actually running this session, was really critical. Because when you do push something like this out into the organization, people look around and they want to know: who from my org was part of this pilot? Who from my org is signing off on it? And if the person who's signing off on it hasn't looked at the details, hasn't run a session, do I really trust it? So I think making sure that you're bringing the right kind of folks into the pilot, not people who will just say, yep, loved it, it was great, but the folks who you think are going to be most critical, and bringing them into the fold to help you make it something they feel confident in, also helps with those rollouts.


Shannon Ogborn (13:08):

It's so funny, because when it comes to selecting somebody for a pilot, I always get such different answers, and I don't think there's a right answer. But knowing your organization deeply, and talent people know their organization, the entire business, better than anybody, is such a critical piece of the success of pilots, because you need to know who the people are that are going to set this up for success.


Maggie Landers (13:34):

Totally. I think you also need leadership behind you if you're doing this kind of pilot, because it does take time. I'm lucky in that not only did the exec team really support this initiative, the board supported it as well. Looking at how aggressive our growth plans were, that support I think is huge. And then also setting the expectation with the participants that this is a pilot, and that we're looking to you to help us make it successful. It was very much, hey, this is the group of folks that we believe are going to help us create a program that is going to be durable, that is going to be a great experience for candidates and allow them to better understand our culture and make a decision as to whether or not it's the right place for them, and that's going to enable our interviewers to get the right read on candidates as quickly and thoroughly and consistently as possible. But we can't do it without you, and we can't do it without your feedback.


(14:31):

That, I think, was really important. A hundred percent of the folks who did the training actually ran a pilot session, and every single one of them submitted their feedback. The feedback was incredibly helpful. I actually read all of it line by line, because it was really interesting to see what different people's takes were. But it was also great to have this big Google Sheet of data that I could throw back into ChatGPT and say: based on this feedback, what are the most common critiques? What should we do differently? What suggestions come out of this? And then I also had it analyze the differences by org and the differences by level. Because you can read it and go into all the thorough detail, but it was actually nice to have ChatGPT also do that kind of final analysis and not be overly swayed by one or two loud voices, but really review all of the data.


Shannon Ogborn (15:27):

Yeah, I think that's a really important note, because there are a lot of differences between orgs, and the culture of an org is as important as the culture of the company; there are kind of multiple circles of alignment. Then you add globalization into it, and that's a whole other circle: how can we do this in a way that's going to resonate across cultures? That's often a misunderstood element of hiring, especially as you're scaling quickly globally. And having a hundred percent participation in people submitting feedback, that feedback takes time. It's a thoughtful exercise. I just think if you're building talent programs, you have to leave your ego at the door, because it's going to be thrashed, it's going to be ripped apart. You're going to get hard feedback and you're going to get great feedback, but for most people, the hard feedback is what fixates in your mind.


Maggie Landers (16:34):

Absolutely. It also helps that one of the values assessed in this program is Job's Not Finished, the idea that you're never really done. There's always more to do, more to iterate on. Having that as a value also helps you roll out a program like this and make sure people know we might not nail this in the beginning. It's going to take time, it's going to be an iterative process, but ultimately we're going to get nowhere if we don't start somewhere. So let's get it out, let's do it quickly, let's start getting data, and let's move forward.


Shannon Ogborn (17:16):

A hundred percent. On the note of globalization and scale and learnings, one of the things we had talked about before was this decision between standardization and customization, and what is gained and what is lost. What came out of that decision?


Maggie Landers (17:36):

Ultimately, we felt that the best thing to align around was those three values,


(17:43):

As opposed to different sub-values or ways of working, whether it be for specific teams or specific regions. I think that having simplicity be a value also really helps; there were a million opportunities to overcomplicate this. Even thinking through how many interviewers we were going to need trained in order to roll this out without it becoming a bottleneck: that could have been a very complicated calculation that shifted every month with new hires starting, based on what percent of the org is trained. We just decided to say all people managers are going to be trained on this. About 25% of our org is in a management role in some capacity, so it was easy math. It was like, hey, you're a people leader, you're going to get trained on this.


(18:39):

We also are adding, of course, a number of ICs across the organization who are tenured and aligned, and we're going to go from there. But that was just an easy way to say, how do we start with a base of at least 25% of our workforce? It would've been a lot more complicated to say, okay, here are some sub-questions for engineering, and here are some additional questions we want for new regions or founding members of new geos. Deferring to keeping it simple, having three values and not eight, deciding that people managers would all be trained up on this: a lot of decisions like that just allowed us to move quickly.


Shannon Ogborn (19:22):

And that's living the values. You have a value of simplicity and you're like, let's keep it simple. We're not going to overcomplicate, because that's when people start to get frustrated and it starts to feel more like Big Brother than actually valuable to the organization and the way that you're hiring.


Maggie Landers (19:42):

We very deliberately did not want to structure this program like a bar raiser program, where there's this small group of folks who have this special designation and their job is to protect the rest of the company. We have to grow quickly. We're scaling, and a big part of the reason why we think people leaders should all be able to do this is that they have the responsibility of growing and building their teams, and they're all very motivated to do so. I think sometimes a bar raiser program becomes a program that might be in conflict with trying to scale, where folks feel like they have to protect the business from growth, as opposed to having the individuals at the forefront of the growth really leaning in to make sure that we're assessing thoroughly and against the right things. Nobody is being tasked with protecting the organization.


Shannon Ogborn (20:40):

That can feel so big. If someone was like, your job is to protect the organization, I'd be like, I don't want that. Don't put that on me. We are in an era where roles that exist today might not exist tomorrow in the same sense, or new roles might just pop up quickly out of nowhere, and if you're aligned holistically at the organization level, that person will be set up for success to work at the organization in the ways that it's changing.


Maggie Landers (21:14):

Totally. We don't want to have a very siloed organization where there's a completely different culture from one team to another. Everybody has to really work together to move as quickly as we're trying to, and to build and iterate. And so some of those principles helped us roll out a program where everyone involved feels like they're contributing to the growth of the organization and accountable for it, and where there's consistency in the things we look for regardless of level, regardless of geo, regardless of team. Even the nuance of having some different questions for people managers and ICs is not because we believe those values show up differently, but because we believe people managers need to be able to lead their teams in a specific way, champion the values, and actually help coach and bring others along. So it's not just about them exhibiting the values.


Shannon Ogborn (22:16):

You're not just living them; you have to be a champion for the values, because you're tasked as a people manager with getting your team aligned. And you can't do that if you don't exude it. But then the second part is that you have to be the champion that's bringing your team along, like you said. Are there any other learnings from the pilot? If someone were starting something like a values interview and trying to run a program like this, any advice on things you've learned, or pitfalls and traps where you're like, no, don't step there?


Maggie Landers (22:59):

Totally. I think the AI was incredible for the enablement, the resources, the framework. Every time there was an update, I just had my little folder in ChatGPT, and I was like: update the execs, draft this as a Slack message, draft this note to the participants, take all this information and build out the new deck. If you change the scoring rubric, you just plug it in and say, update all the assets with the new scoring rubric, and everything is done. So that was huge.


Shannon Ogborn (23:31):

That was so huge, because it lives in so many different places for a lot of people, and that makes it hard to make changes.


Maggie Landers (23:40):

Totally. The decks, the emails, having it all essentially be managed by ChatGPT was huge. That said, I think one of the big takeaways from this program is that there's no amount of program build that can't benefit from a pilot, setting it loose in a real-world scenario, and getting feedback. And again, ChatGPT helped come up with great questions


(24:12):

For the participants, and prompting it to say, let's make sure that the survey asks the questions we're going to need to iterate and adjust this, everything from length to awkwardness to really anything under the sun. But you can't replace setting something loose in the wild, seeing what happens, and then adjusting and refining. I just think that the pilots are so important. Also, one of the only things I didn't really automate was the trainings. I did them all virtually or in person, and I required attendance; if you couldn't make it, then we didn't have you participate in the pilot. I find that those sessions not only enabled us to set a tone in a way that we couldn't have if we'd just sent over a Loom video or a deck, but people also pushed on a lot of the things we were doing and asked a lot of questions, the right questions to ask, and I think it got them in the right headspace to run those sessions in the right way. When you're rolling out a program, a big skill to be developed in the recruiting world as AI continues to evolve is having the judgment around what to use the tool for. What do I do in person? What do I roll out broadly? What becomes a pilot? What's an experiment? Deciding what to do when was a big part of the learning, in terms of figuring out how to make it successful.


Shannon Ogborn (26:08):

Well, you've got to put some constructive fun into it. There's so much here. I love how much you utilized AI to help build this, and I can only imagine how much longer it would've taken if you didn't have it. But at the end of the day, it's your human judgment saying, this is a good question, I know it and I'm aligned with the culture, or this is not a good question, let's scoot that one out, and giving AI the feedback: hey, this isn't a good question for X, Y, Z reason, let's take it out, but keep that in mind. One of my favorite things that we have talked about in the past is: AI in service of what? Why are we using AI, and what is it servicing? I think your thoughtfulness shows it's not random how you're using AI. You're using it for a specific reason, you know what that is, and you're not using it for everything; you're using it for the parts you think you can move faster on. But at the end of the day, it's your human judgment that's pushing this program forward.


Maggie Landers (27:10):

Totally. AI is not going to make interviewers great. The reality is that the best interviewers are critical listeners. They are individuals who are not approaching the session as a check-the-box exercise. They don't have a preconceived notion of, I'm going to ask this; if the person says this, then it's a yes, and if they say this other thing, then it's a no. And that's actually going to be a big part of the training before we roll this out more broadly: how do we not have a check-the-box values interview session, but one where you're actually listening and probing and digging deep? You can use all the tools, and we've now created some prompts from the get-go, but ultimately we need people to show up in a very present, collaborative, thoughtful way, conduct themselves that way using the framework we've built, apply judgment, and figure out how to move the conversation along in a way that's natural.


(28:29):

Figure out how to go with the flow a little bit. Somebody might have a great example of a way they've simplified a project when they're talking through the simplicity value, and it might actually make sense for the interviewer to use that one example to go through all three values. So you still have to be a skilled conversationalist, a thoughtful listener, and somebody who is pushing back and probing and connecting dots. The tools aren't going to teach you how to do that, but they're hopefully going to give you enough structure to then apply those best practices and get the information you need.


Shannon Ogborn (29:11):

We might have to have you back for a "where are they now?" episode. We've never done one before, but this excites me, because I think seeing how this progresses past a pilot over time, scaling globally to different orgs, is when you really see how applicable you've made your program. So I am excited about that. This is all really insightful for folks who are trying to move in that direction. We are going to get to the questions that we ask every guest, but those now live on our YouTube in the extended version. So if you're listening on Apple or Spotify, please head there to hear what hiring excellence means to Maggie, her recruiting hot take, which I'm very excited about, and one thing she'd tell her early-career self. Well, we are coming up on our time. Where should people go to learn more about you, your work, and Harvey?


Maggie Landers (30:07):

Well, we're hiring a lot, so you can check out harvey.ai. We also post a lot on our Harvey LinkedIn page about roles and our culture and what we're up to, so you can check it out there. And where should people find you?


Shannon Ogborn (30:22):

On LinkedIn! Amazing, LinkedIn, where all the talent people live. Only one guest has ever said Twitter, X, whatever. The recruiters are on LinkedIn. Yeah, exactly, exactly. It'd be a problem if someone didn't say LinkedIn. I think this is going to be so valuable, because especially as companies move forward and soft skills make a comeback, the values piece of interviewing is going to be super important. So I'm excited for people to hear how you all have programmatically moved this forward, and excited to hear more results in the next six to nine months. But thank you so much for spending time with us.


Maggie Landers (31:00):

Thank you for having me. I loved it.


Shannon Ogborn (31:03):

This episode was brought to you by Ashby. What an ATS should be: a scalable all-in-one tool that combines powerful analytics with your ATS, scheduling, sourcing, and CRM. To never miss an episode, subscribe to our newsletter at www.ashbyhq.com/podcast. Thank you for listening, and we'll see you next time.