Dear EverMore

Will AI really take my job? (Part 1)

Season 1 Episode 11

Welcome back to Dear EverMore! In this episode, our founders Scott, Courtney, and Kelsey talk about AI, especially the fear of how it will impact work and whether AI will really take our jobs. We hear this fear a lot in our social networks, and it's a real one. However, we think AI is a great tool to make our work better, so we have more time to be creative and come up with ideas.

This is a multi-part conversation, so make sure to follow or subscribe so you don't miss part 2, about how AI will affect the future of work and our advice for how to harness it to make your career better.

What we share + talk about:

  • What AI is genuinely good at today — pattern recognition, synthesizing or compressing information, and being a thought partner. Think of AI like a bicycle: it doesn’t replace your legs, but it can help you get there faster.
  • Where people are overestimating AI — it can’t produce original insight, handle long-term reasoning, provide accountability, make ethical judgments, or offer contextual empathy. AI recombines what already exists; it can’t create something truly novel.
  • The creativity concern — if electricity went out tomorrow and we started over, we’d still have creative thinkers, but AI wouldn’t be able to create anything from the ether. AI should remove tedious tasks so people can be more creative, and we believe the extra time should go back to you for thinking time, not be repurposed for more work in the antiquated 9-to-5.
  • The layoff mistake — we’ve seen companies lay off workers expecting AI to replace them. However, Salesforce recently admitted they regretted their layoffs due to negative customer experiences. There’s a mismatch: people want AI to better their work and experiences, not replace the human element.
  • AI amplifying bias — we’re concerned about how AI will amplify the bias in our data: in the responses it gives that could shape worldviews, in decision-making tools, and in how it engages with people. When bad decision-making from leaders and work systems is the foundation, building AI on top of it only makes it worse.

If you’re looking for a way to take control + own your career in the age of AI, we invite you to join the EverMore free beta!

Send us Fan Mail

Have a burning question you'd like answered on the pod? We'd love to give you advice! Submit your anonymous question

Kelsey

Hi everybody, welcome back to Dear Evermore. We have a really special episode for you today, all around AI. Specifically, we hear a lot of questions around: will AI actually take my job? We don't believe so, but it will help you in your job, whether that's career ownership, like the tool we're building with Evermore, or expediting and automating a lot of the tactical things you do in your roles. One thing I do want to mention is that we used Zoom for this, so you might have a little bit of a different audio experience than normal. We definitely missed being in the same room together; we very much prioritize that. But we had a lot of fun experimenting with using Zoom for the recording. One more thing before we jump right into the conversation: our love letter to your future self side quest in Evermore does close March 31st, so you have a few more weeks to try it out yourself. I showed mine to my friends and family, and everyone who heard it said, Wow, that's so good. I could never write that by myself. And I answered, Well, I didn't write it. I actually just answered questions from Evermore, and it wrote this beautiful love letter all by itself with my answers. So if you would like your own love letter to your future self, we have it available in our beta. You'll need to join with the link in the show notes, or you can go to evermore.so to request an invite. We'd love to hear your feedback on the love letter, because we've gotten such a great response to it; if you've already done it, I'm so glad you're enjoying it. Anyways, let's jump right into our conversation about AI.

Courtney

Welcome back to Dear Evermore. I'm your host, Courtney. Today I'm joined by our COO, Kelsey. Hi. Nice to be back. And our CTO, Scott.

Scott

Hi everybody, great to be back. Yeah, really exciting topic, so I'm kind of revved up to talk about this. Great to be on.

Courtney

We're going to be talking about all things AI, including crafting a career in the age of AI and what our stance at Evermore is on it. I know for me, AI feels like it's either the apocalypse or the answer to all of our problems. I'd love for us to build something that's a little more grounded and more optimistic. And I think that through line of hope is something we talk about a lot as a founding team, because we do think there's a way to design AI that's really additive to people and communities. But I think that means getting real about what it's good at and what it's not good at. So that's where we're going to start. And just as a very quick disclaimer, Scott and I are dying from Austin allergens; if you know, you know. So we will be doing this with as much gusto as we can, but if you hear the stray cough, we are sorry. Scott, I'm coming to you first, so let's dig in. From a technical perspective, what do you feel like AI is really good at?

Scott

Yeah, I think AI is really good at combing through vast amounts of data, summarizing, highlighting, kind of doing the grunt work that used to take so long and now doesn't take long at all. That doesn't mean it's always right, or that you don't have to do your own thinking or checks on it, but it certainly is a key productivity unlock, at least for me and for others I've talked to in different disciplines. It is a productivity booster par excellence.

Courtney

Kelsey, what about yourself?

Kelsey

Oh, I was nodding along while Scott was talking, because there are so many times where I have to reference-check a lot of the things that AI says. But I do think from a technical standpoint it takes away a lot of the work I was having to do. Even in Excel, there were times where, if I was having trouble with a very complex formula, it was really nice that I could go to something and ask: I'm trying to do this with these data points, what would that formula be? Now it doesn't take as long. So from a technical aspect it really elevates my work, but I still have to ask, is this actually accurate? Where is it coming from? And really question it.

Courtney

Yeah, I've heard you describe it as your thought partner before.

Kelsey

Yep. I use it creatively as a thought partner in my music stuff too, and I also use it as a thought partner with Evermore and with financial formulas, like I'm talking about, kind of synthesizing things for me, but also helping me take an idea and really understand it a lot better.

Courtney

I was reading an article recently and I love the way it described how to think about AI: think about it like a bicycle. It might help you get where you're going faster, but it doesn't replace your legs. Does that resonate with y'all? Absolutely.

Scott

I think so, but I do think with some of the recent models, I'm thinking like ChatGPT or OpenAI's 5.3 Codex, we're getting really good results, right? And when people are letting it off the leash to perform some of those tasks by itself, like feeding it instructions and telling it how to test itself, and even automating it to deploy to production or maybe a staging environment, we're getting pretty good results out of that. So you do wonder: when does that flip happen, where certain things are fully automatable in certain disciplines?

Courtney

Yeah, Scott, what do you think these advancements mean for the future of engineering and building software?

Scott

Yeah, I like to remain hopeful on that front. Certainly you could say that's taking away a job somebody was doing, or you can look at it a different way and say these people are much more effective because of these advancements, right? So I prefer to be on the hopeful front. Just a quick personal story: last night my daughter was talking to me. She's graduating this year, going to college in August, and she wants to be a neurologist. And she's like, Dad, is AI gonna eat my job? Will there be no need for what I'm planning to study for the next however many years, spending however much of my retirement fund? Is it going to be worthwhile, or is my job just going to be obsolete? And I'd like to remain hopeful on that front. Yes, a neurologist provides recommendations and treatment plans, does analysis. Can that fully be replaced by an agentic interface? I tend to think no. There's a human component: the questioning, the really trying to understand and get to that root cause, not just deliver a diagnosis and send you on your way. So there's a human aspect there that I just don't see being replaced, but maybe I'm being a little too optimistic and a little too hopeful there.

Kelsey

I'm right there with you, though. Because even when I'm a customer of a product and I have to call or virtually chat with an AI assistant, you can always tell when it's an AI, or when it's written with AI, or when it doesn't actually sound like them. And there's a lot of social media content of people reenacting what it would be like to talk to an AI in the wild, in a conversation, and it's all that first paragraph of what you usually get in ChatGPT or Claude: that's a really great point, let me go into this, but let me also pull it back a little bit. It's not a real conversation. I think as humans we still want that real human touch, which is what we all want anyway.

Courtney

Yeah, Kelsey, I had an interaction where I hadn't received a package, I didn't even have a tracking number, and it had been weeks. I emailed and was like, hey, I just wanted to make sure everything is all right. And I got a note from their friendly AI bot, CC, that was like, crossing my fingers that your package reaches you soon. It was really cutesy, and at no point did it answer whether or not the package had been sent. You're like, oh, I hope it does too. I was like, hey, can I get a bit more detail? And then the next email I got was from a human who was like, hey, sorry, we did miss your package, we did forget to send it. And I was like, what if we were all just crossing our fingers? It was a very dissociating experience of just not having that service, and I definitely have never purchased from that vendor again, because I didn't like the interaction. And Scott, my doctor uses AI, and they send you your blood work in a portal with the AI analysis, and it's actually a little rattling, because it's a list of what it could be: hey, everything's probably fine, but on the off chance it's not fine, here's things to know. And I was like, okay, is anyone gonna call me? And then my doctor called me and she was like, hey, sorry, I try to beat the results to you, because I know they can be off-putting, or you don't know how to interpret them. And I think there was something there: AI doesn't have that warmth, or that empathy, or that ability to reason about what they're looking at and explain it to another person in a way that feels comforting and clear versus terrifying. And that brings me to the question we're all dancing around, which is: what is AI actually not good at?

Scott

Yeah, I guess I can chime in with my view there, but I think you kind of hit upon it, which is creativity, novel thinking, right? If you have a novel problem that hasn't really been solved, or maybe not completely solved in the way you're looking to solve it, or maybe there's a design element that is fresh and new, AI is just gonna echo-chamber back what everybody else thinks, right? So it's kind of like, is the crowd always right? Well, no. The crowd is often right, but it's not always right. So I think that's where you can see the seams in what AI is not good at: doing something creative or novel, really doing that first work that might someday later be incorporated into that body of knowledge.

Kelsey

Of course, AI can create art, and it can create videos, and it can create music. At least in my field, I've been hearing a lot more of this AI music, and I just can't fathom being into it, but then I listen to it and it does feel very real. So it can do it, but it has to have the human prompting. I've even had images generated with AI, and then it does the whole fourth hand, or a sixth toe, or it doesn't understand that an eye should be here and not on your chest. And then you have to be like, no, take the fourth arm out of it, and you have to keep prompting it. So as an artist for most of my life, I actually think it would be great if there are ways to, yes, use AI as my thought partner: help me organize my thoughts, help me organize my ideas, help me take an idea I have and look at it from a lot of different angles. But it's still my idea, and I'm still prompting it. So I think it's just really bad at coming up with an idea. And I did see this thing, definitely music related, where somebody said: if electricity just went out, I'm still a musician. AI is not a musician. Because we have hands, because we have ideas, because we have thoughts, because we have memories. We're still artists, even without electricity running.

Scott

The point about the image with a few problems kind of takes me back. I don't know if y'all had these as a kid, but there were these Highlights magazines where, as a kid, you find what doesn't fit in the image, and you've gotta really look at it and study it. But back to the electricity thing, I think that's a great point. We have our own viewpoint on how best to use AI and how to approach it from an ethical standpoint. The electricity thing really bothers me, actually, in that I think there were some statements from the administration that we need to 10x, or increase our power output by this much, to win the AI race. And then you look at China versus the US: four times the electrical power production capacity, building, by some estimates, five to ten coal power plants a month. And then you look at, well, the US is clearly leading in AI tech right now, but how hard is it to catch up? And at some point, isn't it just who can consume the most power? So yeah, I think the point about electricity is really important. At what point are you spending so much that you have AI agentically invoking other AI, the cyclical thing? There's a lot of fervor on Twitter, or X, right now, where they've set up these chat groups where AI is just talking to itself, AI talking to another AI, right? And what are we doing? What is the purpose of this? So that is a sticky point for me.

Kelsey

It kind of comes back to something, Court, you and I have talked about, and you've even written about: the "have your AI talk to my AI" thing. And I think that's what I love about what we're trying to build and what we even put into the product: the critical thinking, the hey, don't forget to check this. We even have our little disclaimers, and I love how they're written, because I even think about that: I'm getting a response and it's like, hey, just remember to cross-check this, because you never know, and it's pulling from all of these different data sources. And one other thing: I don't think the AI should be having the conversations for us. We're all about come to Evermore to understand and process before you have a hard conversation with someone, and when you do get hard feedback from someone, let's process it first before you respond. Let's really think about what we're trying to get at, because the real work in all of our work systems is the conversation.

Courtney

Yeah, I think there's something around accountability too. If you have AI bots talking to each other and they're not sharing any sense of accountability for the core of it, like, hey, what's the point? But if they're talking for other people, where is the accountability? Where is the judgment? Where is the empathy? Where is the long-term reasoning between these two things? That just feels very dystopian. But I absolutely think, I mean, we know people are building things like that. We know people are building that in the dating world, to have your AI concierge go date another AI concierge. And I don't know that I fully get it, as someone who's not been on the dating scene in a long time. I don't know if it's really a train wreck; maybe this is a way for people to save their time and energy. But once again, I don't know that it's a nice, perfect parallel between a real person and a real person.

Kelsey

Yeah, but I do think everything comes back to the fact that we still, as humans, want human connection. We crave closeness, we crave someone seeing us, and us being so different and still understanding each other. So that's the one thing I really don't think AI could or should replace. Now I'm even thinking, I would never want to date with an AI talking to another AI. I'm like, no, that's wild. I can't even imagine it.

Courtney

Yeah, I think it really sends us into a descent into loneliness and a lack of connection, and we are, to your point, very connective beings. And I know another thing that makes me nervous about using something like AI for any and all things in your life is that it really has a way of amplifying whatever exists in the world, and that includes bias. When I hear people talk about things like, oh, we could use this to completely get rid of bias, but the data that's in there is biased, I have a hard time understanding how that's going to work. And when I read studies that are published, it's not assuaging my concern: things like men being more likely to be associated with the word programmer, both in the world and in AI, and women being more likely to be associated with the word homemaker, both in the world and in AI. And Scott, I'm curious if you have any thoughts on that. When people talk about using it for decision-making, or to reduce bias in decision-making, do you think AI can do that?

Scott

That's a really good one, because AI is just a mirror, right? Of what you feed it. But that mirror can vary: I have this training data set here, and it's vastly different from that training data set there. I think we talked last time about a data set that was trained on pre-1930 literature, and then one that's even earlier than that, right? And it comes up with wildly different outputs than something like a recent frontier model trained on recent data. It could be useful to detect bias, and in other ways it can completely miss the mark, right?

Kelsey

Well, even on the bias aspect: didn't we see the data on where all these AI models are pulling from? And Reddit is their top source. And, to be transparent with people who are listening, I tried to play around with Reddit as Evermore, the company, like, let me try it in this case, let me try to start conversations, and I'm learning how to use Reddit, apparently. And then I got banned because I come across as a bot, even though I'm writing things by myself as a human. I think Reddit's trying to control the AI slop that goes into it, but then Reddit is the top source for AI models. And then I start thinking: well, if people are writing themselves into Reddit, and that feeds into AI, is it possible that the AI is just going to learn that way of writing? And then it's just going to be this very circular thing where our writing keeps becoming AI, even if someone is writing by themselves. So I just find it very strange that we're all leaning on AI as we're making decisions, like, we're going to replace a whole team, we're going to replace people. I saw that Salesforce laid off a lot of people, and now they're regretting it because they're realizing AI actually cannot do some of the work those people were doing. And I find this to be a very biased approach. We know, as culture people and leaders and managers, that biases go into all of these decisions of who to hire, who to fire, who to promote. And now we're leaning on AI, which is even more biased, because it's feeding on all this information from all over the world. I just see that as being very messy if we're only giving our decisions to a technological tool.

Scott

That's a really interesting point. So OpenAI's 5.3 Codex was at least partially trained and tested by its own AI, by the previous-generation AI model. So then you start thinking: roll that forward a little bit, right? When is AI training AI, and when is it truly just an echo chamber? And then you think about it from another viewpoint: well, who's deciding what to feed the training data sets? Does that bring into play people being able to move opinions and outcomes on a huge scale, right? If one model is something so many people are relying on, does that spread? Does that move markets, move decisions? One example might be the vulnerability in Next.js, which is a really popular front-end framework. That spread widely because so many people use Next.js, but the underlying issue was actually in a piece of software that Next.js is built on. So they really didn't even cause the issue; the root cause was several layers downstream. So again, if you're that downstream layer and you're pushing code dependencies to all these other layers, does that spread throughout the population? And you end up with some really bad results, right? Because with this distant source, there's no traceability, or very little traceability, to head that off and make sure it's not happening.

Kelsey

It's kind of like the foundation of a house, a really bad foundation. And we all know that most of our work systems are on bad foundations. I'm trying to sell a house right now, so I'm very much in house-foundation mode. But if we have bad ground, a bad foundation, and we're just stacking things on top of it, thinking, oh, this will make the house look better, this will make the house feel more structurally stable, it's not. You have to go from the bottom, and the decision-making is the problem. We've always had bad decision-making because of bias, and now we're just outsourcing that to AI. It's a recipe for disaster.

Scott

Yeah, using your foundation analogy, right? It'd be like inspecting your foundation in general and saying it's really solid, but you miss that one crack that's gonna cause the whole thing to go. You're just missing that one piece because you really can't see it. It's so layered that you really can't tell. You can't head off the problem.

Courtney

I do think there seems to be a mismatch between what the masses want from AI, which is kind of what you were speaking to at the top, Scott, around it doing the grunt work (I think that's what most people would prefer it to do) and what's being expected of it by leadership teams or investors. And thinking of what a foundation is built on: somewhat, it's built on those bad decisions, which are also caused by people in power either having the wrong intentions and motivations, or not having the right information to make those decisions. We didn't have a good foundation before AI, and I don't know that AI helped solve that problem either. So it almost feels like we have multiple layers of things to reinvent in order to get to a place of being on solid ground.

Scott

Yeah, back to the mirror, right? It's a mirror of humanity, and yeah, those issues don't go away. In fact, we might be pouring accelerant on them.

Kelsey

Yeah, I did see this woman, I think she's a CEO, and she posted on LinkedIn a couple weeks ago. She said that if AI can help reduce two or three hours of someone's work on a day-to-day basis, it's wrong that companies or people try to see how they can use those extra two or three hours for more. She argued that if people can find ways to expedite their time and their work output, they should be using that time for their own stuff: reading, writing, talking to people, going out and connecting. And I think that's actually still work. It doesn't look like the traditional output of work at work, but going out and enjoying the world a little bit always comes back into our work and how we think about things. Sometimes having that extra hour or two to go for a walk by the water is the time where I think about things: new ideas, or ways I can do something better later or tomorrow. So I love the idea that AI can take away some of that time, but we shouldn't have to repurpose it anywhere else. It should go back to that person.

Courtney

Absolutely. I think that's something we put in one of our articles, where we wrote about how we would redesign the future of work: using that technology to give space to tinker, to play, to walk, to be creative, to build. And that matters when you have a day that can't even fit into a normal workday, when it's seeping into the evenings and the weekends. AI's not improving that. I don't know that AI is a positive addition if all it's done is convince your leaders that now you have more time, so they're gonna give you more work. Because, to what y'all said earlier, AI still requires you to prompt it, to digest whatever you're getting back, verify sources, and put it in a way that can be readily understood by other people. I know I personally did not appreciate when people would send me ten-page-long documents that had been generated by AI for me to review. I was like, I don't even feel that you've reviewed this, and now you're asking for my feedback. And my feedback would be: this should be two pages, but now I've got to digest everything you've put here and provide something back. And circling back to your Reddit experience, Kelsey, it's also a bit of a shame to me that while I do think all these technologies let people bolster their skill sets and add breadth to what they can do, it also feels like the death of expertise in a lot of ways. The fact that someone like you, who actually has all the expertise in the world to advise people seeking jobs in startups and tech, having been a startup and tech recruiter and people leader, is banned from some of these groups is really sad. You are exactly who they should be hearing from. Granted, they didn't ban me, they banned Evermore. That's the voice behind the account.

Kelsey

Yeah.

Courtney

It's just a bit of a bummer. And to this point: when there's no accountability for your opinion, what is your opinion worth? I think the fact that Reddit is the number one source being fed into ChatGPT, when it's made up of opinions there's no accountability for, where no one's checking credentials or expertise, and there's nothing to lose should you say something unhelpful or untruthful, and it's being fed into something, is quite disturbing.

Kelsey

Thank you so much for joining us for this multi-part conversation around AI. Make sure to follow the podcast so you don't miss part two. As an AI tool ourselves, we really care about your data, how it's used, and that it helps you grow, understand yourself, and grow your career. That's why we take a very self-reflective approach. We have 24/7 career support coaching in Evermore, as well as regular retros on your work, so you can keep those historically over time, whether on a weekly, bi-weekly, or monthly basis; yours to choose. We also have growth paths so you know exactly where you're going in your career, whether you're a manager, an employee, a founder, or even a freelancer or a creative individual. Your career is whatever you make it, not whatever someone else says it is. So if you want to join and take ownership of your career, and make sure that you're leading it and not someone else, we'd love to have you in our beta. We'll be closing the beta soon, so this is the perfect time to snag that spot early and enjoy some discounted pricing, as long as you give us some feedback, because we love good feedback. All right, we'll see you next time, and enjoy the rest of your week.