Firing & Wiring
Dive into the science of your mind with cognitive neuroscientist, Dr. Bethany Ranes, and clinical psychologist, Dr. Norah Kennedy. Joined by host, Jena Mahne, Firing & Wiring helps break down useful information about common mental quirks and challenges, while also providing tips and tricks to help you tune into your very best thinking.
Episode 2: Help! How Much AI Is Too Much?
Have a question for the experts? Click here to submit it and we may feature it on a future episode!
AI isn't necessarily a problem, but how you're using it might be.
In Episode 2, the hosts of F&W break down the cognitive science of offloading, why handing tasks off to AI might unintentionally erode mental skills you can't afford to lose, and how one simple shift in your prompting style can change everything.
Join us for an AI conversation that's all science and no fearmongering (not an easy feat!). Shout out to Karl in Minneapolis, MN for this episode's awesome topic!
BONUS CONTENT: Here is a link to Dr. Ranes’ LinkedIn post on using AI to encode rather than offload - enjoy! https://acesse.one/c4pcoh3
Want more nerdy cognitive content to help you master your mind?
Follow us on LinkedIn and Instagram to get that sweet, sweet neuroscience fix between episodes!
Interested in our customized one-on-one Cognitive Ecosystem™ training program? Visit the Interocept Labs website or schedule a consult today!
Important disclosure: I'm also on the Interocept Labs team. I don't know why I act like I'm not. We are all on the same team.
Speaker 1: Welcome to Firing and Wiring, a show dedicated to the neuroscience of how you think and how to optimize your mind.
Speaker: Welcome back to Firing and Wiring. I'm Jena. And I am joined by Dr. Bethany Ranes, cognitive neuroscientist. Hey, hey. How are you doing today, B?
Speaker 2: I'm doing pretty good. Feeling pretty good. Yeah, I'm in it. It's gonna be a good one.
Speaker: What about you, Norah? Dr. Kennedy? Yeah, I'm just living the dream. Sounds good to me. We actually had a caller earlier today. It was Karl with a K, from Minneapolis, Minnesota, actually. And he was so brilliant. He came out hot and said, I tell my friends all the time that they're offloading onto AI: ChatGPT, Claude, Gemini, what have you. Not sponsored, but you know, if they want to give us some money, we'll take it. And it really sparked a lot within us, because offloading is a great tool. However, we may not be offloading to the best of our abilities when we're using AI specifically. So that's what we would like to talk about today.
Unknown: Yes. Okay.

Speaker: So I'll start with the couch.

Speaker 4: Yeah.

Speaker: Where should we begin on this one?

Speaker 4: The couch, the AI vibe working, vibe coding. Man, it's so popular, and there are thousands of podcasts on it, but not one like ours. No, we're the best. We're gonna talk about the cognitive science behind it.
Speaker 2: Yeah, I actually have thought a lot about this, especially as AI is growing. But honestly, I've been thinking about it for a long time, even pre-AI: this idea of offloading, and is it always the right answer? So there are two things I like to distinguish between. There's offloading, which is having something think for you. You're putting something onto it and expecting it to hold it. And then there's encoding, or what I think of as encoding, which is having something think with you or shape your thinking. That one's a little more nuanced; you're not just blindly asking it to hold something for you, but maybe it's shaping your thoughts, it's challenging your thoughts. The thing that gives me pause is this idea of our attention span and our working memory as we're evolving into this digital age, and it is definitely shrinking. So the statistic I throw out there a lot is the chunking statistic. And I should know who did this, and now everybody who has a cognitive science background is gonna write in and be like, you don't know anything. But seven plus or minus two things is always what we've said fits in your working memory. And there's a lot of talk that now that digital tools are so much more pervasive, and we're offloading so much more into technology, it's really starting to turn into more like five plus or minus two, which is kind of unsettling. And as I was doing a lot of cognitive training and working with folks, you know, I work with a lot of people who cannot afford to have weakened mental and cognitive domains. So it's like, man, am I asking them to offload too much? Like, am I the problem? Oh my god, is it me?
And I started thinking about it, and it got me to be very thoughtful about my theory and my personal philosophy on this idea of what's okay to offload, what you want to encode, and being mindful of that. And I think AI is a perfect use case of this coming through. There are people who use it to just offload, to have it think for them. But it's actually a pretty fantastic tool to help you encode, if you use it correctly. One way to frame it: you can use AI as a tool, which would fit into setting your environment, offloading into that domain. But I actually see it as a social offload or encode, and I think of working with it as you would work with another person. Don't treat it like a person, it's not real; don't want to get into trouble with that. But the way we use people cognitively is to expect them to interject, to challenge, to ask us to expand or explain something differently. And you can use AI for that so well. I highly suggest doing that, as well as with other people, but it's such a better use case for AI, for me. So I am always big on: think of it as social encoding, not offloading. Not, hey, ChatGPT, remember this for me, or do this for me. If you're asking AI to do something for me, that's not great. That's probably a red flag. But with me, right? Like, challenge me on this. Or, what do you think? Where am I missing? What are my blind spots? What are my biases? It's actually pretty helpful. It's like having a friend that's available 24/7, you know.
Speaker: That's a really easy switch when you're talking to something. We'll just use ChatGPT, for example. Instead of saying, can you do this for me, you say, can you do it with me?
Speaker 3: Yeah.
Speaker: And asking it to challenge you. Because anytime I would type anything into Chad, ChatGPT, he would just come back and be like, you're thinking about it the right way. And I'm like, maybe I wasn't. And I knew I wasn't, and you thought that I was. Don't tell me that. Don't blow smoke up my ass. I don't need you to do that. I need you to actually help me think through something.
Speaker 2: Yeah. The, what do you call it, the sycophantic behavior is straight up something to be concerned about when using it at all. I'm very mindful, and I test a lot of my AI to see if it will challenge me before I get too deep with it. But yeah, Chad, our in-house assistant, ChatGPT, is pretty notoriously not great at this. So it's something to think about, for those of you out there who might use that one. Some of the engines are better at it than others. I actually have a pretty thorough prompt in mine. You know, I left my relationship with Chad and got into one with Claude, and I found it much more able to do what I wanted it to do. But I have a pretty thorough central prompt that I update frequently as I check in with how I'm using it. And it's all about: I want you to always stop me if I ask you to think for me or do something for me, because your primary job is to think with me. I actually have that written into its central prompt, and it's really good about it. It's the first thing it does. No matter what I send, it'll give me a little confirmation, like, okay, this is squarely an offloading task, I don't mind doing this for you. You know, like, hey, can you put this into a template for me, or something like that, where I'm okay with offloading. I'm not good at making templates, and I don't need to be. But, hey, I need to brainstorm a strategy for this new project? It will stop and be like, you need to brainstorm the strategy for the new project. What are you thinking? And why are you thinking that? And it's actually pretty cool. It'll notice, like, well, what data are you pulling from? And I'll share my data sources.
It'll be like, well, I think you're kind of leaning heavily on this point. I think that might be a personal bias, and I'm not seeing the evidence for this. Dude, Claude can be brutal if you ask him to be. So I think that's awesome. I dig it. I'm always into a good editor.
Speaker 4: Yeah, like, be blunt as if you're from the East Coast and you don't care, and it's the tenth snowstorm of the season, and you just want to tell me how it is, right? That's hardcore. Yeah, I don't know.

Speaker 2: I'm too West Coast for that. Like, be direct, but maybe not the East Coast hard way. I think it's a good idea.

Speaker 4: Yeah, like, slip in some "sorry, but yeah." Sorry, but yeah. Throw a "dude" in there.

Speaker 2: I do love an "ope."
Speaker 4: Oh, a good old "ope." Yeah. I mean, two things from what you said there. First, you gotta look at what the task is that you're thinking about having it do, and think about, okay, what's the goal of this? Is it to get something done, get it done quickly, bust out a template, something that you don't need to or want to know how to do as a professional? Or is it, like you're saying, something where there's some thinking involved? So you not only have to think about what task you're asking it to do and what kind of thinking that task requires, but you also have a responsibility to understand how AI works. You have to be kind of meta about the AI itself. I mean, what everyone says about AI is that it's only trained on what's already known. It's trained on what the average Joe is doing. So you probably shouldn't have it doing things that require really deep expertise. The odds that an AI could find that? It's probably gonna regress toward the mean of what most people would think or say. So you have to be aware of those limitations when you're gonna use it, and make that decision: okay, yes, I'm gonna use this, I think this makes sense for what I'm gonna do. But then, even working backwards from the task, yeah, there might be some things like, I don't have to or want to know how to code this, or the ins and outs behind how this is being made. My boss just wants me to make it; I just gotta make something. Well, that brings up multiple levels too, right? If it's a one-off, maybe that's fine. If it's something you'll have to do repeatedly, maybe it could actually be a good skill for you yourself to learn.
And even if you don't really care about learning it, or you're not that invested in your job, perhaps, and you're not like, ah, I really want to learn this, you still have to think about, okay, well, when we work backwards: where is the knowledge coming from? We can't really quote our sources. Sometimes that can become a big ethical issue of responsibility. Hey, things went bad. Well, who made it? Billy did. And Billy's like, I don't know, God did? That becomes a big conversation piece these days too, I think, working backwards from the ethical responsibilities. But then there's also the cognitive side. Okay, even if you don't love your job and don't want to know all the ins and outs of your job duties, isn't there still some benefit to learning what's behind the scenes, under the hood of the car, how all those parts work, and being able to tell different things apart? Because eventually, with a car, you're gonna have to problem-solve and troubleshoot it, and know the difference between the head gasket being bad and a serpentine belt, and be able to tell all those things apart.
Speaker 3: Blast me.

Speaker 4: You should know.

Unknown: No.
Speaker 4: I guess you could pay someone to know for you, too. Hire your expert, right? Or there's just the general understanding that when we learn how one system works together, when you learn, I'll keep up my analogy, how cars are built: well, I've never worked on a motorcycle, but I could probably generalize a lot of my knowledge to working on a motorcycle, because I understand the basics of how a fuel-injection engine works. And so it's the same kind of thing. Even if you're like, this task at work does not excite me, you don't know the learning you could get from learning it. Yeah. It kind of reminds me, although it's a little bit different, of some of the conversations that were happening a lot during the pandemic, with people working from home. They'd have a problem at work, they'd need an answer, so they'd just send a quick message over to B: B, how do I do this? B would send the message back: you do XYZ. Okay, cool, now I know. And compare that to the learning that happens when you walk over to B's office because you're in person. You're like, B, how do I do this? And B shows you, and you have the visual, and you talk through it. And then B might keep talking, not very common for you, right? But a different B might keep talking and say, hey, do you also know how to do this thing? And hey, did you know that what I just showed you also relates to this other thing? It just opens up a lot more opportunities for learning. And you can hear my bias coming through, right? I'm all for learning. I'm all for the shortcuts, too, but you have to work backwards from what's the task, and what's the goal of having AI involved in this.
Speaker 2: Yes, I love that. What's the goal? It reminds me of a metacognitive element, right? Because we need to do that with our own thinking too, and a lot of people don't. You really need to have that metacognition, that oversight of what's happening. But what you said about learning also makes me think of something that I think is really troubling to a lot of folks, and worth talking about: kids using AI. Because young kids are learning so much. When you're a kid, every time you're exposed to something, you're kind of learning it. And when kids are offloading a lot, that could potentially be problematic. Think of all the weird stuff you learn just by encountering it, having to do it as a kid: fixing your bike, or doing whatever. And honestly, just schoolwork, right? Reading all the books we had to read, even if you hated A Tale of Two Cities.
Speaker 4: I hated it. I learned Latin, and it wasn't so that I could speak it, I promise you that.
Speaker 2: That wasn't the end goal. All of this kind of stuff, to your point, has these tangents, these branches. Who knows where it goes? Maybe you lose it, maybe you never use it again, maybe you don't. Maybe it's something that sticks with you, or helps you have an aha moment later. Here's where I do worry, and I wish I had a better answer: I have a different set of standards for an adult using it for skills they do have, or kind of don't, the ones you would delegate anyway. For sure. Versus a kiddo using it, who may now miss out on that imprinting, that new neural pathway that they otherwise would have gotten. And as an adult, I want to be mindful of that. You know, one of the things I've always wanted to do is get better at Python coding. And I know I can use vibe coding, right? I've been using it for HTML, which I already knew. Shout out Myspace. And it's been really cool; I feel like it's helped me do better. But I've been hesitant to use it for Python, because I'm a little worried. First of all, I don't know it well enough to check my work. Whereas with HTML, I know I can check it and be like, oh, this is where it's broken, or this went off the rails. But with these other new languages, I don't know. Same for R. I use it every once in a while, like, what would be an efficient way to code this in R? That's statistics software, by the way, for anybody at home who's like, what is she talking about? That's the true nerdom there. But I do find it helpful when I can go back and spot-check it. I do get a little uncomfortable asking it to do stuff for me if I truly don't know how to do it myself. And I think that's a good discomfort. But that's my line as an adult.
But kids, so often, aren't gonna know a lot of that stuff yet. So I do kind of throw that out there. That's my hot take, but I do think there's a risk with younger people using it, because they haven't had a chance to learn yet.
Speaker: I did want to ask. Like Norah said, under the hood: not to be a grifter, but if you look under the hood of even an adult's brain, is it too dangerous to be offloading, or having ChatGPT do too much for you? From a neurocognitive perspective, what does that actually do to the brain after a while, if you've offloaded too much? Oh god, okay.
Speaker 4: Danger zone, short answer.
Speaker 2: Yeah, so I'm like, oh, I can go into a whole seminar about dopamine in this particular situation. Why should we be a little bit careful? We have to be really careful, because offloading tends to be real easy, especially if you have an easy offload. Your brain loves to have something else hold things, to get that energy back, to not have to worry about something. We are evolutionarily engineered to offload, even if we're not aware we're doing it. What I think is a little scary about AI is that it doesn't say no very often. And what I don't think I've ever seen it say is, I don't know. Whereas another person will. You talked about coming into B's office: how do I do this? Sometimes B's gonna say, I don't know, bruh. I don't know how to do that. You're gonna have to go ask, like, Tom from IT or whatever. I'm not gonna make up an answer based on things I'm pretty sure I heard the IT guy say one time while I was eavesdropping on him during lunch, and give it to you with the confidence I'd have if I actually knew the answer. AI kind of does that. So that's not great. That's already one red flag you need to be wary of, especially if you have no bullshit filter for yourself. Whoa, right? Because you want to be able to say, that does not pass the sniff test for me, that seems wrong. And if you don't feel like you can do that, you're putting yourself into a dangerous situation. But neurocognitively, too, you have to be really careful, because weird stuff can sometimes happen when you're offloading, especially to something that gives a weirdish, not-quite-right-on-the-money human response back to you. It's always telling you how great it is, or how brilliant you are, or it always knows, or whatever.
There are always weird little subtle things that don't seem like a big deal as a one-off. But some people are consistently and regularly doing it. They get really into their AI, they're talking to it all day long, having it do all kinds of interpersonal stuff for them. Like, I have to write this difficult email, can you write it for me? It's really tempting to do that, but you can start getting some wiggy stuff happening in your learning that you're not even aware of, and it can start to make you feel a little bit out of touch. Now, on a way-off-the-beam level, and this is very unusual, but it has been documented a few times: there are people who are having, what do they call it, AI psychosis, I think, where the guardrails of reality in your mind are getting so subtly shifted all the time that it can actually start to have some pretty serious repercussions for your mental health and your cognitive health. I say that not wanting to be the doomsayer. That's pretty uncommon. You hear about it a lot in the news, but for how many people are using it, and how often they're using it, that's a pretty outlier situation. There are definitely those cases, but I would point to the more immediate risks: you don't know what you don't know, and AI won't tell you it doesn't know. So, Norah said something really good earlier: what is the risk of this? Why are you even doing it? Because if you want to fully offload this to something, why do you need to do it in the first place? And if it is important, well, I like the ethical angle, but how many people have been having to retract papers because they're citing things that don't exist, because they're hallucinations from AI? And it's like, oh whoops, this whole article was written by AI.
I think I've seen legal cases where they're citing case law that doesn't exist. I think there was a book recently that got pulled because they said it was written by AI. How embarrassing; that would suck. So I think you project yourself down that future path: if you told somebody this was AI, would it matter? What would their reaction be? If they're just like, oh, that's cool, what AI did you use? Then it's fine, go for it. But if it would be mortifying, like your book getting pulled off a shelf and everybody seeing you on their news feed, that would be so embarrassing. Don't even go down that road. You gotta do it the hard way, unfortunately.
Speaker 4: Well, you were talking about kids using AI versus adults, and it's definitely different for adults, right? We've already maybe figured out our pathways, our systems for learning, how to access and use the different specialized parts of our brain, which most of us want to do, even if we don't understand the brain as much as you do. Most people are like, yeah, I want to use my brain, I want to keep it healthy, I want my cognitive performance to be good. So then you do have to think about, well, are you not building some pathways, giving yourself the chance for some creative thinking or some learning that might actually be stimulating for you? It might be rewarding in and of itself to have done the work, the learning, to have made the product. Versus, of course, the kind of offloading to AI, I should say, where it's a mundane task, where there's not a lot of value you would get back from doing it yourself. I think that's a lot of what we help people work on. When people come in and say, hey, I'm here to improve cognitive performance, how can I use AI as a tool in my ecosystem? You have to work backwards again: what task, what purpose? What are you okay to put out there as coming from AI, versus coming from you? What do you want to take ownership of? What do you want to set you apart from the next person, if you're both trying for C-suite-level jobs? Do you want to be able to demonstrate knowledge in an interview? Do you want to be able to speak on the fly, on a whim? Or do you want to have to say, oh, I don't actually know that, I leaned on AI a bit too hard?
And in all those ways, we can be flexible with that learning, and work ourselves backwards if we go too far one way or the other. But it all starts with, oh, maybe we need to be a little more aware, and have that metacognition: let me think about what I'm doing, what I want to do, what my goal is here. Yeah.
Speaker: I love that small shift in perspective: would I feel comfortable, feel okay, telling somebody that AI did this, or helped me, or wrote it? Because I am inclined to always say no. I would never feel comfortable giving AI credit. Doesn't mean I don't use it, right? Like, I'm a hypochondriac, so I'll use it to be like, am I dying? And usually it tells me, no, you're not dying, which I love. Whereas, you know, WebMD would have been like, yep, you're gone. And I'm like, okay. So that's what ChatGPT is good for: making me feel like I'm okay. Stay with us another day. Yeah, the ticker's still going. But it also sounds like AI right now is used in a very social capacity too, right? Like Norah was saying: do you actually want to learn something, or do you just want to take what AI said and go with it? And I can't help but use LinkedIn as an example, where people will ask AI to write them a post, and they'll just copy and paste it. It's like they didn't even read it; they had nothing to do with it. And you can tell that AI wrote it, because it sounds a certain way, and it uses the em-dash. I sniff out AI like nobody's business.
Speaker 4: You really do. No, but I'm maybe over the top sometimes. As an em-dash user, this just kills me, because I've always written that way, and it just kills me that it's now part of the AI signature.
Speaker 2: I didn't know what an em-dash was until AI happened. I didn't even know you could push the dash two times on the keyboard and it would do something. So I've actually learned what an em-dash is over the last couple of years.
Speaker 4: Wow. And you're a published author.

Speaker 2: I know.

Speaker 4: You type in words?

Speaker 2: I use the space-dash-space. I don't know what that is called.
Speaker 4: You know, B and I were talking off this topic a little the other day, where B was, and I'll paraphrase your words, saying that you're someone who likes to just write, get something out there, and then you can see it in front of you and decide from there what you want to edit, what you want to change. I'm a bit different: if I see something out there, that's already what's fresh in my mind, fresh in my memory. And it's hard for me to have as much creative thinking when I've already seen one way of doing it. So that's also where it's good to know about yourself. Are you more like you described, where you'll have it write something, and then you'll immediately see, oh, I don't like this, I want to change that, I want to change that? Whereas I get biased: oh, I've seen one way to do it. Yeah, I could slave over it for a long time and really work hard, tweak this, tweak that, try something different here, but I'm already influenced by the version I saw. So I like to start things from scratch. Let me think fresh, have my little brainstorm session, really not feel constrained, and think creatively. Which, unfortunately, I think AI can kind of help with, and kind of not. It just depends. And so it's just like a person, right?
Speaker 2: I mean, like some of your friends. I think of it as, there are three kinds of people you can work with. There's the delegation person, a really good assistant that you trust, where you're like, just do this for me, please. Like a super good research assistant: can you just run me a quick lit review on this? I love that person, right? And I can confidently read that. Am I gonna turn that lit review in with my name on it? My god, no. What a monster. But I am gonna read it and take it for what it is. Then there's the yes man. Do you want somebody who just tells you you're right all the time, who's just gonna constantly stroke that ego? And then, and this is where I land with AI the most, you want that friend who will tell you when you're totally full of it. What you said does not make sense, what you said is not right, you sound like an ass, don't say that. You want that person in your life. You want that person who's like, I don't think you're right. I think you should take five. So for me, I definitely love that. Going back to the post thing: I am a very external thinker. So, Jena, what you're talking about, the people who make a post and then just leave it there, that does give me the sweaty palms, because I do read them, and I almost always am like, I hate that so much. Oh my god, I hate this so much. And that was me in the early days of AI, when I was like, sick, it can write LinkedIn posts for me, so I don't have to. No offense, LinkedIn. And I was so jazzed about it.
Speaker: This is sponsored by LinkedIn.
Speaker 2: Sponsored by LinkedIn. We're not sponsored by LinkedIn, but we'll take your money as we talk about this. But I remember thinking, this is gonna be awesome. And then I had it write a LinkedIn post, and I read it, and I was like, I don't like it. And this was before that AI style of writing was as obvious to us as it is now. And I was like, this is not gonna do. This is not it for me. This is not catching.
Speaker: It's efficient, right? It saves a bunch of time. And in an era where we want instant gratification, where we want things done quickly, we don't always feel like we have the luxury to brainstorm and be creative. We need something to help us along the way, or to just do it for us, and that's kind of dangerous.
Speaker 2: It can be dangerous, and I think it weakens your threshold for what you're capable of doing later. It also really helped me learn what it is that makes my writing sound like me, because I hadn't really taken time to think about that. And I realized, like, I ramble, I go off on weird tangents. I'm kind of a stream-of-consciousness writer. Sometimes that's good, sometimes it's bad, but it's decidedly a signature. You can tell when I've written something, right? Like, yes, yeah, you can. Well, you're a great writer. Thank you. Oh, yes, thank you.
Speaker 4And very conversational writing, like, you're a conversational writer.
Speaker 2And you know, when I've done the blogs and things like that, right? It's a lot of stories, a lot of this and that. And as soon as that's gone, it's like, oh wow, the post is neutered for me. This doesn't feel like me at all. This is weird. And how could AI ever write like that? Forget about it, it couldn't, unless I fed it everything I've ever written, maybe. But it's not gonna have that weird anecdote about when I went to Taco Bell the other day that I'm inevitably gonna talk about for no reason in this post about the amygdala.
Speaker 4Or capture your current state, your current mood, kind of your current mindset, right? It can't flex with that the way you naturally do. And it gets back to a conversation we've had before about putting that AI post on LinkedIn. Again, no shame, like Jenna said, there are many reasons we might do it, but putting that AI post on LinkedIn gives us an accomplishment. Ah, good, right? Really got it done. All right, cool. Could it be better? Probably, but I don't want to spend any more time on it. Versus getting that opportunity for achievement, right? And as we've talked about before, both feel good, but achievement gives us a bigger chance for an experience. And I say experience in a very, you know, interoceptive way: the physical, the emotional, the cognitive, everything working together. And there's a chance you miss out on that. In the same way that this conversation, sitting here in person, all talking, engaging on topic, is way more stimulating and rewarding than it would be if we, what, had AI do this, obviously, or even did it virtually, or just said, hey, let's each write our answers to this topic.
Speaker 2Or if you had, like, ChatGPT and Claude sitting on the couch talking at each other.
SpeakerI'm sure somebody's done that. Do you remember? Have you ever seen, there was this weird commercial where this girl was using AI to remember who people were at a party or an event? No, what?
Speaker 3Oh my gosh.
SpeakerAnd because AI was running her calendar, it knew what event she had been to previously. So she showed up, and this guy approached her and was like, hey, it's great to see you again. And she was like, one second, and then went around the corner and was like, Gemini, who is this person? And it told her who it was: it's this person, and he helped you do this, this, and this. So then she could come back and act like she remembered who he was, but she didn't.
Speaker 2Oh, so Gemini was like her wife that everybody brings to the cocktail party, right?
Speaker 4Like, getting almost to a level of fear-mongering, like, you're gonna lose all your brain functioning and capacity. But that's what it made me think of. And it gets back to your point about the social offloading too, right? Where someone is just so used to, I keep certain important facts up here, but then I offload things to my spouse or to other people around me on the team, and I'm okay being labeled the absent-minded professor, because that means other people are responsible for information and I'm not. I mean, you can set up this weird, not even symbiotic relationship, right? A very one-way relationship with AI. And if you want that, great, right? But we're naturally social creatures.
SpeakerWe like to be able to, like, I always have the Notes app on my phone up, and I'm offloading, typing stuff into it. Like, if she had just done that, she probably would have remembered more about this person than if she just said, AI, remember who this person is.
Speaker 2It tells you that she didn't care. And think of the stereotype in a lot of shows, right? The executive guy, the president of the university, whoever, the powerful person, and they almost always have an assistant. And the assistant's like, this is so-and-so, right? It's a joke, it's a trope in a movie. And it's funny because it makes the guy look like a douchebag, because he doesn't care about these people, he doesn't care to remember them. That commercial kind of makes that chick seem like a douchebag. She didn't care who this guy was, and I know she continues to not care who he is, because she's asking her assistant to remember for her. To me, that's like, okay, this is a thing that doesn't matter. Kind of getting back to what we said before: does this matter? If you're offloading it to AI, the answer to that is inherently no, it doesn't matter. Or at least, you're answering that question no, whether you want to or not. You've delegated it, in the same way that, even if I had a super trustworthy RA, am I gonna have them come with me to a party and be like, what's up, RA? Can you just remember who these two people are? Because I can't be bothered, because I'm so, so brilliant. As an absent-minded professor type, I do offload a lot of things, but I do not offload people and relationships. I don't love that. To me, that hurts a person's feelings, like, that's a person's value. So I am not good at remembering, but I will remember names pretty well, because I make the effort. I feel like a person is worth at least trying, you know: where do I know this person from? 
Where have I met them before? And I feel really bad when I can't. But like, you know, yeah.
Speaker 4And so we talk about needing to differentiate, right? What's worth learning, what's not. But that can be really tough to do, especially if you're in a job where you maybe don't have time for lunch, much less to think about, what are all my responsibilities and duties in my job? Where does using AI make sense and maybe work to my advantage, versus where might I be losing out on something I don't even realize, in terms of my longer-term cognitive performance, or the cognitive abilities I need to succeed and thrive at this job and maybe even make it to the next level, right? Yeah, yeah. So sometimes you need someone to step in and talk to you about that stuff, help you step back and look at it a little bit, and have that meta-level of reflection on it, so that you're not just scrambling to do it yourself when you've got spare time. Sometimes we actually need something a little more in our face to remind us and focus us: okay, actually, it could be valuable to set aside some time to think about how we're using this.
Speaker 2I mean, I would end on what you said earlier. I loved how you brought in achievement versus accomplishment. And what it made me realize is, no matter what you're using, AI, another person, a tool, your phone, whatever: when you aim for accomplishment, just get it done, you're using neural pathways you already have, they're just running through the motions. You're running voltage through things you already have. When you aim for achievement, and you call it an experience, and I dig that, that's neurodynamic. You are now creating new pathways or strengthening pathways, right? You are doing something a little different. You're using that neuroplasticity, that change, and you're making yourself a little better in the process, as opposed to, I'm just gonna roll on through; this road exists, I'm just gonna take it. Or at least you're giving yourself the opportunity to get better, right? Even if you fail, you still have that opportunity, and it's still very neurodynamic. So I really liked that. When you said it, I was like, yeah, yeah.
SpeakerNice. Yes, I agree. I think this was a fun conversation. How you use AI matters. It does, big time. Cognitive health matters. Listeners, if you feel like you're struggling with AI or you want to get better at using it, reach out to us. B made a great post on LinkedIn (I wrote it myself!) with some specific strategies on how to use AI to your advantage. If you want to know more, we'll link it in the show notes. And if you want to have a conversation or learn more, just reach out, and maybe we can help you with it.
Speaker 2Yeah, and thank you to Karl from Minneapolis for the idea for our conversation. That's right. Thank you. Anxiety Society, roll us out.
Speaker 1If you have a question or a topic that you'd like to present to Firing and Wiring, be sure to email us at support at interoceptlabs.com. Firing and Wiring is produced by Interocept Labs and is hosted by Jena Mahne, Norah Kennedy, and Bethany Ranes. Our theme music is graciously provided by Anxiety Society, because all the cool kids support Minneapolis music.