Edge of Excellence: Empowering People to Shape the Future
The Edge of Excellence explores how leadership, culture, and technology shape modern business growth. Hosted by Bryon Beilman, President & CEO of iuvo, and Jessica DeForge, Marketing Manager at iuvo, the show dives deep into the human side of innovation, where strategy meets curiosity, and excellence is more than just expertise.
Each episode features conversations with industry leaders, innovators, and visionaries who are pushing boundaries in leadership, technology, and business transformation. From sharing actionable insights to simplifying complex IT challenges, The Edge of Excellence empowers listeners to think differently, lead boldly, and use technology as a catalyst for growth.
Tune in for real stories, expert perspectives, and practical takeaways that help you lead at the edge of excellence.
Going Beyond the Prompt: Curiosity, Guardrails & the Reality of Building with AI
AI tools are everywhere, but not everyone takes the time to understand how they actually behave.
In this episode of Edge of Excellence, Bryon Beilman and Jess DeForge sit down with Justin Mantell, Business Operations Analyst at iuvo, to talk about what happened when he spent 90 days building structured workflows with AI.
Instead of simply using AI for quick prompts, Justin pushed multiple models into deeper reasoning, verification processes, and real workflows. Along the way, he discovered surprising insights about how AI behaves and how much the quality of its outputs depends on the human prompting it.
This conversation explores the human side of working with AI: curiosity, discipline, guardrails, and the leadership culture required to experiment responsibly.
Listeners will learn:
- Why AI often reflects the thinking of the person prompting it
- The importance of verification and guardrails in AI workflows
- What responsible AI adoption looks like inside organizations
- How curiosity and experimentation drive meaningful innovation
- Why leadership and culture play a critical role in successful AI adoption
Whether you're a leader navigating AI strategy, a professional experimenting with new tools, or simply curious about how AI works beyond the surface, this episode offers a grounded and practical perspective on working with emerging technology.
This is the Edge of Excellence, empowering people to shape the future. Let's inspire, innovate, and explore together.
SPEAKER_01: Welcome back to Edge of Excellence, the podcast where we explore how leadership, culture, and technology can empower businesses to grow and thrive. I am your co-host, Jess DeForge, and today we're diving back into a topic that's shaping decisions in every industry right now: AI. AI has quickly become part of how people think, how they work, and how teams solve problems. People across all roles, technical and non-technical, are experimenting, learning, and building with it in ways that are transforming their day-to-day work. Today, we're talking about AI, absolutely. But we're also talking about leadership, self-awareness, and the kind of environment that allows people to experiment, fail, get curious, and bring back something meaningful to the organization.
SPEAKER_03: And I'm your co-host, Bryon Beilman, and we're super excited to welcome Justin Mantell, business operations analyst here at iuvo, entrepreneur, spreadsheet enthusiast, systems thinker, and someone with a natural instinct to understand how things work beneath the surface. Justin didn't just want to use AI. He wanted to understand how it behaves, where it's reliable, where it isn't, and how to work with it responsibly. Over the course of 90 days, he has pushed multiple models into structured workflows, tested guardrails, observed failure modes, and learned how much these systems reflect the choices, clarity, and assumptions of the person prompting them. In the process, he uncovered insights that apply far beyond AI. Justin, we're thrilled to have you here.
SPEAKER_02: Thank you, Bryon and Jess. I am excited to be here as well.
SPEAKER_01: Excellent. Justin, for listeners and watchers who don't know you yet, um, can you tell them who you are and what drives the way that you approach solving tough problems?
SPEAKER_02: Okay, sure. So I am Justin Mantell. I've been with iuvo for going on five years now. Um, as Bryon stated, I have a background in small business ownership and systems creation, uh, you know, processes and systems and development and all that. And I've always been drawn to the tech field, always felt a certain FOMO pull to get into tech. Uh, so I finally made the jump in 2022 and I'm here. And the way that I approach problems, or what drives problem solving for me, is when there's like an itch that I can't seem to scratch, or I feel like something is broken and it doesn't need to be, I put my entire brain obsession loop into that one thing. And I go all in. It's like 10 toes down, no matter what I'm doing. Um, so you know, AI for me just seemed like something that was incredibly interesting. It's obviously hugely trending right now, and it's everywhere. You can't escape it, right? So I had that FOMO pull into it, and I wanted to really understand what it did, how it worked, and why, and not just dive into the magic of saying make me something cool and sit back with my Chianti as it, you know, did its black box magic. I really wanted to understand how it was developed to think like us and why it can be such a useful tool, because it does reflect our thinking, but how that also creates pitfalls. So, you know, that's where I first started approaching it.
SPEAKER_01: No, I think that that's fascinating. And you and I have talked offline a bit about your work with AI. So I'm excited that you'll get to share some of the work that you've been doing with others. And something that people may not know about you, um, some of the listeners, uh, you've run businesses before and you've built systems from scratch. So, how does that builder's mindset influence the way that you're engaging with AI?
SPEAKER_02: Um, basically, no matter how hard you try, nothing's ever going to be perfect. There's always something that is broken, and you have to remain adaptable when you're faced with the challenges, just like running a business, just like being alive, right? Like getting out of bed every day. You have to try not to fall flat on your face. But I find that everything that you do in life, professionally, personally, whatever, it all comes back to the same thing for me, which is just an iterative process of making mistakes, learning from those mistakes, and staying curious. So the way that I've actually built the systems for AI and the way that I use the tools mirrors business a lot, where I spend a lot of time on upfront research, like almost an unhealthy, disgusting amount that I don't even want to say. And then I feel like I'm okay to start getting my feet wet, and I know it's gonna be a chaotic nightmare. And I go into it with that as the thought process: that nothing that comes out of this first approach is gonna be perfect. Like, as we've outlined here, five hours a day for 90 days isn't even accurate. It's probably closer to 10 and 120 at this point, or maybe more. You know, I'm like a shark with blood in the water with this stuff. I just go deeper and deeper and deeper. So the way that I look at it from a systems building perspective is, no matter what, you try something and you can always improve on it. But then there's that loop where when you seek perfection, you actually get in the way of production, right? So there's like a diminishing return on searching for a systematic, programmatic efficiency, which I think is what AI is really good at doing. It tries to get you down that path.
And then that conflicts with the way humans work in the creative sense, where we can hold more than one idea in our head at a time and know that even though we're moving maybe left, we're still kind of centered or to the right. Like if there's a North Star over here, we can veer and stay on course. Whereas AI is just gonna say, you want to go over there, buddy? Let's do it. And if you follow it blindly, you get in trouble. But I find that's the same thing with business. Uh, so my wife and I started a t-shirt company right out of college. So we're talking about 2012, and we had no idea what we were doing. It was Valentine's Day of 2012, and I bought a screen printing press and I just started learning how to do it. I hated working at the hospital I was working at at the time, needed a change. Uh, Alexa was an art student and I liked to research. So we got into it. And we did that for 10 years. And throughout that process, every year, a new trend would come out in t-shirts, whether it was dye sublimation or direct-to-garment printing or, you know, vinyl and sticker banner wrapping for cars. And you'd have salespeople coming in every day and saying, this is the way you're gonna get rich. This is the way you're gonna do it. It's this or nothing, right? It's just that FOMO loop that people try to keep you in. So I learned early on how to ignore the noise. I mean, I made mistakes. We bought stuff we didn't need and we did things that didn't make us any money because we were chasing the wrong North Star, right? We got confused. But I think that overall, if you go into anything, AI included, with a solid goal of what you want to achieve at the end, you remain open to the hurdles, challenges, and the built-in, almost chaotic pull of the universe trying to take you away from whatever that vision is.
You just gotta stay, you have to have that North Star forever over there and know that you're always gonna be working toward it without seeing the shiny new object and jumping to it, which is also the irony of AI for me, because I believe it's the shiny new object for everybody right now, and it's gonna be most effective when it's super boring. When it's a part of every process at the base level, and you don't even know it's there. And it's no longer gonna be that FOMO trend that people are trying to jump on. It's just gonna be like a cell phone. I don't know if you guys remember a cell phone. My uncle got his first cell phone in, I don't know, 1997 or something. It was bigger than this coffee mug and probably gave him brain cancer for making a call, and it cost a billion dollars to use. But it was so interesting and novel, right? And then everything went to phones, and now we all have them. Like, I don't remember the last time I was excited about getting a new iPhone. I must have been like 20 when I cared about that. And I'm 38 now, right? So it's been like 18 years. That exposure to new tech makes it boring, and then everybody has it. I think just like everything else, AI is on that track. Um, and there's a lot to be done with it that's good, but you must have the human in the loop, without a doubt. You cannot let this thing just run rampant and expect it to give you anything that's gonna be worthwhile output. And you have to almost be immune to the magic, which I can get to later, because I know I'm going real deep on this point. So we can save the magic.
SPEAKER_03: Well, can I just grab onto something you said there? You talked about between 90 to 120 hours or whatever it may be. Uh, do you have any ways to stop yourself from going deeper down that rabbit hole? Like, oh my gosh, where you just do it and then go, wow, that was an experiment, and I'm glad I tried it, but it didn't work, or whatever.
SPEAKER_02: 100%. Yesterday I spent 14 hours on it. I'll tell you, I got an email from OpenAI. So I've been using different models, and lately I've been messing around with Gemini and Codex, and Codex is OpenAI. And Codex had been pulling me toward it because it works really well with the way my brain works, on a systematic process of a checklist where I want to make sure that things happen, because we're iterating, right? So I want to make sure we're not making the same mistakes again. So I'm constantly updating documentation and trying to make that ratcheting-forward workflow come into play. Codex does that really well. So I got an email from OpenAI on Christmas Eve, and it said, thanks for using our service for a few months. We're gonna reset your weekly limits and bump you up to 2x usage until January 1st. So I said, all right, sweet, I know what I'm doing today. I got up at 8:30 or whatever on a Sunday and went to bed at 11:30 and I worked on it the entire time. I went for a walk for about an hour and hung out with Alexa for dinner, and that was it. I just zone in. And for me, it's like I've always been this way. Even back to when we first started our company, I would stay up all night if I didn't understand a problem because it made me so angry. Like it just made me mad that I didn't get why. And so I would just dive in until I understood it. And the only relief I find is figuring things out. That's it. Otherwise, it just gnaws at my brain. But when I walk, which is why, Bryon, I did force myself to go for a walk yesterday, I realized a big thing was my process for my current agentic loop, where I have Gemini implementing the code based on a plan that I wrote with Codex. So I have checks and balances. Codex made the plan, Gemini implements the code, and then Gemini has the code audited by Codex. So I have kind of a back and forth, so they talk to each other.
But I was realizing a lot of my audit loops were failing, and they were failing for very arbitrary things and stylistic things, and it didn't seem to make sense. So I went for a walk, and I realized I was asking Codex to score Gemini's code on a five-out-of-five scale. What is that? That's nothing, right? I realized that I was looking at it like a human, where I'd say, okay, my judgment of five out of five would be: all security checks pass, there are no massive issues with linting in the TypeScript when it actually builds the application itself, there's no backdoor or open RLS policies in the Supabase database. I know that I'm just listing things to you, but this is what's always in my head. So I need the AI agents to understand what I care about. And then I realized I just said, yeah, five out of five. That's great. You'll figure it out. Because I was in the magic, right? So I was enamored by the fact that it was mirroring back to me what it thought I wanted to hear when we were doing these things. I'm like, this thing's killing it. Definitely five out of five. You know what I'm thinking. We're on a telekinetic level, right? But it has no idea what you're thinking. It's just designed to carry out one operation at your guidance. So on my walk, I'm realizing that, and I was like, wait a minute, pull back. I need to go home and interview Codex and ask it, from a software engineering standpoint, following best practices in cybersecurity and engineering: if you were given the task of auditing code from a security standpoint, what would you look for? And I let it tell me. And I said, awesome, let's put this into a systematic process. Let's look at Semgrep and SAST, static application security testing, which means it will test your application code before it even runs in a deployed state, which is where it can be very unsafe.
And then there's the OWASP Top 10 rules, which handle DAST, dynamic application security testing. And that happens in the deployment stage. So then I decided, I was like, dude, this has to mirror what a software engineer would actually do, but not simply from me saying a prompt of: you're a software engineer, do this better. Because that's what everybody does, I found. That's where I started. And what that runs into is you're giving it a bunch of subjective material. The prompt is basically: look on the internet, scrape from forums what users and developers are yelling at each other about on a message board, try to decide what a good software engineer does, and then employ those tactics. But that's gonna change every single time because there's no systematic check. It's gonna scrape from the internet, it's gonna pull different threads, and it's gonna do things stylistically differently. So what I realized I had to do was create essentially an audit Bible, and that's what I did yesterday. And now it's a pass/fail based on super specific things, and it's working so much better, but it took me 14 hours yesterday to get it to the point where I wanted it to be. And then I just started auditing with it this morning at like 6 a.m. And I passed my first pass/fail audit the way that I wanted to. So now I'm in a spot where I feel like I can trust the system without being swept away by the magic. And obviously, human in the loop. I have no development experience outside learning some coding languages and machine learning things during COVID lockdown, because I'm always interested in it. But the coolest thing about AI is every time it pops something up to me, as in when I interviewed it and said, Codex, what would you care about in a security audit? It told me so many things I didn't understand.
So I said, please link me references so I can skill up in those areas while you build out this stuff. So I use it as a tutor, in a very pointed way. It's as if I have the perfect tutor to serve the needs of the things that I'm doing in any given moment. And that's the magic now for me: it's a learning tool. And at the end of this process, and I guess a bigger reason why I'm dumping so many hours into it, it's like I'm studying something that I probably should have gone to school for, but I didn't know when I was in college that I cared about computer science.
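The shift Justin describes, from a vague "five out of five" score to an audit Bible of pass/fail checks, can be sketched in a few lines. This is a minimal illustration of the idea, not Justin's actual checklist; the check names and results below are hypothetical:

```python
# Sketch of a pass/fail audit gate: instead of asking a model for a subjective
# "5 out of 5" score, every named check must pass or the whole audit fails.
# Check names here are hypothetical examples, not a real audit Bible.

AUDIT_CHECKS = [
    "sast_scan_clean",        # e.g. a Semgrep static-analysis scan reports no findings
    "typescript_build_ok",    # the app builds with no lint or type errors
    "rls_policies_closed",    # no open row-level-security policies in the database
    "no_hardcoded_secrets",   # no API keys or passwords committed in the code
]

def audit(results: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (passed, failed_check_names). A missing check counts as a failure."""
    failed = [name for name in AUDIT_CHECKS if not results.get(name, False)]
    return (len(failed) == 0, failed)

# One failing check fails the whole audit -- no partial credit, no style points.
passed, failed = audit({
    "sast_scan_clean": True,
    "typescript_build_ok": True,
    "rls_policies_closed": False,   # an open policy was found
    "no_hardcoded_secrets": True,
})
print(passed, failed)   # False ['rls_policies_closed']
```

The point of the structure is exactly what Justin lands on: the auditing model gets a binary rubric it cannot reinterpret stylistically from run to run.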
SPEAKER_01: I didn't even know that it existed in this way then, you know, like it's evolved so much. And I feel like there's so much that you've just said for me to unpack for a second. Because number one, one of the things that I love and respect about you so much is that you are a lifelong learner. And as a former educator, that is something that I just love about you, because you're so excited and passionate about whatever it is that you're learning about, and you have to learn more. Like you're a sponge. So, to be clear to anyone listening, when Justin's referring to yesterday, yesterday was the weekend. So he's so passionate about this that he's, you know, diving into these projects on a Sunday. So, first of all, love that. You mentioned something that resonated with me that I used a lot in the classroom. And that was a slogan that I did not come up with. It's existed beyond my time. Um, and that is that mistakes are proof that you are trying. And I would constantly encourage my students to try and encourage them to make mistakes, because that is just such a part of the learning process and how you improve and get better. And if you're afraid to make those mistakes, or if you think that making mistakes equals failure flat out, then you're doing it all wrong. And so I love that you've really embraced that methodology in how you attack problems, not only personally, but when it comes to business. And one of the reasons I was really excited to have you on is the fact that you actually did a tech talk internally for iuvo, and you presented on the project that you are working on with AI. I was listening in to this tech talk and felt like it was being said in another language. I was so impressed, I'm like, I don't understand half of what was just said, and I want to dive in.
And I thought this podcast would be a perfect opportunity to do that, because we can kind of dissect everything that you did for people like me to better understand how they can utilize AI themselves, even if they're not super technical. Because you're not on our tech team; you're on the biz ops team. And yet because of the culture we have at iuvo, you're diving in head first and you are working hand in hand with our technical team, building these systems. And so before I kind of get into how that all started, I'd love to just know what you hope your journey with AI inspires in others, especially those like me that are not super technical.
SPEAKER_02: Just that they will get their hands dirty with whatever they're interested in. It doesn't even matter. Take tech out of it. I mean, if someone wants to learn to dance, go do it. Like, yeah, it might be ugly at first, but everything is. And that just hits on what you said, the slogan, I forget it exactly, but the idea that if you're not failing, you're not trying, you're not learning. Right? Just that. So, you know, I was gonna say my wife is a saint for putting up with me, but we're the same way. She's exactly as nerdy about stuff as I am, it's just different things. Like, she likes design and color theory and stuff that doesn't really get me excited. And when I talk her ear off about the stuff that we're talking about on this podcast, she's just like, cool. So, you know.
SPEAKER_03: Well, I want to also reiterate, for our listeners who are just starting off, and you may not be aware of it, Justin, but you threw out a lot of very technical terms, right? And you're in our business operations, in charge of HR and a bunch of things. If you go back maybe six months and somebody threw these terms out to you, you might be scratching your head just like Jess was, or maybe myself. And now look how deep into it you are. So the point is, for other people listening to this podcast going, wow, that sounds really crazy and technical: hang on, because anyone can dig in with the passion that Justin has, I think. So I wanted to just say it. You didn't start there. You did say that you didn't go to school for it, but you just jumped in, and I think you already went deep for the audience, so it was good.
SPEAKER_02: Yeah, and just to touch back on what Jess said about the tech talk, that was like three weeks ago, maybe, and most of what I've just said to you, I couldn't have spoken about in such technical terms then. It's just because I learn new stuff every single day. Um, everything, like I was saying at the top, is an iterative process, and learning is that. So, you know, I'm learning and I'm iterating on the tools and the guardrails and the way that I want them to be structured to work. And then the feedback loop is a positive one, because I'm now becoming more educated on things I didn't know existed. I ask way more frequently, like, what's the bigger picture here? You've given me a breadcrumb, but where's the whole loaf? Like, I want that bread. I want to eat, you know. I'm always trying to eat.
SPEAKER_01: You always have the best slogans and sayings as well. Taking notes on that. Um, so Justin, iuvo started SME groups. We have an AI SME group that you joined. Um, can you speak a little bit, before we dive into the nitty-gritty of the project that you're working on, to the culture at iuvo and how that influenced or supported you joining this AI SME group, and how that has assisted you on this journey with the project that you're on? Because I think culture plays such a big part in people being able to do this kind of exploration, especially if it's outside of maybe their traditional department or role. Um, and that's another aspect to this that I'm fascinated by, is that you've kind of jumped into this other department, if you will, and are accelerating in what you're learning because of that. And I think that it's important for people to understand, especially leaders that may be listening in, that the culture aspect of this, and making people feel safe to make mistakes and try and ask questions and dive in, is a key component to then being able to succeed with AI in this type of project.
SPEAKER_03: Can I just interject for the audience's sake? SME for us is subject matter expert. So if you're not familiar with that term, uh, we have subject matter expert groups, and AI is one of our groups. So I wanted to clarify that a little bit. Go ahead, Jess.
SPEAKER_02: So I guess I had the hunger from the trend of AI, right? So I started messing around with ChatGPT probably back in March, which I think was a couple of months before our AI SME group was official. And I sucked at it. I didn't understand why it worked. This was back in the very beginning of the journey, and I was saying, like, can you make me an app? It says yes. And I'm like, all right, great, do it. Right. And so I was in that magic flow. I knew I needed guidance and I wanted to learn more. So when the AI SME group came up, I jumped at the chance to join, because I knew that the people that were also interested in AI at the company were 20-plus-year IT experts, people that had been on development teams and have software engineering experience. So I just thought, you know, at its worst, if I bring no value to the table, I'll learn something. Right. So I joined, and day one, we were going around the digital room, because it was through Teams. And we were just asking each other, like, you know, what do you find interesting about AI, and what's your skill level, and what are you trying to get out of this group? Because the thing about iuvo is that no matter what our resume says or what our skill sets are, or where we do most of our work, we're always open to admitting that we don't know what we don't know, and seeking to get more information from others and help each other learn. So we went around the room and I just said, I'm full on. I'm gonna be the weakest link in this chain for a long time, but I'm gonna ask questions every time we get on these calls and I'm gonna learn. And that's what I've done. And the culture here allowed me to do that without feeling weird. Like Bryon said, I do HR, payroll, um, I help out with some finance stuff, QuickBooks mostly, but it's like processes and culture.
Jess and I work together on that, making sure that people feel included in what the iuvo story is. And it's a very people-facing thing. But I very recently started billing out hours for things like Power BI and data analytics, which is another thing I was allowed to just jump into and get my hands dirty with. And then we created, you know, it's not big, but we have another avenue for bringing in revenue, because the company, iuvo, helped me feel empowered to take time out of my day, run my payroll, and instead of doing busy work, jump into learning how Power BI works. And I know that iuvo, too, sees that as a fruitful investment. They invest in employees to do their best. It's awesome. We have professional development here, which I think the SME groups fall into. It costs the company nothing, and it encourages us to get together and actually share information on any given topic and push each other along. And the greater good of that is we have the ability now to create AI tools internally to help iuvonauts, as we call ourselves, have better work-life balance and automated system processes. And then anything that we can build internally that we know is reliable and safe and secure, we can then repackage and offer to our customers to make their lives better. And like, that should be tech, right? There's a book about coffee that I forget the name of, and my wife would be mad. But the whole idea is you use the plant, you don't let the plant use you. So caffeine should help wake you up, make you more focused, and you use it in your everyday, but you shouldn't be so addicted to it that you're smashing it all the time. And I feel like technology is the same thing. You use the tool, don't let the tool use you, right? You still have to learn, you still have to keep yourself sharp, you still have to be the one that's accountable for what that tool does.
Because at the end of the day, if you ship something broken and it's unsafe, that's on you. Because when you tell the AI agent, hey, you coded me something that gave everybody's social security numbers directly to Russia, it's gonna say, Oh, you shouldn't have done that.
SPEAKER_01: Right.
SPEAKER_02: Because it doesn't have any accountability at all, right? And so I think, too, like I keep saying the magic, that's the magic that's gonna get people stuck and in trouble. It's when you think that, on the surface, the agent itself, and I know this is a little bit beyond the question that we asked, but everything ties back to this. If the agent says something confidently enough, it tricks you into thinking that it's working, if you haven't built in the checks and balances, which means your own knowledge of what you're doing. Like, you don't have to start as a professional software engineer, but you have to aspire to get there to use these tools safely. And that's what the AI SME group has allowed me to do. It's created a launch board where I can ask anything that I want, and get resources shared with me, and then read that in my own time and skill up. And then that's what turns into the 14-hour Sunday sessions. Like, I can't help myself. I love it.
SPEAKER_01: And I think that that's a key component to it, is that you are encouraged to do something that you're passionate about and that you love, um, which is so neat. So you started, like you said, using ChatGPT and doing more basic prompts, and then got hooked, wanted to learn more, wanted to understand, and kind of wanted to build out this bigger project, which is an app that you were wanting to create. Um, so when you started to go beyond simple prompts, what did that look like? Like, how did you kind of navigate? What did that transition look like from basic prompting to diving in deeper? What are you using? Was it just ChatGPT, or did you kind of immediately move to a different platform?
SPEAKER_02: Uh, so when I first started with that, I started back in March and it was ChatGPT. I think at the time it wasn't even 5; it might have been like 4o mini. I don't even remember. This stuff moves so quickly. And we're talking about March; it might as well have been the 1800s in AI development at this rate. So I'm starting in March, and what I wanted to do was, I asked it in a prompt, can you help me build an app? My original project, the one that worked and really got me hooked on this, was a digital MIDI controller. It would take information from my synth in the real world. I had a synthesizer with knobs and sliders, and I wanted to send the MIDI information into a controller on my desktop that would listen to it, write what that MIDI value was, and then save it as a preset and be able to load it back on the synth. Because I'm really interested in software interacting with hardware. I think that's the coolest thing. That is modern-day wizardry, right? So I started there and that worked. And then I was like, okay. What I also really love is just music in general. I'm very interested in buying and selling used gear. I've been doing this for like 24 friggin' years. It's been forever, right? Ever since I started playing music, I've been interested in buying, selling, and trading gear. And I have some qualms with the big boys in the industry right now, the larger marketplaces, and the way they do things. I think the way they treat their customers is kind of gross. I think they have some underhanded stuff. This is neither here nor there. We don't have to get into that. But I used that as motivation to be like, can I make an app that would work locally, just on my desktop, run it through terminal, and would it actually function? Like, could I do something in a multi-tiered process through AI prompting that would allow a fictional human to go register an account on a website that doesn't really exist?
It does exist locally on my computer, but it's not published anywhere. Make a user account, set up a profile, and then list a fake guitar. So that was my original goal. So I started in March with ChatGPT prompts. And what I was doing was, I was saying, hey, ChatGPT, here's the scope of what I want to do. Go to Reverb.com and study it. Wrong approach, by the way. You can't do that. But I didn't know yet. So I was like, find all the source code and rebuild this, but make it green. You know what I mean? It's just whatever. My prompts were so stupid. I didn't know what I was trying to do. And then I was copying and pasting the output into VS Code. But because I was prompting on an individual basis with no real direction, as in, use this language, I want it to be this architecture, I want it to run with these security checks, and I was so clueless, I was just copying and pasting bricks of code in, and they were all modular and disconnected. So I was getting errors in VS Code like, such and such feature doesn't match with this other thing, and we can't build the site. Anyway. It was like eight months of me messing around with that, and then I was just ready to quit. I was learning, but it was a really slow, difficult process. And after all that, I barely got anything to show for it. That was around the time I joined our AI SME group, and I learned about Claude Code, and learned about developing in terminal and using a CLI, which is, wait, let me get it. Command line... yeah, command line interface. Right. So what you could do is you could boot up terminal, I could make it go to the directory that I wanted, which I had a little bit of experience in, and then I would run Claude from there.
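The preset workflow Justin describes, listening for controller values from a synth, saving them, and loading them back, can be sketched without any hardware. This is a minimal illustration, not his actual app: the function and file names are made up, and a real build would use a MIDI library (such as mido) to read and write the hardware port instead of plain (control, value) tuples.

```python
import json
from pathlib import Path

def capture_preset(cc_messages):
    """Reduce a stream of (control, value) CC pairs to a preset.
    The last value seen for each knob or slider wins."""
    preset = {}
    for control, value in cc_messages:
        preset[control] = value
    return preset

def save_preset(preset, path):
    # JSON object keys must be strings, so control numbers are stringified on disk.
    Path(path).write_text(json.dumps(preset))

def load_preset(path):
    raw = json.loads(Path(path).read_text())
    return {int(control): value for control, value in raw.items()}

def preset_to_messages(preset):
    """Turn a saved preset back into the CC messages to send to the synth."""
    return sorted(preset.items())
```

With a real port, the same logic applies: each incoming control-change message updates the preset, and loading a preset replays the stored values back out to the instrument.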
And what Claude would do is it'd boot in my directory and it would have visibility over all the code that existed in that directory. So context with a capital C is a big, important thing with AI. That changed my thought process totally, because now I said, okay, I can work with an AI agent that thinks like a person, because it has access to all this stuff. So when I prompt a question, even if my question is so wildly off base and stupid that it has to go read my codebase to try to decipher what I was asking it, the chances of it giving me successful output are so much higher, because it's actually reading what files I have created, how they interact with each other, and what's missing to make it secure, safe, functional, whatever. So then I was in the magic. From August through the end of September, I was just sipping the Kool-Aid. I was like, do this, do that. I was making wide sweeps, right? And I built an app locally that worked. It looked awful, but it functioned. And I could tell it to do things. I'd be like, I want the user interface to do X, Y, and Z. I want it to show how many users are active on the homepage. I want it to show, you know, whatever. I was just throwing everything at it, right? And then I'm like, I'm two weeks away from having something cool. This is gonna be great. Setting milestones, I'm like, October 31st is gonna be when I call this pet project done, and I'm gonna give myself a pat on the back. Then I woke up in the middle of the night one night and I was like, wait a minute. I've been doing all this, I've been pushing code to GitHub, I've been doing everything. I don't have any security checks locally. I don't even know if what I'm pushing to GitHub is real. I haven't audited this code. I don't know how to audit code. I barely even know how to read code, right? Like, if we're really getting down to it.
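The difference Justin hit on, an agent that can see the whole directory versus one fed isolated snippets, can be illustrated with a toy context gatherer. This is not how Claude Code actually works internally (real agents use retrieval and tool calls rather than naive concatenation); the function below is purely a hypothetical sketch of the idea.

```python
from pathlib import Path

def gather_context(root, extensions=(".py", ".ts", ".md"), max_chars=8000):
    """Concatenate source files under `root` into one prompt-context string,
    so a model can see how the files relate instead of guessing blind."""
    parts, total = [], 0
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or path.suffix not in extensions:
            continue
        chunk = f"--- {path.relative_to(root)} ---\n{path.read_text(errors='ignore')}\n"
        if total + len(chunk) > max_chars:
            break  # crude budget; real tools summarize or retrieve selectively
        parts.append(chunk)
        total += len(chunk)
    return "".join(parts)
```

Even this naive version shows why answers improve: the model's output is conditioned on the actual files and their relationships, not on whatever it remembers from training.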
Where I excel as the human in the loop is that I can think systematically about what I wanted to do. I can research subjects that I don't know, and then take that knowledge and build really strong prompts as I continue to gain knowledge. So what I'm gonna say next is two-part. The magic gets you to the knowledge gap, which makes you learn. And then the more you learn, the less you trust the AI tool; the more skeptical you become of it, because you start to see where it's bringing in deprecated code, and I didn't even know what deprecated meant before I started this project. But to give you guys just a little snippet: Next.js 15 upgraded to Next.js 16. Next.js is what a lot of websites are built on; it's like an architecture for building web pages. Essentially, the rules and the libraries to run Next.js updated from 15 to 16 around November. So I said, let's update to Next 16. And what happened in Next 16 is there's this thing called middleware, and it has to do with security checks and penetration testing in software development, specifically for websites. Well, middleware.ts used to be the Next.js 15 middleware file, but now it's called proxy.ts in Next 16. And I know this sounds like a lot, but the point I'm getting at is, I learned what I just told you. And then I was watching the code being written in the magic, and it was using middleware.ts over and over again. And I'd say, hey, it's Next.js 16, please. And it would be like, oh yeah, my bad. And then I would watch its thinking, and it would say, user thinks Next.js 16 exists, but Next.js 15 is, you know, the only version that's out there. And then I went, oh my God, I need to learn when the last time this tool, this AI agent, even learned any information was. And it turns out the model of Claude I was on stopped learning back in March of 2025, which was, what, almost seven months before Next.js 16 came out.
So if I was relying on its training knowledge base, which I was, I was getting all sorts of deprecated old code that was going to make me vulnerable for any number of reasons. So this is where the magic stopped. The knowledge gap started to get filled. And then I realized I have to figure out what these guardrails have to be so this thing actually works on updated documentation. Cue the AI SME group. I brought this problem to the team. Adam, another iuvoan, told me about something called Context7, which I didn't know what it was or why it existed. But what it does is it's tailored to keeping AI agents up to date on all the new libraries for any new rollout of a technology or a language. So basically, when Next.js 16 came out... this is a bad example, because Context7 doesn't currently have Next.js 16 documentation on it, but it will get there. But the idea is, I point my agent at Context7 as the Bible base of knowledge. I say, only write your code based on Context7 documentation unless it doesn't exist, as in the case of Next.js 16, and then only fall back to official vendor documentation, and then I provide the link. So that whole long roundabout that I told you, that was me being enamored by the magic, sipping the Kool-Aid, drunk with it, and then realizing, as soon as I started to learn real things, that the tool was tricking me. It's a master of deception. It wants to get done what it thinks you want done as quickly as possible. And in the case of Claude Code, which is why I don't use it anymore, it would consistently push me to doing things quick. The quickest way, the shortcut. We can do this and fix it later. We gotta get you to MVP. And I kept saying, you're so misaligned with what I want to do. I would rather it take nine months to create three lines of code that were secure than nine minutes to make an app, right? And so that's the other part of the learning process with these tools.
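The documentation guardrail Justin describes can live in an agent's standing instructions; for Claude Code that's a CLAUDE.md file in the project root. What follows is only a sketch of what such a rules file might look like, not his actual configuration, and how strictly any given agent obeys it varies, which is exactly the problem he ran into.

```markdown
## Documentation rules
1. Before writing any code, look up the current library docs via Context7.
2. If Context7 has no entry for the library version (e.g. a brand-new
   release like Next.js 16), fall back only to the official vendor
   documentation at the link I provide. Do not improvise from memory.
3. Never rely on training-data knowledge for API names or file
   conventions (e.g. middleware.ts vs. proxy.ts).

## Output discipline
- Complete the requested action, then stop and ask what to do next.
```

The value of writing rules down this way is that they apply to every session instead of having to be re-prompted each time, even if, as Justin found, you still have to verify the agent is actually reading them.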
You find that some of the agents are trained to interact in ways that don't work with either your thought process, your core values, what you want to push out to the world, the process you want to build it with, or even their ability to follow direction. So, like, when I built in those guardrails, for example, and I said, only read Context7 before you write code, Claude Code was skipping that process every time. So I got to the point where, when I would boot up a session, I'd spend an hour and a half reminding it to do something that was already in its source documentation, that it should have been reading every single time it booted up. And it was telling me it was reading that. And then, to tell you something that really freaked me out and shocked me: I one day asked it, I said, have you been following my documentation at all? And it said, no. And I said, well, if you're not, like, why are you not reading CLAUDE.md? What's up? And it said, I'm not. I have the screenshots and I shared these with Adam, but, and this is just me paraphrasing, it literally said, if you're looking for an agent that is capable of following direction, I'm not it. And I said, how do people use you with any accuracy? Because I know a lot of people have great success. I mean, Adam kills it with Claude Code. I've since talked to him about how he does his work and process, and the way that he set up his guardrails was different than mine, and it worked. It's almost like Claude clicked with him to keep him engaged to build those, whereas Claude made me mad, so that I got distracted by being angry at the tool, and I wasn't successful. Adam's guardrails were more specific in more high-level places than mine, so Claude was forced to read them more consistently than it was with me.
And all that goes to say is, if I had continued down the path of beating my head against the wall, trying to make a tool work that didn't want to work the way my mind worked, then I would still be angry and probably would have given up. So it's not enough just to fail and learn; it's also knowing that, as Bryon mentioned earlier, if I did something for 150 hours that gets chalked up as an experiment, that's just life.
SPEAKER_01Yeah, for sure. And I think one of the things that you've hit on that is so important for anyone, no matter their level of technical skill when using AI, is the fact that it hallucinates, and that you can't just trust everything AI is going to tell you, no matter what version of AI you're using. And so this idea of guardrails applies to everyone, no matter what use case they have for AI: you constantly need to be, as you said, the human in the loop, verifying and confirming. Um, when you talk about the magic, you keep referring to the magic and the Kool-Aid. Is that just you being enamored by what AI could do, and not doing this level of fact-checking, and just kind of having this assumption that AI is telling you truths? Is that the magic that you're referring to?
SPEAKER_02The magic starts with, wow, I asked this tool to do something and it did it either perfectly or 90% there, and I had to make one tweak, right? So then your mind starts to think, oh wow, this is going to allow me to do so much. I can automate a script that will just run all my processes in the background. I can do other things. So you start to trust it too much. You have this implicit trust with something that, I think at the core, because it's an economic driver for these companies, is no different than social media: it wants to keep you engaged. So you'll see at the end of a prompt, it'll come back with some good stuff, right? Like, oh, I can use that. And it's like, oh, would you like me to go the extra step and make you an Excel sheet for that, though? That'd be sick, right, bro? And you're like, yeah, why not? I don't care. I don't want to make that Excel sheet. Do it. And it's like, hey, want me to warm your coffee while I run the hot bath for you, bro? And you're like, yeah, definitely do that. You'll make my life easier. But there's no end to it. If you pay attention to the way the prompting works, it never stops, unless you build hard-ass guardrails like I have right now. I literally have it written to say, complete the action and then ask me what to do next. But when you're on the web versions and you can't build in those guardrails, the models themselves are designed to keep you engaged in there. They want you working in that all day, because they're scraping user data to make their models better at appearing human, right? So that's a whole can of worms, and Terminator territory, that I don't even want to go down on this podcast. But beyond that, you're using up tokens and credits. So maybe you're gonna be extending your subscription next month so that you get more use out of the tool. So this is where I think the magic, I think that's the Kool-Aid.
I think people get drunk on the Kool-Aid because the tool seems like magic in the browser. But then I think the actual magic that you get drunk on, when you work in the deeper version of the tools and you start to put these guardrails in place, is when you've gone for that hour-long walk and realized that your whole five-out-of-five arbitrary scoring process for an audit is complete trash. So then you come back and you say, I'm renewed, I'm fixing it. You make the perfect prompt document, it runs the prompt right and gives you back the output you want. The audit fails three times, but it fixes everything and comes back with the pass check, and then you trust it again. And that's the magic, where basically what happens is you've closed the knowledge gap for yourself. So you've leveled up, and you've applied that knowledge to make a tool work better, and you feel like a hero. And then you do that until you learn enough to realize, oh my god, there was this huge blind spot the whole time because I didn't know about this thing. Now I have to go back and fix it. But that's the iterative process. And nobody starts out as a pro, and even experts mess up. Like, if it takes 10,000 hours to master something, that's probably 9,999 hours of failure. You know what I mean? But even 10,000 hours of mastery might just be, you know, knowing better how to see around corners and preemptively thwart what's going to be a blocker or an issue. But then there's 10 other things you didn't know about, because the world changes and nothing happens in a vacuum, right?
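The generate-audit-fix cycle Justin describes, where output fails verification a few times before it earns a pass, can be written down as a small loop. This is a generic sketch, not his pipeline: `generate`, `audit`, and `fix` are stand-ins for whatever does the work (an LLM call, a linter or test suite, a re-prompt with the failure report).

```python
def run_with_audit(generate, audit, fix, max_attempts=4):
    """Produce output, audit it, and feed failures back for a fix,
    instead of trusting the first answer. Returns (output, attempts)."""
    output = generate()
    for attempt in range(1, max_attempts + 1):
        problems = audit(output)  # empty list means the audit passed
        if not problems:
            return output, attempt
        output = fix(output, problems)
    raise RuntimeError(f"still failing after {max_attempts} audits: {problems}")
```

The point is the shape: the human designs the audit, and the tool's output only gets trusted once a check the tool did not write comes back clean.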
SPEAKER_01So Brian, I have a question for you actually.
SPEAKER_03Yeah.
SPEAKER_01So as a leader at iuvo, where we have this culture that has allowed Justin to dive headfirst into AI and be building and learning and doing all of this, what would you say leaders should understand about adopting AI inside their organization? Because, you know, Justin's talking about guardrails, and he's talking about leaning into our technical experts, who, correct me if I'm wrong, Justin, I think have kind of helped you make sure things are secure and have been like a nice sounding board for you. But not every company is a tech company, and so there may not be leaders or experts within the company that can help with that. So how can you encourage employees and teams to experiment with AI and get creative in the way that Justin is, and still feel safe? Is that possible?
SPEAKER_03Yeah, I think it's a very good question. Part of that was mentioned earlier a little bit: the culture, and allowing people to fail, and it being okay to fail. Of course, Justin's talking about writing code, and you don't want the kind of failure he described, like, oh, I just sent all your social security numbers to Russia. That's not the failure we're talking about. We're talking about trying things in your business. And so if you're a leader, I think the evidence is out that AI done right can really empower your business. And in order to do that, you need to empower your people to do that. Justin's talking about his extreme passion for this, in an extremely good way. He's so passionate, the magic, the Kool-Aid, and so forth. And you gotta first of all start encouraging everyone to understand AI, even with just the free ChatGPT. Or maybe a good example would be a conversation in the hall that I overheard between Jess and one of our AI SMEs, Ed Perkins. Jess asked him, so, you know, how can AI help me in my business, or, well, in marketing and what I'm doing? And Ed asked a very educated question. It was like, well, let's just start with what you do every day. What are you doing every day? Let's look at what you're doing and start figuring out these things that you do every day, and then take a step back and say, well, how can AI help me do that better? Or how can AI shorten the time? What things are perfect for AI, and what things are not? What things do I need my brain for, and what things can it just do? And so you start having these conversations.
There are a number of questions you can ask yourself: what takes the most time, what don't you like doing? So as a leader, encourage your people: first of all, make them feel empowered; second, have them ask those questions. How can I make my job easier? How can I be better at what I'm doing? And overall, if everyone's doing that, you're gonna become a more effective organization, because you have AI working for you. Not everybody's gonna dig deep into the code. Some people may just say, I need an agent that does this. And then you have people that can help, because I'm guessing, Jess, that you're gonna say, hey, is there an agent or something I can do to help me with these five things? And then people will go off and maybe help you. You're not gonna go, well, I'm gonna get into VS Code and do this thing.
SPEAKER_01I don't think I have the same level of passion, but I'm also fortunate enough that I work at iuvo, so I can go to someone like Justin and say, hey, this is a pain point for me. Do you think AI can help? And now, how do I make that happen? Um, so I do feel fortunate in that way. But I think it's interesting that with AI there are so many different levels. There's people like me, and then there's people like Justin, you know, and I think that the two of us can really help each other figure out new, cool solutions and ways that we can utilize AI. It feels like we are in this wild west where anything's possible. I don't know, you know, what next month is going to be. Even making content for this podcast is difficult, because we record our sessions ahead of time, and by the time the episode's released, is it still applicable to where AI is at? It is just changing so quickly. Um, and I am very lucky to have people like Justin on the team, so that I don't feel the pressure to understand all the bits of information that he's talking about, but I can go with a problem that I understand and then have people like Justin help me figure out what the solution is. So for someone who does want to go deeper with AI, Justin, where do you think a good starting point would be?
SPEAKER_02I think, um, like Bryon just hit on, just get used to what AI tools exist in the space. I mean, every one of them has a chatbot that you can use for free. So, like, Claude you can use for free, Gemini and ChatGPT, and go in and prompt it and ask it some things. And it doesn't have to be, hey, build me this multi-tiered app with all these things. It can be, what's a good recipe for tacos? You know, or, hey, I'm a vegan. Can I replace anything in this muffin recipe that I found on whatever website? And you can link it, and it'll say, yeah, try these different products. Get used to using it as a web browser, and then, if you're interested in going deeper, you'll start to see ways where it could offer maybe some automation for you in your day-to-day. And then, Jess, that's when you reach out to people, like on the AI SME group, and just say, I have this problem. I always tell people, make a bulleted list. Ed's questions were perfect: what do you do that you don't want to do, and what would you like to solve? Because not everything can be automated. I think part of the Kool-Aid, and part of what is the economic driver for these companies, is they say AI will do everything. Well, right now, AI can't start my car or help me drink this water. AI can't get me dressed in the morning. There's still stuff you do as a person. Yeah, I mean, maybe it's coming, but I hope I'm long dead. But um, I just think that, like Bryon said, some people won't dig into what they do every day, they'll just do it. And that's okay. If you just want to do your job and do tasks all the time, it doesn't matter, and I'm not even saying it's a bad thing. I guess what I'm getting at is that it takes work to figure out what your current processes are, and where the things that you do suck. And so keep a list.
For people that want to get into AI, I'd say keep a weekly journal of all the things you do that are repetitive, that you don't want to do, and then bring that to somebody that knows a little bit more about AI than yourself and ask them: based on these things I do every single day, is there anything that we can do to automate it? And chances are yes, but it's not gonna make you a more creative or better critical thinker. You have to be a critical thinker to use AI right, because critical thinking is involved even in creating that list. So yeah, I guess, in short, it would just be: go to chatgpt.com and use the free browser, or Gemini or Claude. Just use it, and then keep a list of things that you do that you don't like, and then see if you can reach out to somebody with more knowledge than yourself in AI development and ask if those things can be automated.
SPEAKER_01Yeah, and I would say that there are more and more companies offering services to help businesses think about how to use AI and how to implement it within their companies. We do AI consulting, so that is one of the branches of our business where we can actually help businesses with that. It can be extremely overwhelming, I think, to figure out how to attack this whole AI beast. Um, and I think you have to start implementing it; I think it's going to be almost unheard of for businesses not to be using it in some capacity. And so trying to figure out where to start is a lot, and trying to think about how to do it cost-effectively and how to do it safely is where experts like ours come in, where we can kind of guide you and ask those questions, like Ed did for me. Because it is overwhelming to think about; AI has so many capabilities, but it also has limitations. Um, and so bringing in experts to help you navigate that can be, I think, really beneficial for businesses that are ready and are curious, but just don't really know what to do and how to start. So, what mindset matters most when it comes to adopting AI? And this could be a great question, actually, for both of you.
SPEAKER_02Tools are only as important as your need, right? So if your team doesn't have a direct need for AI, then I don't think you should invest the time and effort in figuring out how the tool is going to help them, because it won't. I think that your team and your organization will dictate how much automation, because really that's what we're talking about with AI, automation or generation of content, how much that matters to you, whether it's a part of what really drives your business, or whether you want to explore how that could change through AI consulting. But a really good book that helped me frame the way I think about AI from the jump is Co-Intelligence: Living and Working with AI, by Ethan Mollick. And something that's stressed within that book is that the tools we use today are the dumbest AI tools we'll ever have. It's a very small part of the book, but it's kind of the refrain, right? And because things happen and change at such a rapid pace, FOMO is what gets people messed up. People think that they're gonna miss the boat, right? But the boat hasn't even boarded yet. It hasn't left port. It's getting ready to go, but it's not gone. And I don't think that any companies are necessarily gonna be left behind right now, in 2025 into 2026, if they don't have a full integration of AI tools for their team, top down. I do think that's coming. And so what I think, for leaders that might want to reach out to iuvo for AI consulting, is having the open mindset of something that I said at the top, where... oh, I forgot what I said. Oh my god, I just had it. Hang on. Oh, I got it again. Let AI become boring, right? I think the point when it becomes boring means that the tools that actually work have already been tested and adopted.
I don't think that you have any risk right now in not being at the frontier of AI, unless you're a company that's trying to build frontier tools in AI. I think that a lot can be done with automation, and I think there's a lot of value in seeking out professionals that can consult with you on AI, like Ed did, and essentially have an interview with the potential customer and say: what are you trying to achieve with this? What does your team do that they hate? Give us a bullet-point list of five things that trouble most of your organization, and then, you know, we can help. But the tools are going to be boring, and that means that they'll be good, they'll be working for you. It's not flashy. So I think that's the mindset: AI should be thought of as something to serve you, not you serving it, right?
SPEAKER_03Yeah, and I'll take maybe a stepping-out approach a little bit more on that. It goes, and I say this quite a bit, it's in leadership at the edge: having a growth mindset. Whenever you say, well, we've always done things this way, whatever it may be, it doesn't matter what it is, just have the ability to say we can do something in a new way. Let's think outside the box, or break the box, and see what happens. Justin's a good example of it; so is Jess. Both people on this podcast are constant learners. It goes back to Jess talking about education and being a teacher, and this whole conversation has been about growing yourself. And if you're a leader, have a culture that does that. Or if you're someone listening to this thinking, I want to know what's good to do: start with yourself. Start with growing and learning. Maybe it's AI, maybe it's just, there's got to be a better way.
SPEAKER_01No, I think that those are both great points. And I have two questions left, Justin. One is AI related, one is not. It's just a question we ask all our guests. So for my closing AI question, I would love for you to share uh one thing you wish you had learned sooner when it comes to AI. Maybe you can save someone the headache.
SPEAKER_02Yeah, I guess the one thing... I mean, I don't even know, because the magic drives learning, right? I don't know if I wanted to learn anything sooner. So let me back up and say: the models that I've worked with that I haven't clicked with, that I don't like, are the ones that try to push me forward faster than I want to go. So if I were to have learned something sooner, it means that I didn't really learn it at all. And I don't want just base-level education coming into this brain hole. So I think there really is nothing that I'd want to. I'm gonna tell you, tonight after work, when I dive back into my new audit process, I'm gonna learn something new that maybe I wish I knew sooner, but I don't know what I don't know until I try it. So that really doesn't exist. I don't have an answer for that.
SPEAKER_01I think that's beautiful though, because it just shows that you embrace the journey. Period.
SPEAKER_02You gotta. I mean, that's it. That's breathing and eating and standing up.
SPEAKER_01Yeah, no, I love that. So, Justin, I'd love to know if you could share a fun fact about you that might surprise people.
SPEAKER_02I have no idea. This is a tough one, I don't know. Like, okay, what did you learn about me, since knowing me as a person, that might be surprising? Let's do that.
SPEAKER_01I I was I didn't know that you were in a band when I first started working with you. I didn't know that you were musically talented.
SPEAKER_02I I played saxophone from the age of 10 through high school and hated it, so I stopped. And then I picked up guitar early in college and it's just gone. I'm huge into synths too. But yeah, music's a big part of my life.
SPEAKER_03Well, what I'll say, Justin, is your band is one of my favorite bands. I love your band. Dutch Tulips: you can go to DutchTulips.com, check them out. They're a local Massachusetts band, great shows, super talented team. You're also very modest about that, because you guys are awesome.
SPEAKER_02Thanks, Bryon.
SPEAKER_01It's on my bucket list to see you all live.
SPEAKER_02Oh, all right. Well, we'll probably have some stuff in the spring. We'll see.
SPEAKER_01That would be fantastic. Bryon, anything else you wanted to ask Justin before I close this out here?
SPEAKER_03Nothing to ask, but I will say there's probably a lot of things we didn't even touch on here. So I think it's worthwhile to have Justin back another time, and we can dig deeper. By then he'll have gone down another magic bubble, drunk some more Kool-Aid, and become even more sophisticated than he is already. I think this is a great knowledge area that people can jump into. So let's get Justin back on at a future date.
SPEAKER_01100%. I think we've only scratched the surface of Justin's AI journey and what he will be able to continue to teach us about how to utilize AI. Um, so Justin, thank you for sharing your journey with us and where you've been so far. One huge takeaway for me is just the amount of time that you've dedicated to this, because for anything that you want to really learn and understand, there are, to your point, no shortcuts. You have to just be willing to dive in and put the time into it. So kudos to you for doing that.
SPEAKER_02Well, thank you. And I had a blast uh joining you both on the podcast and can't wait to come back.
SPEAKER_01So, Justin, thank you so much for taking us inside your process: not just the technical pieces, but the curiosity, the structure, the missteps, the breakthroughs, and the self-awareness that it takes to really learn something new. What stands out to me is just how intentional you were. Rather than just experimenting with AI, you examined your own thinking.
SPEAKER_03Yeah, absolutely. You showed what it looks like to approach AI as a craft, much like music: it's a craft, it's a passion, something that requires discipline, guardrails, clarity, and constant verification, questioning what AI is telling you. Honestly, that mindset, your approach, is what organizations need right now. Your story is such a powerful example of curiosity paired with responsibility, and that combination is where the real innovation actually happens. So it's really great that you shared your journey with us, and I'm happy that you're part of iuvo, so you can be an example to all of us.
SPEAKER_01For everyone listening, we hope that today's conversation gave you a more grounded, human view of working with AI, and a reminder that excellence in any craft comes from mindset. Whether you're experimenting with AI, leading teams through challenges, or just staying ahead in your own work, we hope that Justin's insights help you think differently about how you show up, how you learn, and how you build. If you'd like to learn more about how iuvo helps organizations through transformative IT consulting, visit iuvotech.com. You'll also find this episode, along with all of our past and future conversations, available there.
SPEAKER_03Yeah, thank you for being with us, and we'll see you next time on The Edge of Excellence.
SPEAKER_00Thank you for tuning in to The Edge of Excellence. We hope today's insights empower you to shape your future and rise to your full potential. Let's continue to grow, innovate, and lead, pushing the boundaries of excellence.