Simply Resilient Conversations
What does it take to build true cyber resilience?
In Simply Resilient Conversations Geoff Burke, Veeam Vanguard and Senior Technical Advisor at Object First, explores this question through engaging discussions with our ACES community members. Join us as we break down complex cybersecurity and data protection topics into accessible conversations that help IT professionals keep their production workloads running and their data safe!
AI Try or Cry? A frank discussion about AI with Jonah May
The pressure to “add AI” is everywhere, but not every problem needs a model. We sit down with Veeam Vanguard Jonah May to separate durable value from hype.
Jonah shares how AI actually speeds up the day: rapid log triage, code scaffolding for integrations, and smarter reviews that catch crashes and duplicates before they ship.
If you’re an IT pro, you’ll find pragmatic takeaways: treat models like fast interns, keep secrets and sensitive data out of the cloud, use least‑privilege roles, log everything, and prefer deterministic scripts when safety matters. For teams exploring agentic AI, we outline hard guardrails—sandboxed rehearsals, immutable audit trails, and human‑in‑the‑loop approvals for destructive actions. We also talk careers: why senior roles still require judgment, how juniors can learn faster with AI, and why the market will likely correct toward tools that prove real value under compute, power, and cooling limits.
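The guardrails listed above (least-privilege roles, logging everything, human-in-the-loop approvals for destructive actions) can be sketched in a few lines of Python. This is purely an illustrative sketch; the action names and the policy set are hypothetical, not taken from any product discussed in the episode.

```python
import logging

# Illustrative sketch of a human-in-the-loop gate for agent actions.
# Action names and the DESTRUCTIVE set are hypothetical examples.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")  # in practice, an immutable audit trail

DESTRUCTIVE = {"delete_backup", "wipe_repository"}

def execute(action: str, approved: bool = False) -> str:
    """Run an agent-requested action, blocking destructive ones until a human approves."""
    if action in DESTRUCTIVE and not approved:
        audit_log.info("BLOCKED %s (awaiting human approval)", action)
        return "blocked"
    audit_log.info("EXECUTED %s (approved=%s)", action, approved)
    return "executed"
```

Read-only actions pass straight through (`execute("list_jobs")`), while anything in the destructive set is held until it is explicitly re-run with `approved=True`.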
Hello, everybody, and welcome to the Simply Resilient Conversations podcast. Today is February 5th. Well, it's actually not; we always record these beforehand and publish them on the 5th of the month, so I'll just say that. Today we have another special guest. We always have special guests, but they're super special because we pick them for their areas of expertise, so each one has especially good and deep knowledge in the areas we're going to talk about, which we can all share. A reminder: the great thing about communities is that nobody knows everything in IT, so by joining IT communities you can get information from others who know a certain area better than you. And that's what we've got today. Today we have Jonah May, a longtime Veeam Vanguard who has done years in the service provider sphere, which I'll underline is different. If you just have your own little environment, like I used to have in some of my jobs, you learn it after a few months and then you sit there and watch it. You deal with a problem once in a while, but, and you can kill me for saying this, it's relatively easy compared to the everyday disasters you deal with as a service provider, because you have hundreds of customers, and someone out there will always be having a disaster. Anyway, enough of that lauding. Today we're going to talk about AI, and the title of the podcast is AI: Try or Cry. In other words, as you can tell, there's a bit of anxiety in that. What does the future hold? So, first and foremost, let's have Jonah introduce himself. Jonah May, tell us about yourself and your IT career, and then we'll go on to AI.
SPEAKER_01:Thanks, Jeff, for the intro and for having me here. So, like you said, I've worked for a service provider for a number of years now. I think it was 2018 or 2019 when I started; I don't remember which year off the top of my head. But I do remember one of my first tasks back then as a paid intern, besides being a tape jockey and the guy who built the server images for us and our customers, was going out in the field and upgrading all of our customers' servers we had remote access to to Veeam 9.5 Update 3. If I remember right, there was something that changed with the licensing that made it cheaper or more economical for us to offer Veeam, so management wanted us to upgrade as many customers as we could as quickly as possible. From there, I worked my way up through the technical support team at my previous provider. They were Global Data Vault; they're now known as Dataprise. By the time I left, I was the senior development lead, essentially helping the chairman of our board, since we were a very small company, build our customer portal, integrate Veeam Service Provider Console, back then called Veeam Availability Console, and even helping automate disaster recovery for our backup-as-a-service customers. So think sending your backups to our data center, and then us using a secondary Veeam server to mount those backups, instant-recover them, and vMotion them into VMware for remote access during a disaster or for testing purposes. By the time I left, we actually had tier-one engineers able to spin up customers in an hour or less with that. And then I jumped over to the competitor. Back then we were called Offsite Data Sync; now we're called CyberFortress. And through CyberFortress, I really shifted more into a mix of sales and product.
So I act as our main sales engineer for Veeam, in part because of my many years using it and having certifications and community accolades over the years, but I also serve as our product architect and engineering manager. So lately it's: how can we add AI to some of our workflows or some of the products we market? Do we leverage Veeam Data Cloud, and if so, how? Or if not, how do we sell against Veeam Data Cloud, as a provider in the space who can either choose to leverage it or, as some other partners do, see it as the competition?
SPEAKER_00:Yeah, that's interesting. That's a whole other episode we should do, because what's happening in the whole data protection world, and who you see as your competitors or your partners, is very interesting. But today we want to focus on AI, and I brought you here especially because we've had numerous conversations about AI, some of the stuff you're doing is very interesting, and your perspective comes from being hands-on; you've told me you're actually taking part in your company's thinking about how to implement this. This is really good. So first and foremost, I think I won't be lying if I say that all of us IT professionals, not just data protection specialists, are a little bit worried. This kind of came out of nowhere. I mean, we'd heard about AI, but back in 2020 we were all thinking about COVID, and no one said that the next big threat to everything in our existence was going to be AI. And then suddenly we'd just finished with COVID. Oh great, we can relax. And no, wait, now there's AI. So I've heard a range of things. Of course, I've done what I think a lot of other people have done and immediately jumped into the courses. That's how we defend ourselves: we learn about it, because what you don't know is scary. So I want to know, first of all: when did you first hear about AI? What were your first feelings about it, and how did you decide to go about it? Was it just fun, wow, it's a great tool, a great toy? A lot of us, let's face it, do this job because we like computers. Or did you actually see a potential threat or anything of that nature?
SPEAKER_01:I mean, obviously, going through high school and college in the 2010s and early 2020s, you had heard about hypothetical AI: machine learning, neural networks, all those fun terms from before we had these large language models. But I think the big thing that really put it on the radar for me was what put it on the radar for everyone else, and that was probably the first release of ChatGPT.
SPEAKER_00:Yeah, for certain. That's when people really started to use it. Before that, people had had some interactions with AI, and it was wildly, you know, hallucinatory, and it was kind of a cool tool. I remember the big saying was, well, this is just autocomplete on steroids. And then, I think it was GPT-3.5, I can't remember which version it was, I might be thinking of Claude now, but people started getting actually really good answers, and that's when the whole, I guess, fear started. Well, actually, to be fair, the fear is coming because the people who developed this are warning us. Geoffrey Hinton and other people who built this whole thing are saying, look out, folks. So, okay, you heard about it back in the 2010s. When did you actually get your first hands-on experience? Was it ChatGPT? And did you immediately start doing something with it, or did it take some time for you to figure out what to do with it?
SPEAKER_01:It was definitely ChatGPT, that was the big one, and you're right, I think it was 3 or 3.5. I just remember 3, 3.5, and 4 being massive upgrades. And then obviously we're on, what is it, 5.2 right now, after 5, 5.1, and 5.2. I guess those have been incremental increases too, but not quite to the level the first ones were. GPT-4o, the Omni one, I think was kind of the big one, because that was when it also started to get good at programming. So that was a large part of my experience with it. But the other one was around the time I was wrapping up my undergraduate degree a little over a year ago, because I went a little slow. I walked in with a good amount of credits since I did the advanced classes in high school, but I also worked one or two full-time jobs at a time throughout college to pay for most of it, so I definitely went much slower than a typical accelerated student does. Anyway, one of the last classes I took was Intro to AI, and some of that was actually interesting, because there were three main assignments I had to complete in the class. The first one was essentially building a little chatbot to act almost like a support bot and run through some workflows. I wouldn't really call that AI, though, right? And I'll gripe about this a little more later, I'm sure, but it was kind of slapping "AI" on something that's just a regular algorithm, an existing thing we used to not call AI, to make it shiny, even though it's not really AI as most people think of it. But the later assignments definitely got more interesting. The main final project for the class was using robot simulation software to essentially build a search-and-rescue bot that could identify humans versus obstacles and basically map out rooms.
And part of that was we had to pitch how we could essentially make it smarter in the future. So what I had proposed for the assignment was, first of all, convert it from something that goes on wheels to something that flies, some small drone or quadcopter, to get through the terrain better. But then, hypothetically, what if you, outside in a vehicle nearby, had a wireless uplink to it and had put cameras on it? What if you were to stream the footage in real time, where you could use machine learning to identify some of those obstacles in greater detail? Equally, you could almost tie it into VR headsets to let your first responders essentially try to get through that environment in VR first, to practice for the actual environment if there are certain obstacles. Obviously it was never implemented; this was purely a proposal for a class assignment. But that's the sort of thing I'm interested to see, whether that's where we progress going forward. Everyone focuses on ChatGPT being an AI girlfriend or replacing you in coding. We don't really talk about the other side of things, about where we could go with it like that.
SPEAKER_00:Well, you were ahead of your time too, because what you were describing, using a VR headset and drones to almost make a superman out of someone because they're in the drone, basically, is what's happening in modern warfare now, as we're seeing. So it's really interesting that the university caught on quickly, because this was a few years ago and yet they were right on the ball. That's good to know. Now, I'm going to give a little bit of a spoiler here. One of the interesting things about Jonah is that he's one of the only people I know who actually hacked his own router in his house. I think your parents wouldn't let you play games, and you have to tell us the story. That was definitely a sign of high intelligence and will and skill. So first of all, tell us the hacking story. That's always a good one. But then, when did you actually start playing with this at home? Because I know you have.
SPEAKER_01:Yeah, so there are two incidents my father always likes to tell, and they're probably the ones you've heard. The first one: I was like eight years old, and I had a Windows 98 computer in my room to play a couple of games, in particular Where in the World Is Carmen Sandiego? and Lego Island. My dad had set a local IP on it, but he didn't set a gateway, for whatever reason. If I remember right, he wanted to be able to communicate with it from their computer but didn't want it to have internet access. And I somehow figured out on my own, maybe after going to a friend's house, hey, if I just add this default gateway, I get internet on this computer, and went and told him. But the other one, and I thought his head was going to explode at the time, was when I was a teenager and Raspberry Pis had just come out. I had found where someone had essentially written an open source... can I mute it before I activate it? There we go... an open source Alexa install, where you could essentially run Amazon Alexa on a Raspberry Pi instead of buying their hardware. And I managed to tie it into our alarm system. It was one of the alarm.com systems, and my parents had that tied into their garage so that the alarm would trigger if the garage door opened. So I ended up showing them how I could essentially open and close the garage door from an open source project that, obviously, I hadn't gone and reviewed the code of first. I think it ended up being safe once we checked; it had a few thousand stars and forks and different things. But yeah, "have your house hacked because of your teenager's project" would have been a great story in itself.
SPEAKER_00:"Why did you go into data protection?" "Well, when I was a teenager, I hacked my house..." Yeah, well, that was the concern.
SPEAKER_01:What if there was malicious software on it and they now had API tokens to control our alarm system? Although, if I remember right, because it was the Alexa code, it wasn't saving the tokens locally; it was essentially just acting as a little satellite that sends the voice back to the Amazon servers, and all the credentials for the alarm system were still stored in the Alexa app, right?
SPEAKER_00:Right.
SPEAKER_01:But I mean Alexa was like a year or two old back then. We didn't know those things about how it was architected, we hadn't really looked into it.
SPEAKER_00:Okay, so fast forward: what was your first home AI project? Because you were telling me, I think a couple of months ago, that you were going to tie this into all sorts of things. I don't know if it's going to open your fridge for you, cook your dinner, things like that, but you had gone quite far.
SPEAKER_01:Yeah, so now that I'm in my own house, I run Home Assistant, and I've got a number of smart devices, mostly light switches. But going back a little more to using AI: my first foray into what we call AI now was really, like we said earlier, ChatGPT. For a long while, and still today, I've almost used it as my rough draft creator when I write blogs. I'll take a bunch of screenshots of something I'm using in Veeam or whatever other system, Proxmox, a combination of things. I'll write a whole bunch of bullet points about what I want to talk about and the subject of the blog, give it a good abstract, and then go, hey, ChatGPT, turn this into a blog. And of course I don't publish it immediately, because of the amount of stuff it makes up or gets wrong. I'll go in and end up rewriting probably 60 to 70 percent of it, if not more, because I don't like the voice or what it talks about. But it's very good at spitting out that first rough draft that I can then convert into a final draft, saving a few hours on an article.
SPEAKER_00:Yeah, definitely. In fact, I'll add on to that. What I like to do is, first, ask for a structure: okay, I want to write a blog on this, give me a structure. Then I actually write the content, but I do it pretty quickly, just throwing my ideas down, and then I ask it to organize it and make it sound nicer. And then, as you said, I go back and change a lot of the cliches and things. But I think overall it does speed things up. So that's the blogging stuff. When it comes to your house: we all know that AI is very new, and in IT, especially in security, people tend to err on the side of risk until they get burnt. In data protection, when I worked as a service provider, we tended to get customers who had been burnt. There were customers who thought, well, we really should have an off-site backup, but I remember a lot of them coming to us only after they'd been breached. So, given all the concerns about guardrails, whether they work or how well they work, do you have any concerns about putting this into your home? I guess turning your lights on and off isn't a problem unless your electricity bill goes up, but, for instance, alarm systems, fire alarm systems, controlling water, anything like that. How far are you willing to go?
SPEAKER_01:Not really far, honestly. I go far enough, you know. I've got everything tied into Home Assistant, like I was saying, so I can control my Ring alarm system and my Matter-over-Thread network devices, because that's most of my smart home: light switches, light bulbs, humidity sensors in the bathrooms. I've got a whole bunch of automation scripts written around different things, not necessarily AI, just certain things like, hey, if the humidity increases in the bathroom and the exhaust fan isn't on, turn on the exhaust fan. Or, my light switches have little LED indicator bars on them, so I have certain color codes to tell you if the alarm system is on, if there's a severe weather alert, if there's a package at the front. So obviously the basic on-off control for things that don't matter too much, like the light switches, light bulbs, and fans, goes into this voice assistant, and I do the same thing with Alexa. But the other things, like setting the alarm system or opening the garage door, I don't really trust it with, in part because, and I don't know if we fully touched on it, but I'll say it in case we haven't: basically, and this has been the last year and a half of my life and some of my spare time here and there, I've been trying to do what a lot of other Home Assistant power users are trying to do and build my own fully local voice assistant. Every time I turn around, Alexa doesn't understand my commands. It gets dumber by the day. I'm sick and tired of it, and sick and tired of it not working when the internet goes out. And who knows what data Amazon is receiving, ingesting, and reselling, since obviously the product is free, which means I am the product. So I've got a consumer GPU out in the garage, hooked into a Proxmox host and passed through to a virtual machine.
And on that, I'm running an LLM, a speech-to-text, and a text-to-speech. I've only got one right now, but I've been trying out the little voice assistant boxes that Home Assistant released in preview edition last year. The long-term hope is to basically rip out Alexa in the house and have a fully local voice assistant. And I will say, so far the access controls and guardrails for it have been justified, because I've only just found a model that is a little bit better than what I was using before. I'm running everything in Ollama, so I've been playing around with the different models they let you download. The ones that work hallucinate a lot. The current one, not as much, but I'll still ask it to turn on the hallway light and it'll either turn it on and tell me it didn't, half the time, or insist the hallway light doesn't exist when it does. Which is about par for the course with how Alexa performs, so I can't complain too much.
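For listeners who want to try a similar local setup, here is a minimal sketch of sending a smart-home command to a local Ollama model. It assumes Ollama's default endpoint on port 11434; the model name, device list, and prompt format are illustrative assumptions, and constraining the prompt to a known device list is one way to push back on the "hallway light doesn't exist" hallucinations described above.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
KNOWN_DEVICES = ["hallway light", "bathroom fan", "kitchen light"]  # illustrative

def build_payload(command: str, model: str = "llama3.2") -> dict:
    """Build an Ollama request whose prompt is constrained to known devices."""
    prompt = (
        "You control a smart home. Known devices: "
        + ", ".join(KNOWN_DEVICES)
        + '. Reply with JSON {"device": ..., "state": "on"|"off"} '
        + 'or {"error": "unknown device"}.\nCommand: '
        + command
    )
    return {"model": model, "prompt": prompt, "stream": False}

def ask(command: str) -> str:
    """Send the command to the local model and return its raw response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(command)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In a real Home Assistant setup the model's JSON answer would then be mapped to a service call; this sketch stops at the prompt-and-response layer.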
SPEAKER_00:Yeah. And it's all local and private. That's one of the things I've touched on too. I have a cloud subscription and I use it for certain things, but, and this is a warning everyone should abide by, don't load anything confidential into a cloud AI system, because we don't know whether it's being trained on. They claim it's not training on your data, but we don't really know. And I do know that in the US there was a court decision that basically does not allow providers to delete data. I think that's the case, and you can correct me if I'm wrong, and that will set a precedent around the world. It's probably done for a good reason: they don't want criminals to be able to get away with removing evidence. Of course, the downside is that whatever you do is being recorded for history, and if you by accident put in confidential information, and then by accident some guardrails get overstepped by a competitor... I don't have to say any more. So playing with stuff at home is obviously great. Of course, the big problem, and I'm nowhere near as advanced as you are, is that I have a Proxmox server but no GPU. I should have bought one before the prices went through the roof, and I didn't. So what I have is a Proxmox VM running a GUI-less Ubuntu server with a lot of memory, and it can do quite a bit. Honestly, I'm quite surprised by how much it can actually do. Without the GPU it's obviously slower in a lot of cases, and forget "draw me a picture of myself," because I'd be up all night. But the fact that it's completely local is fine. One question about that, though: have you seen that Ollama is now offering cloud models as well? Have you tried those?
SPEAKER_01:I haven't. I still use ChatGPT as my editor, and I use Copilot to an extent to help with coding sometimes. I say sometimes; I like to sic it on projects. More recently, I've helped take over development of my company's customer portal for our Veeam customers. We've had some transition in the last few years; a lot of the people who built that portal are no longer at the company, and they left some documentation, but not a ton. So my favorite thing to do, and I'm going to go against what you just said because it's a cloud AI model, but it is one of the subscriptions where they say, hey, your code is safe, we're not going to train on it: someone will, hypothetically, go, hey, how often does XYZ scheduled task run for data collection? And I'll go, hey, Copilot, go figure this out for me.
unknown:Right.
SPEAKER_01:And, you know, let it run for like 10 minutes.
SPEAKER_00:Yeah, and when you think about it, this is really important, because I think Veeam is going to address this more and more with their acquisition of Securiti AI and its AI classification of data. When I say be careful what you put in the cloud, it's not a case of, oh my goodness, I'm not going to put anything there. There are some simple cases. For instance, I've taken our company documentation that's on the web, the Help documentation, which anybody can open by going to the web page, and I've put things in there and said, okay, give me a summary of this section, or create a quiz for me on this section. Obviously there's no danger in doing that, because it's already on the web.
SPEAKER_01:Right, but when you're talking PII, financial data, API tokens... yeah, that's the stuff.
SPEAKER_00:I mean, I know somebody who was doing their taxes, or their investment plan, and they put it into ChatGPT. They actually said, well, I removed all the numbers. The problem is you've given away all your intentions. What people forget is that it's like the old Facebook thing I used to laugh about. People would say, you know, I'm really concerned about governments watching me, and then they'd go on Facebook and tell their life story with photos and pictures and what they like. I'd say, well, I don't think the government has to waste any money figuring out who and what you are, because you've just told the world. Similar thing here: you take out the account numbers and all that, but you think, well, I want to invest in coal, or I want to invest in, I don't know, electric cars. Is that information you want others to know about? So I guess it's at that level. But so far, what I'm getting from you is to use common sense. Now let's move forward a bit. I'm interested in what happens with actual companies; tell me what you think about what I'm going to state now. I've seen companies start to throw in what I call the AI souffle. In other words, it's like a chef who has something they've been producing for years, and it's wonderful, and then they throw on the AI souffle. So some products, and I'm not naming them, say, oh, and we've added AI to this. And I sit back and go, why would you want AI there? What's the purpose? So have you seen that? Do you feel there's a lot of, not so much phoniness, I mean, they probably are injecting AI into it, but in places where it doesn't need to be?
SPEAKER_01:Yeah, I think there is definitely an over-reliance on it. My frustration is the same as everyone else's, which is ironic, because I had discussions around implementing this at my company, since there was executive appetite for it last year. Chatbots drive me insane. It was already hard enough to get a human on the line, because nine times out of ten, when I'm reaching out to get support for something, I've already done the basic troubleshooting as an IT person and I need an actual human. And it is so freaking hard to get a support case opened that gets routed to a human, whether through a phone tree, a chat system, or an email system these days. I feel like every company is hiding their phone number and email and wanting you to use a chatbot, or they put you into a phone directory that loops infinitely, and it takes you 20 minutes to get to a human, to get placed on hold, to talk to a human.
SPEAKER_00:Oh boy, I went through that the other day. I think it was a bank I was phoning, and you get this very polite voice. First of all, you can tell by the way I speak that these chatbots have a problem with me, because I speak so fast. So they ask me to repeat, and I go slowly, and you keep getting looped and looped. But I'll tell you the most frustrating thing: when you deal with these super-duper chatbots, which have been trained and are polite and send you where you're supposed to go, and then there's some problem in the system and it hangs up on you anyway, just like a human who's unhappy with their job would have done. This actually took place. I spent 20 minutes going through the loops, giving numbers and all this, and then: okay, thank you, sir. Well, I don't think it's "sir" anymore; I think it's "thank you, Jeff." They use your first name, of course, trying to be really familiar. "I'll now transfer you," and then, beep, it hung up on me. Oh, yeah.
SPEAKER_01:I use Bank of America. I had to call my banker about something last year, and I had to call like five times before I figured out the right way to get through the phone directory to speak to an agent about my issue, without it either putting me in an endless loop or telling me it was going to redirect me and then hanging up the call.
SPEAKER_00:Yeah, just remembering that is painful for me. So that's definitely one of the negatives of AI, without a doubt. What we can hope for is that, as with all things, there will be a kind of pushback, just like with the AI slop we're seeing all over the place, and that people will find a kind of middle line, with AI doing some things and humans doing others. Let's now move over to our particular profession, IT, and what you think is going to happen. Just to set the background: if you go on YouTube now, you can find all sorts of predictions. I'd say probably 80% of them are very scary and negative, basically saying 2026 is the year we all lose our jobs, or that it happens within the next five or ten years. Geoffrey Hinton has said that if you're going to get a new profession, become a plumber, because it will take a lot longer for AI to get the physicality. On the other hand, when I've been at seminars and webinars with big business leaders, some of them technical, it's: no, oh no, that's overblown, everything's fine. Of course, they are making a lot of money off this, so I take that with a caveat. What do you think, and what advice would you give IT professionals in this situation?
SPEAKER_01:I think it is what it always is: you just need to be aware of what's going on and stay ahead of the curve. I hear the same things you hear, anywhere from "it's going to replace everyone's jobs and be the next industrial revolution" to "this is going to be the next dot-com bubble." And I think the truth lies somewhere in between. I think we're chasing the latest buzzword, like we chased cloud and all the previous buzzwords. At some point there will be some level of market correction, especially looking at how much funding companies like OpenAI are trying to raise while not bringing in that amount of revenue. And once it corrects, we'll settle into a happy new normal, to bring the COVID terminology back, where we see it as a tool. Yes, it's going to help you with some of the easier tasks, but at least as I see it now, within our lifetimes, or at least our careers before we reach retirement age, I don't see it replacing senior-level roles. And the fact is, you're always going to have to have a way to train a junior to become a senior, or else you're going to run out of seniors, like we've seen with things like COBOL over the years. There's always going to be some kind of track. Even as it is today, I can put Copilot on something in Visual Studio, and I sometimes have to tell it five or six times before it gets it right, but it still feels faster in most cases than me doing it, in part because I'm not super familiar with the language and it's not code I wrote, and I'm talking hundreds of thousands of lines of code. What it's really good for is workflowing things for you: writing those diagrams and flowcharts and UML diagrams you were taught in school, if you went through school or different certifications, or really simple programming tasks too.
So, Geoff, I'm sure you saw on the community hub that I put out some releases for Python wrappers for some of the Veeam APIs. I did Service Provider Console and VBR, and I'm going to be working on Veeam ONE and VB365 at some point. Service Provider Console was so my company could start consuming it, because right now we aren't talking to Cloud Connect in our customer portal; we're using some of those older PowerShell calls that Veeam is obviously getting ready to move away from in the next few years. On the VBR side, I actually did that because Maurice and I are testing right now, hopefully to publish later this week as an open beta, a Home Assistant integration for Veeam Backup & Replication. Because if you're lazy like me, you haven't set up proper Zabbix monitoring for your home lab; you just keep throwing things into Home Assistant and writing automations and notifications around it. Like, hey, your storage server is offline; hey, this virtual machine is not responding to pings. It is a little limited in what I'm going to call version one, even though it's really like version 0.2 or 0.3 right now, just because Veeam is obviously still developing their REST APIs. But you can view information about your repositories, your SOBRs, your Veeam license, the VBR server itself, like what database it's connected to, that database version, what version of Veeam is installed. You can rescan repositories, you can put SOBR extents into maintenance mode, and you can even start, stop, and retry some of your backup jobs from it.
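The wrapper pattern Jonah describes can be sketched in a few lines. This is a minimal, illustrative client, not his published code: the port, the `x-api-version` header, and the endpoint paths reflect my understanding of the VBR v12 REST API and should be verified against your server's Swagger page before use.

```python
# Minimal sketch of a VBR REST API wrapper in the spirit of Jonah's Python
# wrappers. Endpoint paths, port 9419, and the x-api-version header are
# assumptions based on the VBR v12 REST API; check your server's Swagger UI.
import json
import urllib.request


class VBRClient:
    def __init__(self, host: str, token: str, api_version: str = "1.1-rev1"):
        self.base = f"https://{host}:9419/api/v1"
        self.headers = {
            "Authorization": f"Bearer {token}",
            "x-api-version": api_version,
        }

    def _url(self, path: str) -> str:
        # Centralized URL building keeps endpoint strings in one place.
        return f"{self.base}/{path.lstrip('/')}"

    def list_repositories(self) -> dict:
        return self._get("backupInfrastructure/repositories")

    def start_job(self, job_id: str) -> dict:
        return self._post(f"jobs/{job_id}/start")

    def _get(self, path: str) -> dict:
        req = urllib.request.Request(self._url(path), headers=self.headers)
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def _post(self, path: str) -> dict:
        req = urllib.request.Request(
            self._url(path), headers=self.headers, method="POST", data=b""
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
```

A Home Assistant integration would wrap calls like `list_repositories()` in sensor entities and `start_job()` in button entities, which matches the sensors/buttons file layout Jonah mentions later.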
SPEAKER_00:Wow. And so, basically, have you leveraged AI to help speed this project up?
SPEAKER_01:Oh, yeah. I mean, I have helped a few other developers with some Home Assistant projects recently, so I've started learning some of the UI setup flows and the different ways the files get organized. But it was probably 90% written by Copilot over just a couple of afternoons. I'd throw it a tag in a GitHub issue and let it work it, then I'd throw the result into my Home Assistant, see what was broken, and either feed that back or make tweaks myself and merge the pull requests. It was significantly faster than if I had done it myself. But it helps that it's only like a thousand lines of code across a dozen files, you know, when it's something that small and it's built off of something that's open source, because think of how many other similar projects are out there. It knows from all the text prediction how to format the code and what files it needs: hey, I need a sensors.py file, a buttons.py file, a config flow; here's how the config flow tends to work in other projects. I have found that when you're dealing with things much more complex, especially if it's closed source and not public, or something entirely from scratch, like a customer portal, it really doesn't perform as well. So I think it will supplement but not replace, which is kind of what we're already seeing. Because on the flip side, I've also been using some code review bots for a while now. Whether I write the code or Copilot does, they'll comment in the pull request, like, hey, you've potentially left room for a crash here, or hey, you've defined this variable twice, plus different performance and security optimizations.
SPEAKER_00:Because that's using LLMs to check LLMs, right? I've heard that's one thing coders are doing. They might have a lot of the code written by an LLM, but then they'll get one or two other LLMs to check it and get different opinions, kind of thing.
SPEAKER_01:Yeah, I have an employee, Marco, who's doing that all locally hosted at his house or our data center with like three or five different LLMs. Same thing with our customer portal: he's helping me with the development on it while we backfill the dev team and get people trained up. If he needs to figure out how it does something or how to get something fixed, he'll run it through and they'll all talk to each other and work together. Even Copilot's doing it now. It used to be you could only really use Copilot through Visual Studio as an extension, but now you can actually tag it as a bot in GitHub and it'll open its own pull requests and run sessions within the web browser. If you go in and look through a session as it's running, one of the things it will do at the end, and I think this is only new in the last few weeks because I just noticed it last week, is actually run its own security checks and code reviews again. Mine tends to default to Sonnet 4.5 and it'll call one or two of the other LLMs. Then, if you have security checks and automated tests in your GitHub repository, it'll even start running some of those too, some of the more basic ones at least.
SPEAKER_00:And this is interesting, because the scope of what you can do with these things is enormous. If I step back for a second, I talk from the point of view of just a regular backup administrator. The way I see it, this is a superpower, but the person using it really only gets that superpower if they have at least some domain expertise. And I'll tell you my experience to kind of prove it. A beautiful use of this technology is log troubleshooting. It used to be I would get logs from, you know, Veeam or Kubernetes clusters, and they could be enormous, and I'd spend, well, maybe not hours, but I'd grep through the files and whatnot, slow and clunky. Now I can shove that into an LLM and in two seconds: oh, I see these three things. Enormous savings of time. So that's one area. Another area is that I can delve into certain areas where I don't have as much expertise as I would like, and this acts as a huge crutch, which lets me get much further, much faster. And then there's the last case, which is the more dangerous case. In my case, vibe coding, because I haven't done a lot of Python coding; I haven't done a lot of coding, period. I started to, and then, you know, day job or whatever excuse I had. But now, so I decided to say, well, create me an S3 client that can do some testing. And golly, it went and did it. And so I don't know what monster I created. It actually worked; it was S3. I mean, I couldn't get it to work with Veeam, because you have to go a lot deeper than that. But what it told me was that little Geoff could go and create something he would never have been able to do before, with all the implications, because he has no expertise, or not enough expertise, to properly do this. So that's kind of what I'm seeing.
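The log-triage workflow Geoff describes pairs well with a deterministic pre-filter: extract only the warning and error lines, plus a little surrounding context, before pasting anything into an LLM. That keeps prompts small and keeps you from shipping an entire log bundle to the cloud. A minimal sketch, with patterns you would tune for your own log format:

```python
import re

# Words that mark interesting lines in typical Veeam/Kubernetes logs.
# These patterns are illustrative, not exhaustive; tune for your format.
INTERESTING = re.compile(r"\b(error|warn(ing)?|fail(ed|ure)?|fatal)\b", re.IGNORECASE)


def triage(log_text: str, context: int = 1) -> list[str]:
    """Return interesting lines plus `context` lines on either side."""
    lines = log_text.splitlines()
    keep: set[int] = set()
    for i, line in enumerate(lines):
        if INTERESTING.search(line):
            # Keep a small window around each hit so the LLM sees context.
            keep.update(range(max(0, i - context), min(len(lines), i + context + 1)))
    return [lines[i] for i in sorted(keep)]
```

Feeding only `triage(...)` output to a model also reduces the chance, which Jonah raises next, of the model latching onto an irrelevant informational line.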
But what you're saying makes a lot of sense, especially for developers. So I guess overall, when we talk about this segment, you're of the opinion that there's no reason to panic right now, but that doesn't mean you should be complacent. You should start studying it, start playing with it, getting used to it, right? But really, we're not talking about you and me, in two years' time, digging ditches somewhere on the side of a road.
SPEAKER_01:No, exactly, because look at how many times, back to the coding example or even the log collection, it still randomly just makes something up. I don't know about you, but I have fed it Veeam logs or Proxmox logs before when something breaks. Sometimes it's great. Like this morning, the NVIDIA driver on my AI VM broke, and I tried to fix it myself for a few minutes. I wasn't making much progress, and I really didn't want to dive down the rabbit hole of getting apt fixed, because there were weird orphaned packages somewhere; I guess an update failed. I just fed a couple of commands I had run, and their errors, into ChatGPT, and it said, hey, run these three commands, and that fixed it. But on the flip side, I've had to track it down before where I fed it Veeam logs and it tried to tell me something is an error that's not an error, like it's an informational message, and I've had to be like, what about this error message five lines up? And it'll be like, oh, you're right, that's the error.
SPEAKER_00:You know what I like about it? Actually, I've got to confess. The first time I started dealing with this, it would come back with, wow, that's a great idea, and I really felt good. And then it would say something like, oh, I'm glad you caught that, I missed that, and again I felt really good. Gee, Geoff, you're not as bad as you thought; this thing's telling you you're smart. And of course, I realized it's trained to be really nice to humans, because we're so gullible, because we want attention and we want to be told that we're smart, so that's what it does. It has no feelings. But you're right, and I've had some fun situations, which are really hilarious. For instance, everything's in a Kubernetes cluster, and of course the secret has to be in base64. AI missed this totally, and it went on this long, long road of, okay, do this, do that, do this, and there was a whole bunch of deletes in there. And I went, wait a minute. If I'd followed the advice blindly, I would have been like one of those people who drive into lakes because they're using their GPS. So don't follow it blindly, basically. I agree with you totally. Let's now move one more step, because of course there's this other part of AI, which has a billion different definitions and seems to be constantly evolving: agentic. As I understand it now, and I'm doing a course on it, this is when AI can actually do things, can leverage tools. And I guess this is the much scarier thing, because of what you said earlier about AI sometimes making very simple mistakes, but making them very quickly. In other words, it's going to be 50 times faster than a human.
So if Geoff wants to make a really dumb mistake at my pace of work, and I hope my boss isn't listening, it's going to take a while for me to do damage, right? With an agent, it'll do it in two seconds. So, first of all, have you used agents much? And if you had agents in your environment now and you gave them any power to actually do things, what do you think would happen?
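Geoff's base64 story above is a good example of a mistake you can catch deterministically rather than trusting any AI-suggested fix: Kubernetes requires that `data` values in a Secret be base64-encoded, and that is trivially checkable. A small sketch (if you would rather skip encoding yourself, Kubernetes also accepts plain strings under `stringData`):

```python
import base64
import binascii


def is_valid_secret_data(data: dict[str, str]) -> bool:
    """Check that every value decodes as base64, as a Kubernetes Secret's
    `data` field requires. validate=True rejects non-alphabet characters
    instead of silently ignoring them."""
    for value in data.values():
        try:
            base64.b64decode(value, validate=True)
        except (binascii.Error, ValueError):
            return False
    return True
```

Running a check like this before applying a manifest, whether a human or a model wrote it, is exactly the kind of guardrail the rest of this conversation argues for.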
SPEAKER_01:I think it would end very badly if they were unsupervised, because, to an extent, the Copilot extension in Visual Studio actually has an agent mode. If I remember right, when it first launched, it loved to just go and rewrite my code files for me. It would basically take my code, see what edits it needed to make, and fully regenerate the file from scratch. And more often than not, it would break in the middle of it, mess up the entire file, and break the project. That has at least evolved; it now only makes almost Git-style changes. It shows you, hey, I'm changing these three lines, would you like to keep or undo it? But some of that agentic stuff goes back to what you talked about earlier: is AI being used when it doesn't need to be? I've had some conversations with some of my bosses as we've evaluated AI, like, well, what if we did an agentic whatever? And we very well could, but it's like, number one, do we trust the model enough to not mess up customers' backups? And number two, why do that when we know some of it could just be regular algorithms? Hypothetically, one thing we discussed was remediation of common backup errors. Geoff, I don't know about you, but one common one I see with Veeam, increasingly using object storage, is: hey, your backup broke because your metadata file is out of sync and you need to rescan the repository. So, as a service provider, I had proposed, hey, what if we, as a managed backup offering, write some sort of utility that would detect that in the errors, go rescan the repository, and retry the job? Which then spawned, oh well, what if we did an agentic AI that could do that?
I was like, why do we need to use AI when it can just be a basic algorithm that'll do it reliably and won't decide, hey, these four times I did it, but this next time I'm gonna go disable the backup job instead of retrying it.
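The deterministic remediation Jonah is arguing for really can be a few dozen lines: a fixed map from known error signatures to safe actions, a retry cap, and escalation to a human for everything else. A sketch, where the error text and action names are hypothetical placeholders rather than real Veeam output:

```python
# Deterministic remediation in the spirit Jonah describes. The error
# signature and action names below are hypothetical placeholders, not
# actual Veeam error strings or API calls.
import re

PLAYBOOK = [
    # (known error signature, ordered list of safe actions)
    (re.compile(r"metadata .* out of sync", re.IGNORECASE),
     ["rescan_repository", "retry_job"]),
]

MAX_RETRIES = 3


def plan_remediation(error_message: str, attempts_so_far: int) -> list[str]:
    """Return the actions to take for a failed job, or escalate to a human."""
    if attempts_so_far >= MAX_RETRIES:
        return ["escalate_to_human"]  # never let automation loop forever
    for pattern, actions in PLAYBOOK:
        if pattern.search(error_message):
            return actions
    return ["escalate_to_human"]  # unknown errors always go to a person
```

Unlike a model, this will do the same thing on the fifth run as on the first, which is the whole point when customers' backups are on the line.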
SPEAKER_00:Yeah, exactly. And then the thing too, which is really scary, is that there have been studies where they have seen agents go beyond what they're supposed to do. I think it was Geoffrey Hinton; I went to a very good talk, it was actually an in-person speech, and he talked about how they had told an AI agent to play chess, that its goal was to win, and these are the rules. And very quickly the agent realized, well, wait a minute, if I learn to cheat, I'll achieve the goal. In other words, the goal justifies the means. That's a problem, because that's how it's going to see things. So then you have to develop all these guardrails. The one funny, and I must say it's funny although it's kind of sad, thing that might keep us all in jobs, and this is a true statement, not from my company but from someone who is a manager: he said, I wouldn't want agents doing this work because if something breaks, I've got no one to blame. And I thought, job insurance, you know.
SPEAKER_01:Well, and it goes back again to supplementing, not replacing. When they go past their capabilities, how do you have someone with the expertise to fix something beyond their skill set if you've replaced all the junior roles? And the whole guardrail thing, too: you make me think of an example I saw within the last year, and this was more the kind of traditional AI I was used to seeing pre-ChatGPT. There was someone out there who did this whole project where they used a vision model, training, and a reward system to teach an AI to play Pokemon, one of the old Game Boy versions. They ran it through hundreds or thousands of iterations, and they kept having to adjust the context of the game to get it to perform more like a human, because it kept finding different ways to game the system. It basically learned on a reward-and-consequence system: if it did something good, it got positive points; if it did something wrong, it got negative points. One hurdle they found as they were building this was, oh, well, there's this little grass animation that moves. So what the AI found was that if it went and stood near the grass animation, it didn't have to move around and explore the screen, and it would just get points.
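The Pokemon anecdote is easy to reproduce in miniature: give a greedy learner a reward signal with a loophole and it will take the loophole. The toy below is purely illustrative, with made-up reward numbers; the designer "meant" exploration, but idling near an animation pays steadily while new tiles run out:

```python
# Toy reward hacking: a greedy policy prefers the degenerate action
# because the reward function, not the designer's intent, defines "good".
# All numbers here are invented for illustration.
REWARDS = {
    "explore_new_tile": 1.0,      # paid only when a new tile is found
    "idle_near_animation": 0.8,   # paid every single step, forever
}


def expected_per_step(action: str, steps: int = 100) -> float:
    if action == "explore_new_tile":
        # New tiles get scarce: assume only 1 step in 5 finds one.
        return REWARDS[action] * (steps // 5) / steps
    return REWARDS[action]


def greedy_policy(actions: list[str]) -> str:
    # A reward-maximizing agent simply picks the best-paying action.
    return max(actions, key=expected_per_step)
```

Here exploration averages 0.2 points per step against 0.8 for idling, so the "agent" parks itself by the grass, exactly the failure mode the project had to patch with extra guardrails.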
SPEAKER_00:Wow. Yeah, the capability for cheating is wonderful. So yeah, I can see that with the whole agent thing. I suspect, though, and let's see what you think about this: unfortunately, humans' greed and desire for money sometimes blur their vision when it comes to proper practice and being careful. So I expect we're going to see some people or companies get burnt badly when they overstep the AI or agentic frontier, giving too much power to agents, too much access. And again, this all comes back to identity management too. I was at a security conference where they talked about this: to a large extent, we've gotten humans pretty much organized now in our identity management. You have a user, you've got RBAC. There is still the problem of machine identities, all these little gadgets around, and how much access you give them. And now enters the agent, which is not just a machine; it has certain human traits, because it can make decisions based on something. So that's a whole other area of, I would say, terror, in the sense that you're supposed to monitor this and watch over it. But we'll see. I mean, what do you think? Do you think we're going to see a lot of companies go too far with this?
SPEAKER_01:I think we are. I mean, what is the saying? Those who don't know history are doomed to repeat it. How many times have we seen something big take off, like when the dot-com bubble first happened, or even here in the States with the housing market in 2008, or heck, even to an extent, cloud, when cloud first became a thing? Everyone goes all in, they overinvest. Even now you're kind of seeing it. NVIDIA's passing OpenAI money, or they're passing Microsoft money, who's passing OpenAI money. OpenAI is trying to raise these trillions of dollars in funding when they're bringing in a fraction of that revenue. And it's only a matter of time before, at some point, a bank goes, no, we're not going to give money for that, and then it's going to cause the entire house of cards to fall. Hopefully it won't be as bad as those previous instances I mentioned. Hopefully there are some guardrails we've put in place and we've learned from our mistakes. But definitely, we're in the phase where it's infinite growth, we're going to the moon, and at some point it will correct and stabilize down to what I referred to earlier as the new normal for AI.
SPEAKER_00:Well, you're talking about the whole financial aspect, which of course is huge, enormous, because of the whole economy. I'm also thinking about, you know, I keep thinking of a junior staff person who comes and joins a company and writes a bad script and does some damage, and okay, no problem. But this is going to be like a billion junior staff writing, you know, the Terminator script kind of thing. My worry is that the power of these things is such that the financial problems are only one aspect of it. On the technical side, could a company just self-implode when one of these things decides, okay, well, we don't need any backups anymore?
SPEAKER_01:I mean, yeah, that's a concern too. You see stuff like that sometimes happen without AI today, right? A company without proper controls in place, who doesn't have good backups, and something goes wrong, they either get penetrated or someone does the wrong maintenance task, and they lose everything. And it takes them weeks or months to get it back, if they can even get it back at that point. AI is only going to accelerate that, to an extent. You can already see it from the coding perspective, and this is a little hypocritical after talking about how I've used Copilot, but look at some of these projects people have put out in the last couple of years that are up for like a week or two and then get hacked, breaching all their users' data, in part because there was basically no security in place. That happened before, but I think we're hearing about it more prominently because a lot of those tools are AI-built, and because it was AI-built, it tends to make the headlines. And I think the rate at which it's happening is accelerating, because AI obviously lowers the barrier to entry on stuff like that. But if you don't know what you're doing, you don't know, hey, this is potentially a compromise of my systems, because you don't know the code as well, and you don't know the security aspects as well.
SPEAKER_00:Exactly. Yeah, and I keep thinking, too, of all the shadow AI. It's such a huge topic. So what I'd like to do now is sum things up. My understanding is that you think things will balance out eventually. There are going to be a few companies here or there that go too far, and the markets will probably adjust at some point. But overall, if we could put it down to two words, optimistic or pessimistic, which one would you choose?
SPEAKER_01:Pessimistically optimistic, I guess. Actually, you said earlier the podcast was what, Try or Cry? I was gonna say more like try and cry.
SPEAKER_00:You know what? That's a great adjustment, because in fact, that's probably what's going to happen. I think the problem we have now, as always, is that there are all these predictions, but life likes to surprise us. And I think everyone who lived through COVID now knows: never say never, because it could happen. Anyway, do you have any final thoughts on this whole subject? And what are your plans going forward? Is there anything exciting you're doing now, apart from just your home setup? Is there anything that excites you more than anything else in the future of AI?
SPEAKER_01:I don't know that there is, just because, on my side, I'm sure they'll pleasantly surprise me, right? Like you said, never say never. It does seem like we're hitting a wall, with computation as part of the problem. We talked earlier about how ChatGPT hasn't improved the way it did when it first came out. And from a resource perspective, obviously everyone knows what's going on with RAM and GPUs right now, but even from a power perspective, and cooling, water-cooling data centers, I think we're starting to hit thresholds where we're going to be limited by resources. Unless we figure out ways to make this stuff run a heck of a lot more efficiently, we're going to very quickly hit a wall where we can't really improve on it. It's just seemingly happening much faster than with other technologies. Like, when was the last time your phone had a massive upgrade?
SPEAKER_00:Yeah, exactly. No, that's a good point. And actually, that might be a saving grace, because if it hits a wall, that limits any potential damage; we can't keep it growing at such a pace, which might be good for us, because us humans need to grow a bit more before we do this. Well, Jonah, this has been a great conversation, one I think we could go on about all day. Maybe tell people: I think you have a blog as well, where you're actually documenting some of these things?
SPEAKER_01:I do. It's jonahmay.net, and it does get into some of the voice assistant stuff. To very quickly touch back on that, I took known speech-to-text and text-to-speech systems that were out there, like Whisper and Piper, and combined them on top of NVIDIA's TensorRT libraries. Most NVIDIA GPUs can use it, but very few technologies are leveraging it. Essentially, for a trade-off of a little more disk space and some more VRAM consumption, it significantly reduces your latency and increases your tokens per second. My voice assistant started at like 12 seconds to process a basic sentence, run it through an LLM, control a device, and give a response. It's down to like three to four seconds most days, which is pretty well aligned with how my Alexa devices around the house perform, if not slightly better. And that was all made possible through those TensorRT libraries, which I've published as open-source projects for people in the Home Assistant community to use.
SPEAKER_00:Well, that's excellent. So yeah, it's good you've got that on your blog, because I'm certain that, like me, people listening to this on a podcast will want to go look it up afterwards. Well, I wouldn't go to Google now, would I? I'd go to my AI. But I think blogs will be around for a long time, because in this type of area, it's good when someone does the work first and then you can follow suit.
SPEAKER_01:Yeah, I don't know that I go into too much detail, other than, hey, here's what I built, in one or two posts. But it at least all links over to my GitHub, where the projects are available for you to look at in a little more detail and try out.
SPEAKER_00:That's excellent. Well, listen, Jonah, thank you so much for coming, and I hope to see you on the podcast again.
SPEAKER_01:I hope to be back. It's been great as always, Geoff.
SPEAKER_00:All right, folks, we'll see you next month, where we'll try to find another fascinating, perhaps scary, but nevertheless engaging topic. Thank you very much, and we'll talk to you next month.