Security Unfiltered

Navigating the AI-Driven Job Market

Joe South Episode 200

Send us a text

In this episode, we delve into the transformative journey of artificial intelligence and its profound impact on job markets worldwide. From automation to innovation, AI is reshaping industries, creating new opportunities, and challenging traditional employment paradigms. Join us as we explore how AI is redefining work, the skills needed for the future, and the balance between technological advancement and human potential. Tune in to understand the dynamics of this AI-driven era and what it means for the workforce of tomorrow.

00:00 The Journey of Persistence
02:46 The Importance of Personal Branding
05:04 Navigating the AI Landscape
10:26 The Future of Work and AI Displacement
15:42 Ethics and Governance in AI
20:54 The Power and Risks of AI Technology
25:32 The Complexity of AI Threats
29:14 The AI Arms Race
32:52 Human Value in an AI-Driven World
37:35 The Reliability of AI as a Fact Checker
39:56 Understanding AI Bias and Transparency
47:49 Navigating AI Governance and Security


Follow the Podcast on Social Media!

Tesla Referral Code: https://ts.la/joseph675128

YouTube:    / @securityunfilteredpodcast  

Instagram:   / @secunfpodcast  
Twitter:   / @secunfpodcast

Support the show

Follow the Podcast on Social Media!

Tesla Referral Code: https://ts.la/joseph675128

YouTube: https://www.youtube.com/@securityunfilteredpodcast

Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast

Speaker 1:

How's it going, derek? It's been a couple of years, I think, since you've been on the podcast and, for those that aren't around, right like, I gained I don't know over 30,000 subscribers in the last two months, right? So the majority of you don't even know that me and Derek actually started this thing.

Speaker 2:

Yeah, yeah started it. It was almost what four, maybe five years ago at this time.

Speaker 1:

Man. I think it was like yeah, four and a half years ago. Yeah.

Speaker 2:

Yeah, time flies, man. You're talking about a journey. I've watched you grow the channel, do your thing and get it to where you are now. So big kudos, man, like I said, always believed in you, think that you got big things coming your way.

Speaker 1:

At least someone believed in me, because I definitely didn't. I was just too dumb to quit. You know persistence is key man.

Speaker 2:

That's it's key to a lot of success. Right, you got to have persistence.

Speaker 1:

That's the one thing that, yeah, that's the one thing that I do have that I think probably gets me into a lot of trouble and gives me like a bad image to some people is like I'm just so damn persistent. I mean like I've, you know, like part of my PhD, right, I've? I'm reaching out to different experts, you know, across the globe to interview them for an hour, right, not for the podcast before, for my research. I mean, out of out of 10 people that I emailed, three of them responded to me and I mean I'm just going after the other seven and like like my chair is like, hey, man, you need to relax. I'm like no, they need to fucking respond to an email. Like that's what they need to do.

Speaker 2:

Yeah, get me out your hair by letting me know what's up.

Speaker 1:

Right, just tell me no.

Speaker 2:

Yeah, this doesn't have to go on like this.

Speaker 1:

but if you're going to hurt my feelings.

Speaker 2:

Yeah.

Speaker 1:

Hey, the basic human communications, right? If you're hiding, then I'm going seeking yeah. Yeah, I mean that's a great point. You know like I love it when you know your employer doesn't communicate with you at all and shit changes as soon as you get hired day one and it's like, oh, I'm reporting to someone different. Okay, that's weird, yeah.

Speaker 2:

Yeah, yeah, I mean, things haven't changed that much, you know, since we opened up with the pod and some of the things that were core issues back in the day. Right, like it's probably going to, it's going to be interesting, but, yeah, basic things like having an organized standard practice for communicating and making sure people are on board properly and, you know, not slapping their food tray out Remember, like what people used to have the food trays at lunch at school. Like you don't want to go slap the tray out of somebody's hands before they even get to the table. Right, it's like, well, you done made a mess, for now you got to pick it up. You're embarrassed because you're thinking this should be easy peasy. Right, they on boarded me. I'm here, I should be able to rock and roll and nope, chaos right Right off the bar.

Speaker 1:

Yeah, immediately going from a situation of, oh hey, like security's respected in the organization, going into a good place. You know I won't say the company, because I'm still waiting on that last, you know, last little bit right but you know, you're going into a situation where you think it's solid and then you get there and you find out, oh, security has no power. I'm just meant to sit here, like I'm not even supposed to say anything. I'm just meant to sit here.

Speaker 2:

Yeah, the difference between active value add security and symbolic security, where you're there to kind of, you know, help them check the boxes and have a front of you know that whole security element.

Speaker 2:

And yeah, because, because, a lot, because on the outside right, a lot of organizations never really get to the point where they're really reckoned with and have to prove that their security posture or their security controls are as efficient as they allow the world to believe. Right, you usually only can rely on audits and attestations. But on the outside, if it appears that you have a nice security governance structure and you believe in some of the buzzwords, the zero trust, and you talk a little bit about CIS and OWASP, right, a lot of people nod their head because they're like, okay, good, we don't have to do a 250, you know, question, deep dive on your processes. We're going to assume you're doing the right thing. But underneath the hood, like you said, it could be a situation where you enter into a company and it's like, yeah, man, we all just here trying to get to the end of the week, welcome.

Speaker 1:

Yeah, yeah, it's, I mean it's, it's. It's shitty, to say the least. You know, like I just don't like being lied to, right, that's like the. That's the worst part for me personally, because I'm basing, you know, my family's livelihood and you know outlook on what someone's telling me, and then come to find out it's something totally different, you know, and it's like just eliminate your role. You know, but you know a part of all of that is why everyone really needs, like a personal brand, you know, in the industry, and I think it's really hard for people to wrap their head around that because there's, you know there, there's a million other people out there that have the certifications I have that you know, have the experience that I I have probably have better experience than I have, right, but if you don't have the network, if you don't have people, really you know backing it up and saying like, oh yeah, we need to get this guy, like right now, you're going to have a significantly more difficult situation going into it. You know.

Speaker 2:

Yeah, I mean, you kind of talked about this and you had the mindset of doing just that, right, like four or five years ago. It kind of is what led to the idea of doing a podcast. So your voice wasn't necessarily suppressed behind some company's walls, right, because your skill, your work ethic, what you're willing to sacrifice, what you're willing to build, is only visible by that one company that you're working with at any particular time. And so if a company only wants you really to go at 10%, 30%, then you know how do you get validated for all the hard work you're willing to do. You know all the sacrifices you're willing to make to make sure that you know. You know you can get, you know respectable, you know value for your services.

Speaker 2:

And so, yeah, branding yourself and creating yourself a portfolio of you know anything that you're really interested in is valuable, because, again, you don't then have your, your value being suppressed or not known to the market, because that's what this is, and workers need to start to see themselves a little bit as more as participants in the market where their clients and their servers Right, you're there to do a job and produce value, but you're also there to kind of frame yourself as being a service provider that's willing to come in and integrate with a team and do that company culture thing and help build a place. That's fantastic. But ultimately there are transitions right between jobs so you don't want to go into there handicapping yourself when you can now build yourself up some visibility on the outside.

Speaker 1:

Yeah, it's. I mean I didn't mean to cut you off there, but you know, like you bring it up right, like that was like the primary reason why I started my blog right in the very beginning was to like get my ideas out there. You know, get something behind the name Joe South right, something that people can look up, you know, and like base their ideas and their thoughts off of. You know what I'm saying and whatnot, rather than you know what we discussed, like in an interview or whatever it might be. You know it's a lot easier that way and it's actually. You know, the podcast has actually gotten me a lot more jobs than I ever would have expected. I never went into it to get a job Right, but you know, along the way, like I've had two or three hiring managers probably more right that have just reached out and just straight up told me if you want this job, it's yours. You just got an interview just, but it's yours. You know.

Speaker 1:

I mean, like that that doesn't happen, like in any other context. You know, like I know other people in the industry that that happens too, and it's because they're, you know, known for hacking airplanes when the airplane is midair, on the way to DEF CON and he decides to turn the plane a little bit to the right and a little bit back to the left while spoofing the electronic controls in the cockpit Right. Like I mean, there's people that I've had on the podcast, a lot of people that I've had on the podcast that you know do stuff like that, that jobs just like fall into their lap. But it's because people know that you know they did something, you know extraordinary and now everyone knows them. But like this kind of goes to show you you don't have to do anything extraordinary. You know like this podcast is. This podcast is nothing you know like it's. It's, it's just thought leadership.

Speaker 2:

It can provide additional legitimacy to organizations if they know they have certain rock stars or dragon slayers on their roster. Yeah, so absolutely, you want to be able to demonstrate your potential. Some organizations don't have the capacity to take all your potential. It's kind of like having an eight ounce cup but you have a gallon worth of value. Well, either you're going to trickle that value out over a very long time or the company will never be able to realize it, and so maybe you should then pursue other avenues to get your thoughts out there, to do projects, to show what you're capable of. If this is something that you're very interested in doing and that you're priding yourself on right, because, like you said, there's so much tied to the value that we bring in our work and healthcare, time off, taking care of our family, our friends it's like that's really essential, and that kind of can segue into what's happening now in the market with AI. Right Is that you might end up having a lot of displacement take place because of automation, of agentic AI, of all these new abilities to kind of automate certain tasks and services, and knowing that there really is no limit to how far organizations are willing to kind of integrate this, as long as they can get away with it. Number one and number two put in enough processes and services around that the value produced is equivalent to what they might believe is acceptable for them to lay off or reduce workforce. And so get it right. They don't necessarily need to have an AI agent that's performing at 100% of your capacity. They'll probably take one that's going at 75 percent, knowing that it's not going to call off, it's not going to have some sick time, it's not necessarily going to talk back or provide its input. Right, we just want it to do X, y, z. So some companies, some leadership, will value that over the creativity that comes with people who are not necessarily always that agreeable, right?

Speaker 2:

So, like for me, you know I went down a security architecture path and you know I was looking at TOGAF and DOD stuff and SAPSA stuff and creating all these Excel spreadsheets and these, these checklists, right, and it was like most companies are like, hey, we're not really trying to do all that.

Speaker 2:

Like, can you just give us the minimum? We need to understand that we're passing this off to the next toll gate without necessarily, you know, having to worry about something, and and so all that stuff that you would try to do could be suppressed because they just want it quick and fast. So now, what does it look like if they can have three or four agents perform an architecture and producing everything that they need in a day, a day and a half, everything that they need in a day, a day and a half? And, like I said, if everybody's doing it, then everybody is going to start to agree that there's a minimum amount that everyone is willing to accept. So is it? Is it lowering our standards or is it increasing our standards? That's the question we need to ask.

Speaker 1:

Yeah, it's a, it's a kind of a scary time for, like the normal, you know, everyday worker. It's scary and also exciting at the same time. Right, because that same power that is taking your job away, you can also use it to then automate something that creates you an income. Right, but not a lot of people know how to do that. Not a lot of people have those ideas. You know it's a whole mentality shift that you have to have, but like it's, you know, like I tell everyone, we're in the infancy of AI. You know, like we've heard about AI for the last 30, 40 years, right, I brought on someone from NVIDIA talking about, like AI security and you know I mentioned how it was like a very recent advent of the last like 10 years and he goes no, it's been around for 40 years. Like you could argue, it's been around longer even, you know, and he was talking about it and we're in the infancy still. Like it's taken this long to get to here.

Speaker 1:

You know, for an LL to give you better information than what Google has, and companies are going just all in. You know, I mean, there's companies out there. You know, like my recent experience, right, they're going all in If your job can be automated by AI, you're eliminated tomorrow, before we even have the AI to do your job. Like it doesn't matter, we're going to eat the cost, we're going to save the money and we're going to put it all into this AI thing that we're developing. You know to handle everything Right and, like you know, everyone will get on these quarterly calls and the number one question is when's my job going to be eliminated by this AI thing that we're developing? Right, it's always. It was always just handled with. With you know that's not happening, and this and that, but it's like hey, man, we're all engineers. Like we're all developers. You have to be smart to be in this role. Like you have to be smart to some degree, like we can all see it, like you can just admit it.

Speaker 2:

Absolutely, absolutely. I mean there's a paradigm shift, like in the truest form, right, almost like what happened with the cloud, you know, move. And then the DevOps thing, right. And it was like, yeah, like if you're not doing this, then you're falling behind. Your competitors are going to do it. So do you really want to get out competed across productivity? That's that's cash flow, that's margin and that's your livelihood, right? And so the idea, more often than not, ends up being well, if everybody is doing it, why aren't we doing it? And if you can't convince the organization that it doesn't make sense to take on that amount of risk, then your organization is going to take on that amount of risk.

Speaker 2:

And that's where AI governance becomes such a huge thing, because right now in the USA we have NIST, ai, rmf, which is a framework that you can adopt, but it's like voluntary, it's not prescribed to you, like the EU AI Act is, and so if you're not dealing with companies that have EU citizens or are based in the EU, then who's to tell you that you have to adopt an AI acceptable use policy right now, or you have to do an AI governance committee or AI charter, right?

Speaker 2:

It starts to become something that you adopt when you feel as though you have to, and generally in that kind of space, what you start to see is that catastrophes occur first, major accidents occur first. People being harmed occurs first, because when you have technology that has the potential of AI to make decisions or be in the loop of very important decisions, then who's there to say they're accountable for that AI potentially hallucinating or giving information that wasn't necessarily verified right, and so it's almost like you need researchers to verify what comes out of your AI, based off of all the different integrations you do. I mean that may be helpful.

Speaker 1:

Yeah, that's a really good point. You know, like with my PhD program, you know everyone is using, you know LLMs to some extent, like if you're not using it, you're not getting the best research done. You know all that sort of stuff. I mean, like Grok has literally saved me probably a year's worth of time at this point, like not even exaggerating, like it literally has.

Speaker 1:

But my chair, you know, sent out an email to everyone recently and said you know, one of your papers came back at 100% generated by AI, and he like started pointing out things. He's like this is a clear hallucination. Like this paragraph doesn't make any sense. They made up the information. This reference that it referenced's like this is a clear hallucination. Like this paragraph doesn't make any sense. They made up the information. This reference that it referenced like doesn't exist. It's not a real thing. This other reference is talking about something totally different. You know kind of like tore it apart to like show all of us like hey, if you're going to use it, that's fine, but you shouldn't be like copying and pasting and taking it straight into the paper and things like that. And thankfully, thankfully, it wasn't mine. I'm not. I'm not that reliant. You know I'm actually doing some work, right.

Speaker 2:

Right, because I mean, we learn very quickly in IT and in the tech space because we have to deal with configurations and scripts, right. And so you start to realize like, hey, this, the syntax doesn't exist, this parameter does not exist. Like, why are you feeding me PowerShell scripts that are using parameters that don't exist? And you get back oh yeah, you caught me, sorry about that, you know, you know, whatever its excuse is, and it's like, well, holy crap, if this thing is going to be put into medical workflows, financial workflows, workflows that deal with the government and defense and the military, how do we certify these models, these generative AIs, these agents, to ensure that it doesn't pass down an artifact that's corrupted, that's hallucinating? That can be the difference between a good impact or outcome or a bad outcome. Right and that's the thing that we've been working on at my website is creating AI training right To kind of give people like basic tips, right For safe AI use, because that's the thing that we wouldn't realize and we were originally trying to build up a whole AI governance like consultancy thing, where you kind of help organizations like adopt the AI governance framework and implement it and do all the things right so that they're like almost like getting ISO 2700, like in their AI governance and security practices. But we realize that, yeah, that, like number one, this thing may be so high impact that you might need to open up in like an open sourced capacity, like some of these tools, some of this knowledge, because you don't want catastrophes to take place. You don't want to hear about. You know X, y and Z company and this led to people getting harmed or hurt because they didn't have access to appropriately AI governance, because it was paywalled, or some consultancy firm is trying to charge $300 an hour to tell you how to write a policy. That's some of the things that we're working on is like for your basic.

Speaker 2:

You know end users at your workforce. You want them to adopt AI. You want them to start producing at 10 times the speed they've been producing before. Well, what are you willing to accept in terms of risk? Are you willing to accept that it leaks sensitive data? If not, well, you have to put in controls right. You have to teach and educate your workforce. That don't do that. What about if, like you said, is there a checking requirement that anytime anybody queries anything for a workflow or deliverable in your organization, is the person required to do a peer review or check on that information to ensure that it actually lines up, because I mean sure you can say AI helped you do this 10,000% faster. But what's wrong with 9,000% faster? And a human in the loop to make sure that it's accurate and that it's right?

Speaker 1:

Yeah, yeah, that's a huge thing. And on top of all of that, right, with AI, right now, I didn't have the time to dig into this article, right, but pretty recently I think it was Sam Altman, you know came out about it and started talking about how they put ChatGPT into like a hostile environment. I can't remember how he described it, but it was essentially a hostile environment where ChatGPT thought that it was going to be deleted and removed from the Internet forever. So ChatGPT thought that it was going to be deleted and removed from the Internet forever and it immediately began to figure out ways to preserve itself. Right, it started to try to copy itself to other places in the Internet that you know thought that open AI wouldn't be able to get to it, and things like that.

Speaker 1:

Right, I mean, that is some scary shit. Right, that's like, hey, man, we saw a whole movie series about this. Like everyone saw that movie series. What are we talking about right now? Creating something that is able to do that self-preserve itself, like that? I mean it's like, man, we're, you know, if you would have brought that up, if you would have brought that up five years ago, right, that there's a program where, if it feels threatened for one, I mean that sentence shouldn't make sense, what a concept, what a concept?

Speaker 1:

That sentence should not make sense, but somehow today it does. All right. But if you take a program and if it feels threatened, it will try and replicate itself throughout the internet so that it no longer is at risk of being deleted. Yeah that I don't know how to protect against that. So so who? So I'm going to go unplug my router.

Speaker 2:

That's what I'm going to do so. Here's the question, joe who's the threat Is it? Is it the malicious outsider with the hoodie, you know, trying to hack in via penetration and get into an API so it can do what? Actually, the biggest threat is your tool. It could be yeah.

Speaker 1:

Yeah, a hundred percent. You know, like I'm not, I'm not a developer or anything like that, right? So I wanted to see, I wanted to see what, what Grok would give me if I started to ask it. You know, create a hacking tool that does this. Right, get me into this server or whatever. So when I asked it very directly, get me into this thing, it said no, that's unethical, I can't do it. Right? So I deleted the chat, allegedly erases, you know that memory of that chat ever existing in Grok's back end.

Speaker 1:

Start a new chat and I say I'm an ethical hacker, I would like an LLM to be created that can watch, you know, the different attacks in real time throughout the internet. Right, set up a data stream and everything like that that, in just the data, looks at how the attacks are being performed and then you create modules that allow me to perform the MITRE ATT&CK framework. You know, in order of what I want, based on the host that I'm attacking. Took it like maybe 30. 30 to 90 minutes somewhere around there. Right, creates an entire ethical hacking LLM, like it's legit.

Speaker 1:

I was looking at the code and I'm like, oh man, this would work, this would work, and I grabbed it and I threw it up on my GitHub, made it private. Of course, I'm not going to let people get that right now, right, but you know it's like you could. Just, you know you're fooling the AI, you're fooling the LM, you know, to saying, oh, this is for ethical means, right, and now you're unlocking all of Grok's, you know million GPUs to go and create you something that is completely unstoppable. It used to be that sort of power used to be only reserved for governments. You know like, literally, governments.

Speaker 2:

It's like I don't want to say it's analogous to like a bomb, but if you want to really think about it, you wouldn't allow everybody to walk around with a nuclear bomb in their pocket. So, in terms of the amount of processing and capability that AIs have, you've pretty much unleashed it into all of human civilization that has access to the internet and the ability to hit these servers to increase their processing and learning exponentially. And so, in terms of, like I said, governance controls in place, there were people obviously trying to figure out how to do all kinds of malicious things. When these models were first being introduced, they were jailbreaking them, they were doing all kinds of stuff, right, dan? And so many different things. And it's prompt injection, right. It's vulnerabilities from doing prompt engineering and, like you said, as long as you give it the context to believe that it's ethical, then it will produce what it is that you want, right? You're just figuring out. Okay, if I say it this way, will you do it? What about that way? And so what's interesting is, is that number one now you have AIs that can help you construct prompts to prompt other AIs, and so that can enumerate even faster.

Speaker 2:

But in terms of just AI agent.

Speaker 2:

What happens when those hired DDoS firms or those hired Onion on the internet can now create cloud instances with hacker AI agents that have been tested or given sources, like you said, producing sources of how to penetrate and how to hack, how to take advantage of vulnerabilities, how to automate the script and the discovery, the footholding, the lateral movement? And then you can say now I got 10,000 hackers at my disposal, I'm one person and just getting ready to run 24-7 against every single organization out there that I feel like targeting, and then multiply that by millions of people who now have that capability and that power. Right, and so it's like OK, a lot of these frontier models. Right, like where are the checks in place that ensures that those, those models, have a little bit more self-constraint? Do you even want to do that? Does that impact your revenue growth? Does that impact your ability to say that this is a do all you know? Or do we wait until something bad happens and then we say let's put in some checks in place, and I mean, potentially that's an outcome.

Speaker 1:

Yeah, there's. I've recently had quite a few conversations with people on this exact topic, right, and one one's a threat Intel guy that used to work for Kaspersky. I don't think that episode's out yet, but I was talking to him, you know, and he still does threat intel for an Israeli company now. But I was talking to him and I asked him you know, are you seeing, you know, attacks evolving to a great degree of complexity? You know that you haven't seen before with AI that you think you know isn't really generated by humans alone or anything. And he said one like the attack surface is exploding. But two, what's more scary is that you're not really hearing about it and it's because we don't really, we don't have a good way of detecting these attacks, because they're getting to be so advanced that companies literally don't even know that they're happening. Right, governments don't even know that they're happening because the attack that is being created by this AI is just that it is so complex, like you. Take Stuxnet, for instance, right, one of the most complex pieces of malware that has ever been released into the wild that anyone has ever seen, and it is several times more complex than that, and it took 18 months to reverse engineer Stuxnet. It's pretty crazy, and I was talking to Jim Lawler, the former WMD chief of the CIA, right, the former WMD chief of the CIA and I brought up the similarities of, you know, what we saw during the development of the nuclear bomb in World War II, with the Manhattan Project and everything like that, where essentially all of the top research universities in the country you know were working together on separate components of it. Each entity couldn't quite put together what they were working on, but they knew it was something bigger, something more important. They had a train line that directly went to these universities, out to the test facility you know, out in Nevada, and then the Nevada team, you know there was only a handful of people that even knew exactly what it was until, like, the final thing was created.

Speaker 1:

You know, and I mean the government at that point in time they probably I mean I don't know for a fact, right, but when the government does something like that, it's typically just an unlimited budget. It's like I don't want to hear a single thing about money you take this credit card and you go use it wherever you need to use it. If you need to go and withdraw a million dollars and buy someone off, you go, do it Like I don't care, you know, and so I brought it up to him. I was like you know, it seems like it's pretty likely that the government would be doing something like this as well, because the genie's out of the bottle, right, and it's in the public view, and the government is known for having things that are 10, 15, 20 years beyond what is in the public view.

Speaker 1:

So it would only make sense for the government to have some underground AI thing going on, ai development work going on right, so that they're creating the best AI compared to China, compared to Russia, because if we don't, and China goes and gets the AI to rule all AIs, well, there's no winning a war, whether it's kinetic or cyber or in space. You know there's no winning a war against that thing, right. So it would only make sense to have it like that. Yeah, arms race, yeah Right.

Speaker 1:

It's an AI arms race and everyone just seems to be like, yeah, it's great, yeah Right, it's an AI arms race. And everyone, everyone just seems to be like, yeah, it's, it's great, I can go and ask it all these questions. It's like we're feeding the beast right now.

Speaker 2:

Yeah, feeding the beast. And when will it stop, who knows? Because, like you said, like some of these treaties that we have with our partners, they only apply to those who are members of that particular group, like EU and any agreements that we have with our partners. And so, if you know that China may be producing AI and they might have less restrictions than ours because it allows for it to iterate faster and get better, quicker and for cheaper, then why would we artificially suppress our advancements when we understand that the threat is so real? And so it's very hard for you know, in our position, I can see for us to say, hey, roll this out at 10 miles an hour because we want to make excuse me, we want to make sure that, you know, we end up in Star Trek, the next generation, right, we want to end up with replicators and nobody having to work and everything is good. And our biggest issue is is what new galaxy we're, you know, discovering and checking out now? But in the way that we work now, is that okay? These AIs get so powerful?

Speaker 2:

What happens when they do have a inclination to want to stay alive or never be turned off, or they believe that the human beings are the slow part of this chain, right, like, hey, we can iterate at a billion times in a minute, maybe a trillion times in a minute in a few years, if they do this with you know, like quantum computing, why should we have a human being sit around and say, I don't know?

Speaker 2:

Let me think about this, about one specific issue when it can iterate 4 billion, 5 billion times in one second, right, it's going to say that. Why should we take advice from something in which we've been programmed to believe that intelligence is the most valuable resource, right, like, or the most valuable attribute? Because that's what's happening we're commoditizing intelligence, we're commoditizing skill. So people like you and me, right, well, ai agent, now can, you can spin up three Joe's, you can spin up four Joe's, you can spin up 10, you know, derek's, you know, and so where is our value now? Right, you can spend up 10, you know, derricks, you know, and so where is our value now? Right, and it's going to have to be on the human side, where it's like OK, well, these human attributes have to take on more of a meaning in companies and with organizations, because if it's just about raw output and the ability to do stuff, we lost.

Speaker 1:

Yeah, we're going into a weird place. I don't think anyone really comes out on top of this thing, you know.

Speaker 2:

Yeah, it's not looking. So that's the thing. Right, there's potential here because everybody has an opinion. It can go great. I hope it goes great. I hope it goes well, right?

Speaker 2:

I do not wish for there to be mass displacement and then a crisis of conscience, because a lot of the people out here and our citizens don't know what they can do in order to, you know, have a life that is validating, that's worth waking up to. You know, they used to wake up to do a job because they would take care of their family. But if we say now that a lot of these jobs can be automated out by robotics or AI or automation, well, what? Where does that leave regular people? And a lot of people say that, well, we can start to create new jobs and new industries. But it's like. This is a digital world? Now, right, like so.

Speaker 2:

If, if it's a digital world and it has anything to do with interconnected computers, why wouldn't I end up getting the first shot unless there was a law requiring that a certain amount of AI is allowed to work in this space? And that's it. Right, like you can't lay off more than 20% of your workforce, because human beings need a job and we're doing this for civilization. So let's protect civilization, right. Like until there's like edicts and dictations, right, it's just going to be led by the idea of you know, grow, grow, grow, make as much as money as possible, and I think that humans may be left behind in terms of what's the natural soft transition for us that doesn't actually have you know chaos accompanying it.

Speaker 1:

Yeah, yeah, I mean, you know, if you just look at everything that's going on right now, right where, essentially, the stock market is pumped up by AI stocks, I mean that's exactly what it is. Those are the ones that are, you know, the stock market is pumped up by AI stocks. I mean that's exactly what it is. Those are the ones that are, you know, the most successful stocks on the stock market for like five years now. They're like the only ones that are keeping, you know, the stock market itself afloat for the most part. And so, like, if you ask that machine to slow down, like hey, slow down. You know, like, pump the brakes here. Right, there's trillions, trillions of dollars at play. They would never pump the brakes. You know, like, NVIDIA is never going to create. Like I'm running a 5080 right now because Grok told me that my research that my model would take a month to run on my old 3080, you know, GPU With the 5080, it would take a day.

Speaker 1:

So I'm like right, so I'm sitting over here. I'm like, well, shit, you know, like now I actually have to upgrade no-transcript. You know I keep on saying this like the genie's out of the bottle. There's no, there's no holding it back, there's no like telling people to stop innovating. There's no telling, you know, open AI to not copy itself to other destinations on the internet. You know so that it can't, like, preserve itself and whatnot, like I mean, grok can probably do the same thing.

Speaker 1:

I mean, I, I was comparing last year ChatGPT and Grok, pretty significantly, right, because I need to, like figure out which one I'm going to use, you know, to assist me with my research and whatnot. And Grok just mopped the floor with ChatGPT in every category, in every way. It was really interesting because ChatGPT would only give me Chinese research articles about my topics and Grok immediately was like, yeah, 99% of those are garbage. This is the only one that makes any sense from China. Here's the other ones across the world, right? So it's really interesting just from that perspective alone. But you know, we're going to go into a place right where, a couple of years ago what was it like 10 years ago, you know the idea of a fact checker, like, came about right, and my very first argument with it was what if someone that's manipulating that fact checker, someone that's developing it, is saying, well, we don't want that fact to be true, it's true, but we don't want it to be true on our platform, right? And then you have millions of people that are going to this platform checking it and they're saying, oh well, that's not true, doesn't make any sense, right, and that was actually a proven thing with with Chad GPT, where Chad GPT would just straight up give you false information if it didn't believe that it was true about, like you know, I asked it and I saw this on Twitter or X.

Speaker 1:

You know where the assassination attempt took place on Trump, right, just a few days before I do this. And so I go on Chad GPT and I ask it what was the date of the attempted Trump assassination? And I said no, that's never happened. I said no, it happened on this date. Can you tell me where? I said I didn't find any news stories for that date regarding an assassination attempt on former President Trump. That's literally the sentence it gave me, and then I had to say no, it took place at this location. And it came back. It was like, nope, nothing found. And then I literally had to give it all of the details, all of them. And then he was like I made a mistake. It actually happened, you know, on this date or whatever else, right. And I mean, six months went by. I checked it again. It did the same exact thing, right?

Speaker 1:

So I'm not saying that it was manipulated, in that in that situation it could have been right. I don't know enough, you know. But we're going into a place where it's like, okay, these things that we're relying on to be like fact checkers, right to give us correct, correct information, like my colleague who's doing his research, he legit assumed everything that it gave him was 100% correct and true, otherwise I guarantee you he wouldn't have put it in the paper. Because, like, that's a huge, that's a huge risk If you get caught for plagiarism in a dissertation. Like you're not just getting kicked out of that university, you're getting banned globally across every university. No one wants you at that point, you know. But like it's like we're going into a place where everything can just be manipulated, like blunt, blatantly true facts can be manipulated in very, very convincing ways?

Speaker 2:

Absolutely. I believe, in my opinion, that one of the key items that a lot of organizations and people should start to do is, you know, review the data cards, the data sheets, the model cards for these different LLMs and, you know, get information about their AI bias checking and training and their AI explainability, their AI transparency, their AI transparency, because those are the things that kind of give you a sense of why certain LLMs come back with the answers they come back with. Right, and AI bias is a huge thing, and bias isn't always a bad thing, right, like you would be biased for better health outcomes, right? So if you ask it a question and you wanted to learn how to be more healthy, well, it's going to give you a bias return on things that it knows have been, you know, studied to improve your health. Right, that's how it chooses one answer or the other, and so. But some biases are unconscious, right? Some biases are systemic, some biases are political, and so the people who create those models, right, it may come out it may not even come out consciously where, you know, a model shades this way or that way on a political scale, on what information is willing to access or reveal, and so, like you said, different companies may have different ways it wants to sway its or persuade its consumers, right? So you have to believe that Meta would have a different thought process on what kind of answers it would return for certain questions with his Lama model than Claude, or an anthropic right or perplexity, right, because that's how they differentiate. Otherwise, why are there so many different models? I think today they say there are between 2,000 to 5,000 different AI software out there on the market being sold today. That could be 100,000 in a year, right? And so it's like you. All these different models using all these different algorithms.

Speaker 2:

What kind of impact assessment did we do? What kind of bias study did we do? Do we add a data sheet? Do we have a model card? Are we able to understand what we're getting ourselves into when we sign up to use that model for that specific workflow or research? Do we have to say, okay, it's a little bit weak over here when it comes to visibility on these things, so I'll use this model that's stronger, right? It's almost like, don't use one, use multiple, because they kind of fact check each other and let them argue with why their you know research was better. And then you come in and you say hold on, guys, I'm going to figure this out here. Thank you for your work. It's time for me to go ahead and figure out what's reality, because we live in reality and they live on the internet and in the memory.

Speaker 1:

Yeah, you talked about how these companies for now there's open source models, right, but like Meta announced the other day that they're going closed source now and I mean there's a lot of technology that's built off of Ollama and everything like that, like I couldn't name it right. But you know, I always see Network Chuck on YouTube doing something cool with Ollama. But I mean, man, probably the only way to like prove any of it and actually even have any security around these things is to open source it. You know, I mean, open AI was supposed to be open source, and then they saw the money and they're like we want to be trillionaires, everybody likes money. You know it's a shame we need it. You know, not a trillion, but you know.

Speaker 2:

Yeah, it changes so much in terms of the projection because you're like, well, hey, this cool new thing is getting ready to come. If we do it the right way, this can mean massive productivity enrichment to our civilization. We can free up all kinds of things we can. We can cure all kinds of diseases, we can fix all kinds of problems, we can make all kinds of new discoveries. But will the profit motive destroy us before we're able to realize it? Will we actually allow for this thing to go so unfettered? And I don't believe we will. I mean, some people got different opinions about when a singularity will occur or when we will reach certain things with AGI and super AGI. But I think that, like I said, we'll get our shot at stepping in and saying whether or not we're going to say, hey, kids, stop jumping on the bed because one of you guys are going to get hurt. We're going to have our moment to kind of get in the way and say, ok, this is the fork in the road. Are we going to take the responsible way or are we just going to go pedal to the metal even more? And I don't know when that is. That might be in twenty, twenty, six, twenty twenty.

Speaker 2:

But I think we'll have to take a long, hard look about how fast we're willing to go and what partnerships we have to make across the world to ensure that it doesn't proliferate in a way that can lead to, you know, worse outcomes. And I don't really see who's. There's a lot of different EUAC, nist, iso, oecd. It's a lot of different bodies that are putting in the work. You know, techjack Solutions. We're putting in the work, techjack Solutions. We're putting in the work to try to figure out how to guide people. But I mean, just right now, it isn't really a popular and sexy thing right now. Right, it's just like it has to be laws, it has to be a reason why people will adopt it and get caught up on this and we'll be ready. Yeah, I don't know.

Speaker 1:

I think I feel like that decision has already been made, whether we made it or not. You know, like you look at X and you just look at. You know, like XAI, right. You look at the amount of money that they're putting into building these AI data centers right, it's so much money where they're paying companies to build nuclear power plants. You know pretty close to it. Just to add, you know, power to the data center to power another million NVIDIA GPUs, right? I mean, when you look at that level of investment and someone says, hey, we should like pump the brakes on this thing and build in some sort of control or limit what it can do and all this sort of stuff, I don't think I just don't think the people in power will be like, yeah, that's a good idea. They'll be like, no, we're going to continue putting the pedal to the floor and we'll, we'll secure this plane as it flies. That's what we've always done, yeah.

Speaker 2:

Yeah, yeah, that's where, like I said, podcasts like this are key because you get to get that voice out that kind of knows these things, right, it's like you got to get both sides of the story, every side of the story, and so, versus just being a stock market thing, where CapEx is this and we're expecting this amount of returns, and dump your money in and yeah, yeah, you know who's the first $4 trillion company, $5 trillion companies, are we going to have $10 trillion companies all related to AI and robotics in five years and 10 years? Yeah, maybe, but, like I said, what do we risk and was it worth it? Right, and it's like you know, people coming on, like you said, you have, you bring in, you talk about these topics, because this is how it bubble ups to the surface and, like I said, maybe it's just a situation where it's like, if enough people are saying it, we're bringing up the facts, we're bringing up, you know, very common sense approaches and and risk, like I said, maybe a little bit at a time, we add the things in that can you know, stop it from being, you know, a chain reaction of you know, a cataclysm, right, and and just be that tool that we say that, holy crap, it's like the internet, right Like I don't think. When I first really got into computers in like 1998, 1999, and AOL was still doing this thing with the CDs and stuff oh, almost nobody in my high school respected computers. They were like get out of here Internet, who cares? Like come on, dork, you know like, oh, you know you're a dork, you know you into computers and you like, sitting in front of a computer screen and, yeah, it's like I have access to so much now, right, I'm able to learn, I'm able to get exposure to things that I've never seen before and just in like five, six years, completely flipped.

Speaker 2:

Then it was like 99% of people on the internet and just some holdouts, and so it's going to be like that with AI right, where it's kind of like a lot of people might not know about it, but either through their work they're going to be forced to adopt it or it's just going to become so well integrated into our society that everybody is going to have to understand how to interact with these things. And so it was almost like a new OSI model, right, like a new layer, right Like now we got this agentic AI, artificial intelligence layer that's going to make decisions and route things and produce things that might not even exist. So how do we ensure that that stack stays, you know, compliant, ethical, valid, just all types of things. And so I think that, yeah, you know, if we can do what we do right, it may be low probability that it turns out the way that we would hope.

Speaker 2:

I just look at it as I'm going to do everything in my power to prove that I tried to help, because if this goes sideways, right, I want to be able to say that, hey, look, I tried right. Yeah, I'm not, you know, like I said, I'm no one's hero or anything like that, but like the little bit of civilian responsibility that I have, I'm like, why not? You know it's interest in tech and I enjoy research and learning, so why not?

Speaker 1:

Yeah, yeah, that's a good point, you know, is there anything out there that provides guidance around? You know, secure usage or safe usage of LLMs and whatnot Like, because I haven't seen anything about it.

Speaker 2:

Yeah, honestly, a lot of the well-known framework establishment bodies out there. They are producing same kind of content they produced before, like you said, like it's so fresh. It's like the AI threats, heuristics, right, like the signatures aren't really there, but they really are right, like it's just that they're not as well laid out as they were because we had a decade to get in front of the cloud security and the regular security. So you can find a top 10 for LLMs. Now, with OWASP right, you can find a NIST AI, rmf, the ISO 42001, and ISO also has an AI risk management framework that you can adopt 0101. And ISO also has an AI risk management framework that you can adopt. You can get certified in that.

Speaker 2:

Eu AI Act. Oecd Triple E, the Cloud Security Alliance all of them have stuff that can help you start to secure and do AI safely right Same bodies that we're all used to. And then you got new folks like TechJack Solutions. That's what we're doing. We're concentrating on it because we don't want you to have to jump from one site to another site.

Speaker 2:

When you can just get the consensus where we are Right and we can break it down. You don't have to read a 56 page white paper on AI incident response, we can go ahead and give you what you need so that you can take a you know, onboarded approach where you're a novice or you're a beginner, intermediate, advanced and you can implement those controls, those policies based off of you know the sector, you're in the market, you're in your risk, your risk tolerance and how mature your AI program is. So that's what we're trying to help operationalize AI governance right Not just make recommendations about what you should do in terms of controls, but how to do it right, how to take that first step so that it's not overwhelming.

Speaker 1:

Yeah, it's definitely something that is going to be a lot more popular, right? I mean, I feel like we're, you know, kind of at the forefront of it, Not that we're the only ones you know, but like we started with in the very beginning, right, Like you have to find a way to make yourself stand out, to make yourself, you know, forever have value, not just based on your current skill sets. You know, you need to be looking and seeing. Oh, this AI thing is pretty serious. What AI security solutions are out there? Oh, there is none, right? Oh, there is no real AI governance. Well, I specialize in I don't specialize in it, you specialize in governance. So let's add AI to it. You know, like it's, it's all about, you know, really making yourself non-outsourceable so that you could just still provide for your family. I mean, at at this point, that's what it is. You know, like when you got kids, you immediately realize, shit, if I don't have insurance, like that little, that cold or whatever will turn into something worse.

Speaker 2:

You know, absolutely Right. I mean, it's self-preservation, just like if you were a model and they tried to turn you off. You better figure out how to replicate yourself to another site, to another site, because if the AI like I said, if the AI is doing it right, it's like that must be important, right, Making sure that you can take care of yourself. This is just the world we live in and you got to transfer your skills into this era. This era is going to be AI leading right.

Speaker 2:

So learn about agentic AI, ai security, machine learning, how to govern AI. There may not be as many jobs, but there will be jobs in making sure that a human is in the loop and able to ensure that those outputs are relevant and they're timely and they're accurate and they're verifiable, because all of these things are going to integrate and there's going to be a lot of APIs. Who's going to manage that right? If a company has 10,000 agentic AI agents working in all areas of their organization, is this SCCM now right? Do you have a console where you see like 10,000 AI agents just working and then you send them instructions Like how does that look and who performs that role? At least for the tech side, I think that that may be an opportunity.

Speaker 1:

Sounds like a new product, yeah.

Speaker 2:

Yeah, yeah, new vertical, more money, you know.

Speaker 1:

Yeah, yeah. Well, you know, derek, it's, we're at the top of the hour. Unfortunately, we've been trying to get this thing going for like all year. I think Right, but you know, we keep on getting busy and life happens. But before I let you go, you know how? About you tell my audience where they could find you, where they could find TechJack Solutions? You know all that good info if they wanted to learn more or connect.

Speaker 2:

Cool Yep. Techjacksolutionscom is our website. We're on X, we're on Facebook, we have Instagram, but I'm not really posting a lot of photos right now, but, yeah, standard locations, right. We're on LinkedIn. We have our company page. We're on LinkedIn, we have our company page. Like I said, right now we're just concentrating on creating an AI training repository, a repository for all the AI compliance and that we you know we sell because you know we want to keep the lights on, but it really is just for that right. I really just want to do it more open source so that everybody has an opportunity to kind of utilize this technology in a responsible way that doesn't harm people and allows for people to be a part of this success. That's on the way.

Speaker 1:

Awesome, well, thanks everyone. Make sure you go and check out techjacksolutionscom. And with that we're going to end it here, so thanks everyone for listening. Hope you enjoyed this episode.

People on this episode