ClearTech Loop: In the Know, On the Move

AI as a Digital Co-Worker With the Experience of an Intern with Timothy Youngblood

ClearTech Research / Jo Peterson

As AI becomes embedded across security operations and business workflows, organizations are confronting a new reality. AI is no longer just a tool. It is behaving like a digital co-worker, acting on data, surfacing decisions, and influencing outcomes.

In this episode of ClearTech Loop, Jo Peterson sits down with Timothy Youngblood, a four-time Fortune 500 CSO and CISO, board member, angel investor, and adjunct professor, to explore what it really means to manage AI responsibly at scale. 

Tim introduces a powerful and practical analogy: AI as a digital co-worker with the experience of an intern. Capable, fast, and eager, but not ready to operate without oversight, guardrails, and accountability. 

The conversation looks beyond hype and focuses on leadership realities: 

• Why AI capability is advancing faster than accountability models 
• How AI agents quietly expand risk through data aggregation 
• Why governance must be operational, not policy-driven 
• How oversight enables innovation instead of slowing it down 
• What CISOs and executives must own as AI becomes embedded across the enterprise 

If you are responsible for cybersecurity, risk, or enterprise technology strategy, this episode offers a grounded way to think about AI adoption without losing control. 

👉 Subscribe to ClearTech Loop on LinkedIn:
https://www.linkedin.com/newsletters/7346174860760416256/ 

Key Quotes 

“That digital worker right now has the experience of a slightly experienced intern. And you wouldn’t let a slightly experienced intern go off on their own and start doing things without some oversight.” 
— Timothy Youngblood 

“I’ve been in this industry 30 years, and I try to make sure people can learn from all the mistakes I’ve made. And I’ve made many.” 
— Timothy Youngblood 

Three Big Ideas from This Episode 

1. AI should be treated like a junior employee, not an autonomous system 

AI can move fast, analyze data, and surface insights, but it lacks judgment and accountability. Treating AI as a digital co-worker reframes governance around supervision and responsibility. 
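To make that supervision concrete, here is a minimal sketch of a human-in-the-loop gate for agent actions. The `AgentAction` type, the action tiers, and `human_approves` are illustrative assumptions, not taken from any product discussed in the episode: low-impact tasks run on their own, while anything consequential waits for human sign-off.

```python
# Minimal sketch: a human-in-the-loop gate for a "digital intern."
# AgentAction, the action tiers, and human_approves are illustrative
# assumptions, not taken from any product mentioned in the episode.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    name: str       # e.g. "open_ticket", "disable_account"
    target: str     # the system or account acted on
    rationale: str  # the agent's stated reason, kept for audit

# Low-impact tasks the agent may do alone vs. those needing a supervisor.
AUTO_APPROVED = {"enrich_alert", "open_ticket", "add_comment"}
NEEDS_REVIEW = {"disable_account", "isolate_host", "close_ticket"}

def dispatch(action: AgentAction,
             human_approves: Callable[[AgentAction], bool]) -> str:
    if action.name in AUTO_APPROVED:
        return f"executed: {action.name} on {action.target}"
    if action.name in NEEDS_REVIEW and human_approves(action):
        return f"executed after review: {action.name} on {action.target}"
    return f"blocked pending review: {action.name} on {action.target}"
```

The shape mirrors how you would supervise an intern: a short allowlist of safe tasks, and explicit review for everything else.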

2. Guardrails must exist before AI is deployed, not after 

AI agents aggregate data and create new risk simply by combining systems. Governance applied after deployment documents exposure instead of preventing it. 
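As a hedged illustration, the sketch below flags when the fields an agent reads across systems combine into PII even though no single source is sensitive on its own; the field names and PII combinations are hypothetical stand-ins for whatever your privacy team actually defines.

```python
# Sketch: detect when an agent's combined data sources form PII,
# even though no single source does. All names are illustrative.
PII_COMBINATIONS = [
    {"customer_name", "phone_number"},
    {"customer_name", "address"},
    {"customer_name", "purchase_history"},
]

def aggregation_risk(sources: dict[str, set[str]]) -> list[set[str]]:
    """sources maps each system to the fields the agent reads from it."""
    combined = set().union(*sources.values())
    return [combo for combo in PII_COMBINATIONS if combo <= combined]

# Three individually "harmless" systems a marketing agent might read:
risky = aggregation_risk({
    "crm":      {"customer_name"},
    "telecom":  {"phone_number"},
    "shipping": {"address"},
})
# risky is non-empty: the aggregate data set is PII, so governance
# has to be settled before this agent ships, not after.
```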

3. Oversight is the control that enables sustainable innovation 

Human review, scoped access, and accountability are not blockers. They are what allow organizations to experiment with AI without creating consequences they cannot unwind. 

Episode Notes / Links 

🎧 Listen: in the player above
▶ Watch on YouTube: https://youtu.be/796uzmuBQE4
📰 Subscribe to the ClearTech Loop Newsletter:
https://www.linkedin.com/newsletters/7346174860760416256/ 

Resources Mentioned 

AI’s Newest Employee: Who Bears the Burden of Your Digital Co-Workers
https://tdan.com/ais-newest-employee-who-bears-the-burden-of-your-digital-co-workers/ 

To Fix AI Governance, Stop Building It Backwards
https://www.reworked.co/digital-workplace/to-fix-ai-governance-stop-building-it-backwards/ 

ClearTech Loop: AI, Trust, and Growth with Mike Britton
https://cleartechresearch.com/clear

Jo Peterson:

Hi, thanks so much for joining the ClearTech Loop podcast. I'm Jo Peterson, Vice President of Cloud and Security for Clarify360. Hi, Tim.

Tim:

Hi Jo. How are you?

Jo Peterson:

I'm doing great. Thank you so much for joining and making time to visit.

Tim:

Thanks for the opportunity. Yeah, I've been around the block for a while, and I think once I got inducted into the CISO Hall of Fame, I said, yeah, I guess that means it's time to end this game. There's nothing else I need to be doing.

Jo Peterson:

That was the high-water mark. You and I were just chatting before we started recording; we met on the Cloud Security Alliance's AI Security Council. So you give your time to volunteer organizations as well, to help make things better for customers and practitioners. It's nice of you.

Tim:

Thank you. Yeah, I've been in this industry 30 years, and I love the opportunity to give back when I can. I also mentor a few current operational CISOs and just try to make sure people can learn from all the mistakes I've made. And I've made many in my life.

Jo Peterson:

You approach technology with the idea that you're going to get some scars and you're going to have to learn through the fire sometimes, right?

Tim:

Yep.

Jo Peterson:

I think so. In case you haven't had a chance to visit with us on the podcast before: we take a hot-take approach to podcasting, focused on cybersecurity, cloud security, and AI security, and Lord knows there's enough going on in those spaces. Each weekly episode we ask our guests three focused questions, because we want to quickly educate our listeners about the security landscape, both risks and opportunities, in real time, with an of-the-moment approach to what's going on. So with that said, I'm going to ask Tim three questions in short order, and we're going to find out his thinking on some of these topics. Let me dig in with the first one. Tim, how can cybersecurity professionals leverage generative AI to break out of that traditional tool-and-tech mindset and drive more innovative thinking and execution in their security programs?

Tim:

Yeah, you know, I look at this as old wine in new bottles, so to speak. The traditional tools in the cybersecurity stack for most organizations are still needed: you still need firewalls, you still need EDR, you still need SIEMs. Those things are not going away, at least at this point. I think AI is taking those tools and making them better, and making the people who use them better. That's the power of it, because AI gives people the ability to get answers at a much faster rate than they could before.

It's really interesting to me to see how this evolution has taken shape. We've gone from an era of static generative AI, where it was about prompts, pointing AI at specific data sets to learn from, and then trying to get insights by asking the right types of questions, into the era of copilots, where it's more of a collaborative tool with certain access and processes. You might program a copilot to go analyze security log data, or start an incident investigation, or provide some enhanced reporting when certain actions take place. It combines first-gen AI with some of the automation technology we had in the past; we've all used RPA to some degree. It brings those two together, again with the purpose of making humans more efficient and effective at what they do.

And I've been really amazed that we're now entering the era of AI agents. I was at RSA this year, and I literally could not turn my head without hearing the words "AI agent" everywhere I went. It is really the hottest buzzword. It's amazing to me, because I started doing a little more research around that evolution in the last half of last year, and at that point I didn't think we'd see widespread, super useful use cases until 2026. There were very innovative companies, like in financial services, that already had some form of this going on, but it's happening right now, and it's just incredible: the pace of going from the copilot era to AI agents, and in this era, AI agents are taking actual actions without human instruction. It's a fascinating thing, and a little bit of a scary thing.

In the cybersecurity space, there are a lot of use cases where you're seeing people create agents to take over all the responsibility of a level-one SOC analyst. In fact, I won't name the company, but there's a startup I've talked to a few times where that is the service they provide: we'll basically be a digital worker on your cybersecurity team, and we will handle all level-one incidents. There's a level-one incident? This is your new team member. It will sit there and analyze the logs just like a regular analyst would, and when it sees an anomaly, it will identify the pattern of that anomaly and raise a ticket so it goes up to a level-two analyst to assess. That's happening right now. So I think it's up to cybersecurity teams to assess how well that's going to work within their organizations. And there's some debate on whether it's really replacing people or continuing to enhance them. We've gone from copilot to AI agent; do you still need people to be there? My immediate answer is yes. Right now, I don't think the AI agent is mature enough or consistent enough to be trusted to do it all without any oversight, to confirm the consistency you're getting, and to continuously train up that digital worker. Because that digital worker right now has the experience of a slightly experienced intern, and you wouldn't let a slightly experienced intern go off on their own and start doing things without some oversight. That's the same case with AI.

Jo Peterson:

That's a great analogy, and it drops me right into the next question, so thank you for taking me there. There's this balance: how is my organization going to embed security and privacy controls into AI model development without slowing down innovation? And it can be more than AI model development; it can be AI use by the employee pool. So it can be anything, right? How do we do that? How do we allow employees to innovate but also keep it secure?

Tim:

Yeah, that's a good question, because I think it's important to allow some freedom of experimentation, but also to have guardrails, to understand when things may not be happening as you expect. I'll give you a great example from when I was the CISO at McDonald's. McDonald's is very progressive; in fact, they've progressed all the way to the stage where they've already deployed AI agents down to the restaurant level to help with customer transactions. But when I was the CISO there, they were still somewhat young in their digital transformation, and they had acquired a company to help them with suggestive selling. It was incredible to see how they could analyze the data of people selecting menu items and then take that data and say, hey, you liked this food item; I think it'd be great if you added a sundae to that, with a high likelihood that, given that person's eating habits, it would make sense for them. The challenge became, and it was something I brought up as the CISO: okay, have we really thought through the ethical guardrails of that? You're analyzing people's eating habits and then suggesting back to them what they should have next, and not every ethnic group eats the same type of food. So if you keep providing the same type of offer to one customer, another customer might say, well, I'd love to have that, but it never gives me that offer. That seems a little bit off. You've got to have some guardrails for that.

That's just an example of why it's important to think it through. When you're thinking about your AI program, think about it holistically. Think about the ethical guardrails. Think about the compliance issues that may evolve out of it. It's very similar to a challenge we had two decades ago trying to get PCI off the ground: the issue of connected systems. People struggled with that quite a bit, because anything connected to cardholder data was part of the cardholder environment and had to have all the same controls. Well, institute something like AI agents and that's basically what you're doing. You're allowing some piece of code to go in and say, okay, I'm going to connect and get this transaction. Well, that transaction is part of the cardholder data. Guess what? Now that AI agent is also part of the cardholder environment, and anything it connects to is part of it too.

So you've got to give some thought to: what is my new compliance footprint in allowing this type of activity to happen, and what kind of controls do I need to put in place to make sure nothing happens that puts us out of compliance, or creates some security issue we hadn't thought through completely? I've seen this happen with some early deployments of AI. I recall when some firms on Wall Street were deploying AI models and early versions of AI agents, and they had them executing stock trades at high speed and volume. As they were doing that, they were creating red flags; the SEC was saying, you're making too many trades too quickly, this looks like fraudulent behavior. And it's like, okay, that's not what we meant to happen, and we didn't think through what that outcome could be. So we absolutely have to create a platform for people to experiment and learn, but also put in those guardrails of, thou shalt not ever have this happen, and make sure we enforce them.
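Tim's PCI point lends itself to a small worked example. Compliance scope spreads transitively through connectivity, so an agent that touches one in-scope system pulls everything else it touches into scope. A minimal sketch with made-up system names, not a PCI assessment tool:

```python
# Sketch of transitive compliance scope: if an AI agent connects to a
# system in the cardholder data environment (CDE), the agent and
# everything it touches join the footprint. System names are made up.
from collections import deque

CONNECTIONS = {
    "ai_agent": {"payments_db", "report_store"},
    "report_store": {"bi_dashboard"},
}

def compliance_scope(seed: str, edges: dict[str, set[str]]) -> set[str]:
    # Treat connectivity as undirected: data exposure flows both ways.
    undirected: dict[str, set[str]] = {}
    for a, neighbors in edges.items():
        for b in neighbors:
            undirected.setdefault(a, set()).add(b)
            undirected.setdefault(b, set()).add(a)
    scope, queue = {seed}, deque([seed])
    while queue:
        for nxt in undirected.get(queue.popleft(), set()):
            if nxt not in scope:
                scope.add(nxt)
                queue.append(nxt)
    return scope

# Seeding with the in-scope payments database pulls in the agent,
# its report store, and the dashboard downstream of it:
# compliance_scope("payments_db", CONNECTIONS)
# -> {"payments_db", "ai_agent", "report_store", "bi_dashboard"}
```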

Jo Peterson:

And that's why I'm encouraged by the evolution of MCP. I really think it might be a gatekeeper, especially with the authorization in the 2.0 version, where it's tokenized. It kind of puts in place the things you were talking about, so it'll be interesting to see how that evolves. Last question: how should the CISO be thinking about AI adoption in terms of its use to defend against emerging threats, and then the other side of that coin, from an organizational governance perspective?

Tim:

Yeah, that's one of the toughest things to keep up with. In fact, I was just on a call with a CISO and we were talking about a particular breach that had occurred two months ago, and he immediately said, whoa, I didn't hear about that. And I said, yeah, there was a CISA release on it; that's where I reviewed it. CISA releases come out, but here's the challenge: you're a human; you may have been on vacation when that occurred. Who knows? So this is one of those things where you can leverage AI to search and assess threat intel feeds for you and determine which ones apply to your industry and to your company specifically, so the important ones pop out. In this day and age there's so much threat data coming in that I think it's almost impossible for a human to keep up without some type of high-end tool. It's a perfect example of: how do I stay up on emerging threats? Here's a tool that can help me analyze data faster and make me more efficient at doing that.

I also think it's important to establish that governance across what's used throughout all the organizational departments of the company, because everybody's experimenting with this in some way, shape, or form, and sometimes they don't necessarily think about the aggregation issue that can arise from what they're doing. What I mean by that is: let's say marketing is trying to get insights, and some of those insights live in sales, and some may be part of procurement. So they create an AI model and put an agent wrapper around it that's going to collect data from all these different parts. Well, when you do that, you've just created a new data set, and that new data set may have compliance issues. If you took a customer name and a customer address, and then you put the product that customer bought together into one database, now you've got something that could be considered PII. I've seen that happen in the telecom space: numbers tied to phones, in and of themselves, are not PII, but as soon as you tie an address and a customer name to them, they absolutely are PII, and that's a completely different governance model. So I think it's important that, first, security is at the forefront of the development life cycle for all of this. Unfortunately, I'm seeing more and more scenarios where security is still an afterthought, because people are moving so fast that they just want to get something out. And I get it, because there's been a ton of spend in the space, and you've got the C-suite and the board saying, okay, where are my results? But the faster you move, the faster you also hand risk back to the company.

It's one of the things where I think CISOs have got to be a little more forceful: I need to know what you're doing and where you're at, and then I will take the responsibility and make sure I have people assigned to help you put the right kind of guardrails in so those consequences don't occur. It's a difficult thing, because you want to move fast and you want to enable the company, but at the same time you know you're going to have to slow them down in order to get good results. Ultimately, that's what everybody wants.
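A minimal sketch of the threat-intel triage Tim describes: score incoming feed items against a company profile so only the relevant ones surface to a human. A real deployment would more likely use an LLM or a trained classifier; this keyword-overlap version, with an invented COMPANY_PROFILE, just shows the shape of the filter.

```python
# Sketch: filter threat intel feeds down to what applies to your
# industry and stack. COMPANY_PROFILE and the tag names are invented;
# a production pipeline would likely score items with an LLM instead.
COMPANY_PROFILE = {
    "industry": {"retail", "payments"},
    "stack": {"azure", "kubernetes", "sap"},
}

def relevance(item: dict) -> int:
    """Each feed item carries 'industries' and 'technologies' tag sets."""
    score = 2 * len(item.get("industries", set()) & COMPANY_PROFILE["industry"])
    score += len(item.get("technologies", set()) & COMPANY_PROFILE["stack"])
    return score

def triage(feed: list[dict], threshold: int = 2) -> list[dict]:
    # Surface only items above the threshold, most relevant first,
    # so an analyst (or a vacationing CISO) sees what matters.
    hits = [item for item in feed if relevance(item) >= threshold]
    return sorted(hits, key=relevance, reverse=True)
```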

Jo Peterson:

That's fair, and a good, thoughtful answer. Thank you for that. And with that, we'll wrap up the podcast. Thank you for making time to visit today.

Tim:

Thank you, Jo. Anytime you need me, just let me know.

Jo Peterson:

Nice. Okay, bye, guys.