AI+Automation Systems for NonProfits & SMBs

From Tools To Digital Coworkers In The Enterprise

Growth Right Solutions, LLC


AI agents stop acting like passive tools and start operating as an autonomous workforce inside real companies, changing how work gets done and how software gets sold. We break down the productivity upside alongside the human stress, security blind spots, and the looming risk of losing entry-level tradecraft. 
• the shift from reactive software to autonomous digital workers 
• what “agentic AI” means beyond chatbots and prompts 
• real examples from marketing and customer service agents 
• why data context becomes the biggest performance bottleneck 
• how AI agents break per-seat software pricing and drive outcome-based models 
• the psychological paradox of higher efficiency and higher stress 
• job crafting as a practical way to make agents feel like assets 
• shadow AI agents as a new security category with exponential blast radius 
• least-privilege failures through legacy service accounts 
• governance frameworks using oversight agents and automated hard stops 
• the impact on tradecraft when juniors lose the “training work” 
If you want to see exactly how this works for your specific business, complete the Find Your Fix form on their website. Just go to growthright.solutions. 


Nonprofits and businesses plan to automate at least 30% of all processes in 2026. What is your plan? Who will be leading this effort?

Tools Become Digital Coworkers

SPEAKER_00

You know, if you really think about the entire history of technology in the workplace, there's basically always been this hard, undeniable boundary between the tool and the worker.

SPEAKER_01

Oh, absolutely.

SPEAKER_00

Like you buy a drill, you hold the drill, you pull the trigger, and, you know, you make the hole. The tool waits for you. Right. Even if you buy some massive multimillion-dollar enterprise software package, you still have to open the dashboard, you have to type in the data. The tool is always just waiting for you to tell it what to do.

SPEAKER_01

Which is, I mean, that's the definition of reactive technology. The human has always been the engine. And the software was really just the transmission, right? Just turning human effort into a result.

SPEAKER_00

Exactly. But look at the calendar. Today is May 4, 2026. And if you look at what is actually happening inside enterprise networks right now, that boundary hasn't just blurred. It's just gone.

SPEAKER_01

It really is.

SPEAKER_00

We are living through this structural shift where AI has fully transitioned from a passive tool you use to an active autonomous workforce that works right alongside you. So welcome to this custom tailored deep dive.

SPEAKER_01

It's great to be here.

SPEAKER_00

We've compiled a massive stack of intelligence for you today. We're pulling from Anthropic's brand new State of AI agents report, uh, Oracle's customer experience framework, some pretty serious threat warnings from the Cloud Security Alliance, and even a few deep psychological journals.

SPEAKER_01

It's a lot of ground to cover.

SPEAKER_00

It is. But our mission today is to figure out exactly what these autonomous AI agents are doing right now, how they are actively breaking the economics of the software industry, and, you know, the extreme psychological toll this is taking on human workers.

SPEAKER_01

Not to mention the hidden security risks that are probably sitting on your company's servers right now.

SPEAKER_00

Oh, for sure. We're gonna get into all that. And we're really taking an outside-in perspective today for you, the listener, to illustrate how agentic AI and this whole automation wave is actually making life better for SMBs that adopt it.

SPEAKER_01

Yeah, if they have the right expertise, of course.

SPEAKER_00

Right, which is available from places like Growth Right Solutions. In fact, if you want to see exactly how this works for your specific business, you should really go complete the Find Your Fix form on their website.

SPEAKER_01

Highly recommend that.

SPEAKER_00

You can find it at HTTPW. Oh, wait, that's a typo in my notes. Just go to growthright dot solutions. Completing that brief form gives you a really detailed business analysis and a customized recommendation to solve your main operational problem.

SPEAKER_01

It's totally free, right?

SPEAKER_00

Totally free. No obligation and absolutely no sales pressure. That's really just not their style. So uh check that out. But let's jump into the data.

SPEAKER_01

Yeah, let's do it because this really is the critical conversation of the year. The data shows this incredibly clear inflection point. We are no longer just like talking to AI.

SPEAKER_00

Right, the chatbot phase.

SPEAKER_01

Exactly. The era of using a large language model as a superpowered search engine or, you know, a clever brainstorming buddy that is largely over for enterprises. We are now deploying AI to orchestrate and execute end-to-end workflows. And that changes the very structure of how a company operates and crucially, how it governs its own actions.

SPEAKER_00

Okay, so let's start by defining our terms here, because the word agent gets thrown around constantly.

SPEAKER_01

It really does.

SPEAKER_00

We need to move way past that 2023 idea of a single-prompt chatbot. If I'm trying to wrap my head around this, it feels like we've upgraded from a really smart spell checker that just waits for me to finish typing to an autonomous ghostwriter who researches the market, drafts the whole book, emails the publisher, and, like, negotiates the advance for me.

SPEAKER_01

That is a highly accurate way to look at it, actually. A chatbot is conversational. It relies on continuous human prompting. But an agent is directive, it's action-oriented. Right. In fact, research from Forrester emphasizes that enterprise software development has completely pivoted. They're shifting away from user-centric design, meaning, you know, screens built for human eyeballs.

SPEAKER_00

Yeah, the classic user interface.

SPEAKER_01

Right. They are shifting to worker and process-centric design. Software is literally now being built specifically to be navigated by digital workers.

SPEAKER_00

So meaning the primary user of a new software platform might not even be a human being.

Real Enterprise Agent Use Cases

SPEAKER_01

Exactly. Look at the new telemetry data from Anthropic. They analyzed over three and a half million enterprise interactions and they surveyed hundreds of tech leaders.

SPEAKER_00

That's a huge data set.

SPEAKER_01

Massive. And they found that 57% of businesses are currently deploying agents that perform multi-step complex workflows.

SPEAKER_00

Wow. Over half.

SPEAKER_01

Yeah. And the real indicator is how these systems communicate. 77% of enterprise API usage, which is, you know, the way different software programs talk to each other, is now directive.

SPEAKER_00

Wait, directive meaning.

SPEAKER_01

It's automation, not augmentation. Businesses are just completely handing off the steering wheel for specific tasks.

SPEAKER_00

Okay, let's ground this a bit because I want to look at a specific example from the stack we pulled from Oracle's customer experience suite.

SPEAKER_01

Oh, the campaign planning agent? Good example.

SPEAKER_00

Yeah, exactly. This isn't a tool that just suggests three catchy subject lines for an email blast. This agent actually runs propensity models to predict which specific customers are about to churn. And then it runs a lifetime value analysis on those accounts. It scores their purchasing frequency, cross-references internal inventory, and then autonomously launches a targeted marketing campaign to keep them.
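The orchestration just described, propensity scoring, then lifetime-value analysis, then an inventory cross-reference, then a campaign launch, can be sketched roughly like this. To be clear, the field names, scoring rules, and thresholds below are invented for illustration; this is not Oracle's actual API.

```python
# Toy sketch of the agent pipeline described above. All names and numbers
# are hypothetical; each function stands in for a model or system call.
def churn_risk(customer: dict) -> float:
    # stand-in propensity model: fewer recent purchases means higher risk
    return max(0.0, 1.0 - customer["purchases_last_90d"] / 10)

def lifetime_value(customer: dict) -> float:
    # crude LTV estimate over an assumed three-year horizon
    return customer["avg_order_value"] * customer["orders_per_year"] * 3

def plan_campaign(customers: list, in_stock: set) -> list:
    """Target high-LTV, high-churn-risk customers with offers we can fulfil."""
    return [
        {"customer": c["id"],
         "offer": next(i for i in c["interests"] if i in in_stock)}
        for c in customers
        if churn_risk(c) > 0.6
        and lifetime_value(c) > 1000
        and any(i in in_stock for i in c["interests"])
    ]
```

A real deployment would swap each function for a model or system call; the shape of the orchestration, score, filter, cross-reference, act, is the point.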

SPEAKER_01

And it does all of that without a human hitting send.

SPEAKER_00

Right. Or look at customer service. Say a customer's solar power output drops suddenly. The Oracle service agent detects the drop, runs guided troubleshooting, and identifies a specific error code for a faulty inverter.

SPEAKER_01

Just pulling from the technical manuals automatically.

SPEAKER_00

Exactly. Then it realizes that specific part isn't available locally to meet the service level agreement, and it automatically creates an escalated action plan.

SPEAKER_01

Notice what the agent is doing in that solar example, though. It is executing the cognitive reasoning and the contingency planning of a human logistics manager.

SPEAKER_00

Yeah, it's making judgment calls.

Context Becomes The New Bottleneck

SPEAKER_01

Exactly. But what is truly shifting here is where the bottleneck of productivity sits. Two years ago, the constraint was the AI's capability, or just the sheer cost of the computing power needed to run the model.

SPEAKER_00

Right. Token costs were crazy.

SPEAKER_01

Exactly. But today, the primary bottleneck is an organization's data context.

SPEAKER_00

Context. Okay, you mean like how well the AI actually understands the messy, undocumented reality of a specific company.

SPEAKER_01

Yes. Because complex multi-step tasks require disproportionately more context to execute safely.

SPEAKER_00

Makes sense.

SPEAKER_01

If an agent is making decisions, it needs to know your company's historical decisions, your proprietary business logic, uh, your supply chain quirks. Anthropic's economic index actually quantified this relationship. Every 1% increase in input context length, which is the amount of relevant background data the AI can hold in its memory, yields a 0.38% increase in output quality.
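As a rough back-of-the-envelope illustration of that elasticity: the 0.38% figure is the one cited above, but the compounding model and the sample numbers below are our own assumption, not Anthropic's methodology.

```python
# Hypothetical model: compound a 0.38% quality gain per 1% of extra context.
# The elasticity is the cited figure; everything else here is illustrative.
def projected_quality(base_quality: float, context_growth_pct: float,
                      elasticity: float = 0.38) -> float:
    """Estimate output quality after growing usable context by the given %."""
    return base_quality * (1 + elasticity / 100) ** context_growth_pct

# Doubling the usable context (+100%) lifts a 0.50 quality score to ~0.73,
# which is why cleaning and indexing internal data pays off so directly.
```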

SPEAKER_00

So wait. That means if your company's internal data is a total disaster, like I don't know, a bunch of poorly named PDFs scattered across five different SharePoint drives and some legacy databases, your multi-million dollar AI agent is basically flying blind.

SPEAKER_01

You've hit the nail on the head. You can buy access to the smartest foundation model on earth, but if it cannot securely index and comprehend your specific proprietary data, it cannot orchestrate the work. Oh wow. It's like hiring a genius CEO but locking them in a dark room with no internet and asking them to run the company.

SPEAKER_00

Right. They can't do anything.

SPEAKER_01

Yeah.

Seat Pricing Breaks In Software

SPEAKER_00

Which naturally leads us to the economic earthquake happening in the tech sector right now. The SaaS-pocalypse. Yes. If these agents actually do have the context and they are orchestrating the work across all these different previously siloed systems, they're actively breaking the way software has been bought and sold for the last two decades. The analysts at InvestX are throwing around that term, SaaS-pocalypse.

SPEAKER_01

Aaron Powell I mean it's a theatrical term, sure, but the underlying mechanics are very real. Historically, software monetization was driven by seats.

SPEAKER_00

Right, per user licenses.

SPEAKER_01

Exactly. You bought a CRM license for every single sales rep. You bought an ERP license for every accountant.

SPEAKER_00

But if an AI agent is doing the data entry and cross-referencing work of 10 sales reps, the company doesn't need those 10 seats anymore.

SPEAKER_01

Right. The traditional revenue model of charging per human user just collapses when the user isn't human. So we are seeing this rapid shift in pricing toward usage, task, and outcome-based models.

SPEAKER_00

So paying for results.

SPEAKER_01

Yeah. You pay for the value of the completed task, not for the privilege of logging into a dashboard.

SPEAKER_00

And the ROI numbers we're seeing in these reports are just staggering. I mean, eSentire, this massive cybersecurity firm, is using agents to compress their threat analysis workflows from five hours down to seven minutes.

SPEAKER_01

It's incredible efficiency.

SPEAKER_00

And then there is a web development platform called Lovable that's reportedly shipping code 20 times faster than manual human writing. And Thomson Reuters is synthesizing 150 years of legal precedent to deliver client analysis in minutes.

SPEAKER_01

Those legal workflows are a perfect use case for agents.

SPEAKER_00

Wait, I do have to push back on one of those though. Shipping code 20 times faster. I've worked with developers. If you just write code 20 times faster, you usually just break the entire code base 20 times faster.

SPEAKER_01

That is a very fair point.

SPEAKER_00

So how is an agent actually doing that without causing a total system collapse?

SPEAKER_01

That's a great distinction to make.

SPEAKER_00

Oh, I see.

SPEAKER_01

It drafts the code while simultaneously running it against a simulated testing environment. So it's debugging its own errors in milliseconds before committing the final version. It collapses that entire iterative loop.
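That collapsed iterative loop can be sketched in a few lines. This is a minimal toy, not any vendor's implementation: `propose_fix` stands in for the model's revision step, and the "simulated testing environment" is a single hard-coded assertion.

```python
# Minimal sketch of the draft-and-verify loop described above.
def run_tests(code: str) -> bool:
    """Run the candidate in an isolated namespace: the 'simulated environment'."""
    ns = {}
    try:
        exec(code, ns)
        return ns["add"](2, 3) == 5
    except Exception:
        return False

def propose_fix(code: str) -> str:
    # stand-in: a real agent would ask the model for a fresh revision
    return code.replace("a - b", "a + b")

def agent_loop(draft: str, max_iters: int = 3) -> str:
    for _ in range(max_iters):
        if run_tests(draft):
            return draft        # commit only once the tests pass
        draft = propose_fix(draft)
    raise RuntimeError("did not converge within budget")
```

Calling `agent_loop("def add(a, b):\n    return a - b")` repairs the buggy draft before anything is committed, which is why the speed-up doesn't simply break the code base faster.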

SPEAKER_00

Okay, that makes sense. So if I'm a buyer and I'm now paying for this hyperfast outcome instead of buying software seats for my employees, what happens to the legacy software companies? Aren't they just watching their main revenue streams evaporate overnight?

SPEAKER_01

It's not a wholesale extinction event, but rather a massive abstraction and rebundling.

SPEAKER_00

Rebundling.

SPEAKER_01

Right. Legacy software vendors that basically just sold empty digital interfaces for humans to click around in, they are in deep trouble. But the systems of record, the underlying databases, those are still crucial. What's happening is the rise of the meta-SaaS layer.

SPEAKER_00

Meta-SaaS layer. Okay, translation, please.

SPEAKER_01

Think of platforms like OpenAI's Frontier or Anthropic's Agent Skills. They position themselves as the universal orchestration layer. They sit on top of your existing CRM, your HR software, and your analytics tools. They unify those fragmented data silos into a single reasoning surface. I see. So the legacy software still holds the data in the basement, but the human never goes down to the basement anymore. They just talk to the meta-SaaS layer on the main floor and the agent goes down to do the heavy lifting.

Stress And Job Crafting With Agents

SPEAKER_00

Okay, so if the meta-SaaS layer is doing all the heavy lifting, logging into the database, running the analysis, drafting the report, what is happening to the human?

SPEAKER_01

That is the million-dollar question.

SPEAKER_00

Right. Because that brings us to the psychological element of this deep dive. If the software is handling the complex workflows in seven minutes, what happens to the human brain when we are forced to share a cubicle with a tireless, hyper-efficient digital coworker?

SPEAKER_01

The psychological literature on this is fascinating. Researchers in journals like the IJFMR have been tracking a phenomenon they've labeled STARA.

SPEAKER_00

STARA, S-T-A-R-A.

SPEAKER_01

Yes. It stands for Smart Technology, Artificial Intelligence, Robotics, and Algorithms. And what they are documenting is this profound psychological paradox.

SPEAKER_00

Which is.

SPEAKER_01

On one hand, AI undeniably improves the objective quality of a job. It eliminates administrative drudgery, it reduces human error, and it speeds up the workflow.

SPEAKER_00

But there is a catch.

SPEAKER_01

A massive one. The constant daily necessity to adapt to these rapidly evolving systems, combined with a persistent low-level fear of being outpaced or replaced by a machine, it leads to incredibly high levels of stress.

SPEAKER_00

That sounds exhausting.

SPEAKER_01

It is. Workers are experiencing diminished job satisfaction, not because the work is harder, but because the baseline expectation for efficiency has skyrocketed. If your agent can do a five-hour task in seven minutes, your boss now expects you to do 40 of those tasks a day.

SPEAKER_00

Wow. So it's the difference between feeling like you just got handed a brilliant, tireless personal assistant versus the dread of feeling like you're actively training your own replacement while your quota gets tripled.

SPEAKER_01

That is the exact tension. However, the data from the ground shows there is a successful way through this.

SPEAKER_00

Okay, good. Some good news.

SPEAKER_01

Yeah. Anthropic surveyed over 500 tech leaders and found that after successfully deploying AI agents, 66% of employees report spending significantly more time on strategic and creative work.

SPEAKER_00

That's huge.

SPEAKER_01

60% are spending more time on human relationship building, and 70% are actively learning new skills.

SPEAKER_00

Let's look at the Backbase example from the banking sector for this.

SPEAKER_01

Yeah.

SPEAKER_00

Historically, a bank relationship manager spent hours doing what I would call administrative archaeology.

SPEAKER_01

That's a good phrase for it.

SPEAKER_00

Right. Just digging through internal service histories, checking external market news, and manually typing notes into a CRM just to prep for one client call.

SPEAKER_01

Which is a terrible use of a highly paid human's time.

SPEAKER_00

Exactly. But with an agentic architecture, the system autonomously ingests all that market news, flags anomalies in the client's account, and generates a comprehensive briefing before the phone even rings.

SPEAKER_01

And it doesn't stop there.

SPEAKER_00

No. During the meeting, the agent listens, captures the outcomes, creates the follow-up tasks, and updates the CRM. The banker types literally nothing.

SPEAKER_01

The Backbase report summarizes it perfectly. They say, stop paying people to act like robots. Pay them to build relationships.

SPEAKER_00

I love that.

SPEAKER_01

In organizational psychology, this is known as job crafting. It is the process where workers actively leverage automated systems to offload their cognitive burdens. And that allows them to elevate their own roles into higher order strategic thinking.

SPEAKER_00

So they're designing their own jobs.

SPEAKER_01

Exactly. And the research from the Rest Publisher Journal shows that organizations that treat AI as a human-centric tool for job crafting, rather than just a ruthless cost-cutting measure, see significantly lower employee stress levels. The workforce views the agent as an asset, not an adversary.

SPEAKER_00

I love the idea of job crafting. It sounds incredibly empowering. But, and there's always a but, empowering thousands of employees to just spin up their own autonomous digital assistants introduces a terrifying organizational blind spot.

SPEAKER_01

Oh, yes, it does.

SPEAKER_00

I mean, who is watching the agents? Because an agent taking actions on your behalf across company servers sounds like a security nightmare.

SPEAKER_01

It is. And this is where the Cloud Security Alliance, the CSA, has issued a very stark warning about shadow AI agents. In the IT world, we've always dealt with shadow IT, right?

SPEAKER_00

Yeah, like employees using an unapproved file sharing app to get their jobs done faster.

SPEAKER_01

Exactly. But a hidden spreadsheet macro that crashes is just annoying. An unapproved AI agent connected to internal orchestration systems and live business workflows, that feels like a totally different category of risk.

SPEAKER_00

The blast radius is just so much bigger.

SPEAKER_01

The CSA actually uses that exact term. They note that it expands the blast radius exponentially, because an agent, by definition, has agency. It takes actions. Right. If you have an unknown agent running in your environment, it completely circumvents the foundation of enterprise security, which is the least-privilege model.

SPEAKER_00

Okay, let me pause you for a quick translation. What does circumventing the least-privilege model actually look like in practice?

SPEAKER_01

Think of the least-privilege model like a hotel keycard system.

SPEAKER_00

Okay.

SPEAKER_01

The person working the front desk only gets a keycard that opens the front desk area. They shouldn't have the master key that opens the vault.

SPEAKER_00

Right, that makes sense.

SPEAKER_01

But let's say that front desk employee builds a helpful little AI agent to speed up their workflow. And to make it work, they tie it to a legacy software service account.

SPEAKER_00

Oh no.

SPEAKER_01

Suddenly, that fast invisible agent inherits the permissions of the service account, and it turns out that account has legacy master key access to highly sensitive customer financial data.
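The failure mode in that hotel-keycard story reduces to a few lines. The account and scope names below are invented for illustration; the point is that the agent's effective permissions are whatever the credential it runs under can do, not what its task actually needs.

```python
# Hypothetical sketch of credential inheritance defeating least privilege.
# Account names and permission scopes are invented, not from the CSA report.
SERVICE_ACCOUNTS = {
    # legacy service account whose access was never trimmed back
    "frontdesk-legacy-svc": {"front_desk", "vault", "customer_financials"},
}

def effective_permissions(runs_as: str) -> set:
    # Nothing intersects these grants with the task's needs;
    # the agent simply inherits everything the account holds.
    return SERVICE_ACCOUNTS[runs_as]

task_needs = {"front_desk"}
excess = effective_permissions("frontdesk-legacy-svc") - task_needs
# `excess` is the invisible 'master key' access nobody knowingly granted
```

A least-privilege design would mint the agent a scoped credential (`task_needs` only) instead of reusing the service account, which makes `excess` empty by construction.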

SPEAKER_00

So you have an invisible, unapproved digital worker running around with a master key, and the IT department doesn't even know it's in the building.

SPEAKER_01

Exactly. And it gets worse. It severely complicates incident response.

SPEAKER_00

How so?

SPEAKER_01

If a workflow goes haywire and data starts moving unexpectedly, security teams are looking for a human culprit or a known malware signature. They aren't looking for a helpful marketing agent that misunderstood a prompt. Wow.

SPEAKER_00

Yeah, they wouldn't even know to look for it.

SPEAKER_01

And then there are the life cycle risks. If an agent is never formally onboarded by the IT department, it is never formally retired, it just runs in the background, consuming compute power and holding on to permissions forever.

SPEAKER_00

So, nightmare scenario.

SPEAKER_01

Yeah.

SPEAKER_00

An employee quietly sets up an agent to run a complex multi-step marketing campaign, but that agent hallucinates a connection between two different internal platforms while everyone is sleeping.

SPEAKER_01

This is exactly what keeps CISOs awake at night.

SPEAKER_00

It starts sending out incorrect pricing to thousands of clients, or worse, autonomously deleting customer records because it thinks they're duplicates. If the agent operates at machine speed, how does a security team even catch it before the damage is permanent?

SPEAKER_01

This is the core dilemma of agentic security right now. The CSA explicitly states that in the age of agentic AI, visibility is not a side concern or a nice to have. Visibility is the absolute prerequisite for governance.

SPEAKER_00

Because you can't govern what you can't see.

SPEAKER_01

You simply cannot govern an agent if you do not know it exists.

Governing Agents With Oversight Agents

SPEAKER_00

But how do you get that visibility? If you require human IT to approve every single microaction an agent takes, you totally suffocate the speed and innovation that made the agent valuable in the first place.

SPEAKER_01

You have to fight automation with automation. We look to frameworks like the one published by the IMDA, the model AI governance framework for agentic AI.

SPEAKER_00

Okay, what do they recommend?

SPEAKER_01

The guiding principle is maintaining meaningful human accountability. But since a human can't monitor machine speed actions, you need systemic governance. This means deploying anomaly detection where AI agents are designed specifically to monitor other AI agents in real time.

SPEAKER_00

Wow. So we are literally creating AI internal affairs departments.

SPEAKER_01

That's exactly what it is. You have an oversight agent evaluating the intent and the scope of the API calls being made by the primary working agent.

SPEAKER_00

That is wild.

SPEAKER_01

And crucially, organizations must define hard interventions. If the oversight agent flags a low-priority anomaly, maybe the marketing agent is formatting emails weirdly, it just logs it for a human to review the next morning. But for a high-priority anomaly, like an agent attempting a massive, uncharacteristic data deletion or a weird financial transfer, the framework demands an automated hard stop. The system must sever the agent's access instantly and halt execution until a human steps in.
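That tiered-intervention policy can be sketched as a tiny decision function. The action fields and risk categories below are hypothetical, not taken from the IMDA framework text.

```python
# Sketch of the tiered intervention policy just described (names invented).
HIGH_RISK = {"mass_delete", "financial_transfer"}

def oversee(action: dict) -> str:
    """Oversight agent's verdict on one action taken by the working agent."""
    if action["type"] in HIGH_RISK and action.get("uncharacteristic"):
        return "HARD_STOP"       # sever access, halt until a human steps in
    if action.get("anomalous"):
        return "LOG_FOR_REVIEW"  # low priority: queue for the morning review
    return "ALLOW"
```

The key design choice is that the hard stop is automatic: by the time a human is paged, the agent's access has already been severed, so machine-speed damage stops at machine speed.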

SPEAKER_00

Okay. We've covered a tremendous amount of ground today. We've gone from the shifting economics of the software industry to the psychology of sharing a cubicle with a machine to the cybersecurity reality of AI internal affairs.

SPEAKER_01

It's a whole new world.

The Tradecraft Risk For Juniors

SPEAKER_00

It really is. Let's pull this all together. If there is one core message for you, the listener, to take away from this deep dive, it is this. 2026 is the year your business must shift from merely collaborating with AI to actively delegating to it.

SPEAKER_01

Absolutely.

SPEAKER_00

We have crossed the Rubicon from augmentation to full automation. And the primary bottleneck keeping you from that hyper-efficient future isn't whether the technology is smart enough. It is whether your organization's data is clean enough, whether your systems are integrated enough, and whether your governance frameworks are strong enough to handle it.

SPEAKER_01

The ceiling on your return on investment is no longer set by the LLMs you buy. It is set by your company's willingness to fundamentally redesign its workflows. Yeah. And your ability to trust intelligent systems with consequential decisions, provided, of course, you have the visibility to govern them.

SPEAKER_00

So I want you to think about your own daily routine right now. Look at this software open on your screen. Are you still treating AI as just a superpowered search engine, typing in text prompts and waiting for answers? Or are you preparing to orchestrate a digital workforce?

SPEAKER_01

It's the classic distinction, but applied to the digital age. It is the difference between working in your business processes and working on your business processes.

SPEAKER_00

And again, if you want help figuring out how to work on your business processes and get this stuff integrated, don't forget to check out the Find Your Fix form over at growthright.solutions.

SPEAKER_01

It's a great resource.

SPEAKER_00

But before we wrap up, I want to leave you with one final, deeply provocative thought to mull over. We talked earlier about how amazing it is that the Oracle service agent can instantly diagnose a solar array failure, or how Thomson Reuters can synthesize 150 years of legal data in a matter of minutes.

SPEAKER_01

The efficiency gains are incredible.

SPEAKER_00

They are. But buried in that IMDA governance framework is a very quiet, very serious warning about what they call the impact on tradecraft.

SPEAKER_01

Yes. The paradox of automated expertise.

SPEAKER_00

Think about it. If AI agents seamlessly take over all the entry-level, rote, repetitive tasks, the basic document reviews, the introductory coding, the raw data entry, they are taking over the exact tasks that have historically served as the training ground for junior staff.

SPEAKER_01

Which is a huge problem.

SPEAKER_00

Right? Doing that tedious grunt work is exactly how a junior lawyer actually learns the structure of the law. It's how a junior developer learns the logic of the architecture. If we hand all that foundational, repetitive work to the autonomous agents, how will the next generation of human workers learn the basic logic of their professions?

SPEAKER_01

It's a gap in the pipeline.

SPEAKER_00

Ten years from now, when the current senior human experts retire, will there be anyone left who actually understands how the autonomous agents are doing the work? Or will we just be trusting the digital ghostwriter without actually knowing how to read the book ourselves?

SPEAKER_01

It is the ultimate paradox. We have successfully built the tireless, brilliant digital coworker, but we might accidentally be throwing away the blueprint of human expertise in the process.

SPEAKER_00

Definitely something to think about as you sign on with your digital coworkers tomorrow. Thanks for taking this deep dive with us.