Crestvale Newsroom

Your staff’s AI use is leaking client data



Today's episode examines the rapid rise of employee-driven AI use and the growing gap between how staff actually work and the security controls firms believe are in place. As public models become everyday tools for drafting and research, sensitive information is moving into systems that were never designed to protect client data.

This matters for leaders because the main risk is not a sophisticated breach. It is ordinary workflow behavior happening out of sight. Without clear guardrails, firms are exposed to accidental data leakage, compliance issues, and long-term loss of control over client information.

We also cover scripted fixes for unpatchable vulnerabilities, a new high-severity flaw in Citrix NetScaler, and research showing where large language models still fail at complex reasoning.

Learn more at https://crestvale.io


SPEAKER_00

Welcome to the daily audio briefing on AI, automation, and business technology for professional service firm leaders. Today, we're looking at how staff use of AI tools is quietly exposing client data. Employees are now past the experimentation phase. They are using public AI systems as part of daily workflow, and they are doing it faster than most firms can build guardrails. That gap is turning into silent data exposure at a scale most leaders have not recognized yet.

Markets closed lower in the previous session for both the S&P 500 and the NASDAQ. The mood was cautious, and the 10-year Treasury yield moved higher, signaling a slight pull toward safer ground. Bitcoin closed higher, showing more risk appetite in crypto, even as equities pulled back. The split is notable; it hints at a market trying to find direction without much conviction.

The main story this morning is the acceleration of staff-driven AI use inside firms. Netskope's chief executive says employees are moving sensitive information into public models because it helps them work faster and because there is almost nothing in place to stop them. This mirrors the early days of cloud adoption, when business units moved to online tools long before security teams were ready. The core shift is that AI adoption is being driven almost entirely from the business side. Roughly nine out of ten use cases are happening outside IT. That means your official policies reflect only a small piece of what is actually taking place. Staff are sharing drafts, contracts, client memos, internal notes, code, and sometimes full data sets because it feels normal and because these systems give them instant feedback. The biggest risk is not a sophisticated attack. It is the quiet, everyday behavior that seems harmless in the moment: a prompt here, a pasted paragraph there, each one carrying fragments of information that were never meant to leave the building. Traditional data loss tools also struggle with this.
They were built for documents and attachments, not conversational prompts. They often cannot tell the difference between a safe query and a sensitive one. That is pushing firms toward context-aware monitoring that can see who is sending what, to which model, and from where. If you assume your people are only using approved AI tools, you are kidding yourself. The pressure to meet client deadlines is too strong, and public systems are simply more convenient. Why this matters is simple: if you do not establish guardrails now, your staff will build their own workarounds, and they will build them inside tools that may store your data, learn from it, or leak it. That is how client information ends up training someone else's model.

Now, another story this morning looks at what happens when a vulnerability has no patch. Vicarius is pushing firms toward scripted remediation for flaws that vendors will never fix. These include configuration gaps, registry issues, outdated embedded libraries, and legacy systems that are still running mission-critical workloads. The shift here is important. Patching covered most remediation over the past decade, but the next decade will include more gaps that cannot be closed with an update. Scripted remediation gives you a direct path from detection to action. Their engine can run PowerShell, Bash, or batch scripts across major operating systems, and it keeps logs that prove what was fixed and when. For firms sitting on large backlogs of unpatchable issues, this is a wake-up call. Unpatchable does not mean unfixable. The firms that automate this work will shrink their attack surface, while everyone else waits for a vendor update that may never arrive.

Meanwhile, Citrix NetScaler appliances are drawing active scanning for a new high-severity flaw. The vulnerability allows attackers to read memory, which can leak session data and credentials.
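Before we continue, the scripted-remediation pattern mentioned a moment ago, running a fix script and keeping an audit log of what was fixed and when, can be sketched in a few lines. This is a hedged illustration only: the finding ID, the echo command, and the function name are hypothetical stand-ins, not Vicarius's actual engine.

```python
import datetime
import json
import logging
import subprocess

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def run_remediation(finding_id: str, script: list[str]) -> dict:
    """Run a remediation script for an unpatchable finding and record the result."""
    result = subprocess.run(script, capture_output=True, text=True)
    record = {
        "finding": finding_id,
        "command": script,
        "returncode": result.returncode,
        "fixed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # The JSON log line is the audit trail: what was fixed, how, and when.
    logging.info(json.dumps(record))
    return record

# Hypothetical hardening step; echo stands in for a real PowerShell or Bash script.
outcome = run_remediation("CFG-0042", ["echo", "registry hardened"])
```

The point of the sketch is the shape of the workflow, detection ID in, script out, timestamped evidence retained, rather than any particular fix.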
Citrix has released patches, but security researchers are already seeing automated tools sweeping the internet for exposed devices. NetScaler issues tend to become weaponized quickly. If your devices sit at the edge of your network, the risk is magnified because attackers can use leaked data for lateral movement inside your environment. Firms should patch immediately and confirm that these devices are not directly exposed to the internet.

Another story worth noting comes from a new university study on ChatGPT's ability to handle complex business reasoning. The model sounded polished but often failed to separate true statements from false ones. Accuracy hovered in the high 70s in percentage terms, but its ability to spot false claims dropped sharply. This matters because many firms now use large models to pressure-test strategy, evaluate research, or sanity-check assumptions. The study makes it clear that while these tools are helpful for early drafts and idea generation, they are unreliable as logic engines. If you treat them like a junior analyst rather than a decision maker, you will get better outcomes.

Here is what else is worth knowing today. The European Commission issued new guidance on public sector cloud security. It reinforces that simple misconfigurations remain one of the biggest sources of large-scale data exposure. The European Cybersecurity Competence Centre is moving forward with more than 1 billion euros in planned funding for AI-driven defense tools. This is a signal that Europe wants homegrown security capability rather than relying on imported tools. Costanoa Ventures highlighted a growing field of startups focused on runtime application security. The push comes from the gap between fast delivery cycles and slower traditional code scanning. Intuit expanded its partnership with a university-based security operations center to help train new cybersecurity talent. This model may help mid-market firms that cannot compete with the salary demands of experienced analysts.
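One more note before the close. The guardrail idea from the top of the briefing, checking prompts for sensitive fragments before they leave the building, can be made concrete with a minimal sketch. The patterns and labels here are hypothetical stand-ins for illustration; a real deployment would use the firm's own classifiers and context-aware tooling, not three regexes.

```python
import re

# Hypothetical patterns for data that should not reach a public model.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "client_matter": re.compile(r"\bMatter\s+No\.\s*\d+", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return labels of sensitive fragments found in a prompt before it is sent out."""
    return [label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

flags = screen_prompt("Summarize Matter No. 1182 for jdoe@example.com")
# flags -> ["client_matter", "email"]
```

Even a crude screen like this catches the "pasted paragraph" cases; the harder part, which is why firms are moving to context-aware monitoring, is knowing who sent what to which model.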
Here is the takeaway.