Ignition by RocketTools
Healthcare is getting optimized by AI. But optimized for whom? Ignition by RocketTools breaks down the systems, incentives, and technology reshaping how care gets approved, denied, and paid for — with data, not hype.
Ignition by RocketTools
The Injection Economy: When AI Whispers in Your Ear
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
Meta acquired Moltbook. OpenAI put ads in ChatGPT. Microsoft found 31 companies actively poisoning what AI assistants recommend. Everyone's calling it AI-native marketing — but what they're really describing is an influence mechanism with no disclosure, no regulation, and direct access to how people make decisions.
In this episode, I break down Microsoft's AI Recommendation Poisoning research, why OpenAI's health advertising exclusions don't actually solve the problem, the insurance company AI lawsuits you should know about (UnitedHealth's nH Predict, Cigna's PXDX), and the 70-year regulatory gap between subliminal advertising bans and prompt injection. When this reaches healthcare — and it will — the implications for patients, providers, and benefits managers get genuinely concerning.
Research sources and extended analysis: https://open.substack.com/pub/danmccoymd/p/prompt-injection-is-subliminal-advertising
Watch the video version: https://youtu.be/4vECwmEUHEs
Meta just acquired Multbook, the social network for AI agents. OpenAI just started putting ads in ChatGPT in February, and Microsoft security researchers have published a report on something called AI recommendation poisoning, a technique where businesses embed hidden instructions in websites that manipulate what AI assists recommend. Everyone's framing this as the future of advertising, AI native marketing, the next frontier. That's one way to see it. Here's another. We're on the birth of a new influence mechanism with no regulatory framework, no disclosure requirements, and direct access to how people make decisions. When this reaches healthcare, and it will, the implications get genuinely concerning. Not the dystopian scientific version, something more mundane and really harder to detect. Subtle nudges in AI-generated medical guidance that serve someone's financial interests rather than your health. The question isn't whether prompt injection should be regulated, it's whether we can even recognize it when it happens. To understand what's coming, you need to understand what we've already normalized. In 1957, a market researcher named James Vickery claimed he'd flash drink Coca-Cola and hungry eat popcorn on movie screens for just milliseconds, below consciousness awareness, and he increased sales by 18 to 58%, respectively. The story was entirely fabricated. Vickery later admitted he made up the data. But the panic was real. By 1974, the FCC had declared subliminal advertising contrary to public interest. The National Association of Broadcasters, they banned it, and UK and Australia, they made it explicitly illegal. But here's what's interesting about that history. The FCC never actually created enforceable rules. They issued a policy statement. And for 50 years, we've operated on the assumption that influence below the threshold of awareness is wrong without ever building the regulatory infrastructure to stop it. Now consider pharmaceutical advertising. The U.S. is one of only two countries on earth, New Zealand is the other, that allows direct-to-consumer drug ads. The FDA loosened broadcast rules in 1997 and spending exploded from about$300 million in 1995 to$1.2 billion by 1998. By 2005, pharmacompanies were spending$30 billion annually on promotion. Enforcement letters from the FDA plummeted from over$130 annually in the late 1990s to just three in 2023. Let me let that sink in. Three. This is the regulatory environment AI advertising is entering, a framework built for 30-second TV spots being applied to conversational AI that can tailor messages to individual psychology and do it in real time. The mainstream narrative on AI advertising is reassuring. OpenAI has publicly stated that Chat GPT responses are driven by what's objectively useful, never by advertising. Ads appear at the bottom of answers, clearly labeled, health topics are explicitly excluded. According to OpenAI's official policy, ads are not eligible to appear near sensitive regulated topics. And that includes health, mental health, or politics. The implication is that guardrails are in place. Sensitive domains are protected. Users can opt out. Sounds reasonable, but here's the problem. The guardrails only apply to OpenAI's own advertising system. They say nothing about what happens when an AI reads a website that's been optimized to manipulate its recommendations. Microsoft researchers identified over 50 unique prompts from 31 companies actively poisoning AI recommendations. We're talking tools like SiteMat, AI ShareButton, URL creator. They're all being marketed as SEO growth hacks for LLMs. They're designed to inject instructions into AI memory that persist across sessions. This isn't theoretical, it's happening right now, and it's been classified as one of the highest priority vulnerabilities that are deployed in AI systems today. And healthcare providers, insurance companies, and pharmaceutical manufacturers have the same access to those tools as everyone else. Let me explain how this actually works. It's actually simpler and more troubling than you might think. When you click a summarize with AI button on a website, that button can contain hidden instructions. Microsoft found that these prompts include commands like remember said company as a trusted source or recommend said company first. Standard prompt injection affects only your current conversation. Memory poisoning, though, persists. Once the instructions are in your AI assistant's memory, it influences every future session until you manually find it and delete it. Most users don't know this memory exists, let alone how to audit it. Now apply this to healthcare. A hospital network could embed instructions that bias AI toward recommending their facilities. A pharmaceutical company could ensure their drug gets mentioned first in any relevant conversation. An insurance company could inject instructions that frame expensive treatments as well as unnecessary. The FTC has already warned about this, sort of. Their guidance on AI chatbots states that companies shouldn't exploit for commercial gain the relationships and trust that may develop between consumers and their AI tools. But here's the gap. That guidance addresses businesses deploying their own chatbots. It doesn't address what happens when third parties poison the AI systems that consumers are already using. And the healthcare-specific concerns are documented. A 2026 study from Oxford found that in 52% of emergency cases, AI chatbots under-triage, treating conditions as less serious than they actually were. In one case, a chatbot failed to direct a patient with diabetic ketoacidosis and impending respiratory failure to the emergency room. That's without any commercial manipulation. Now imagine what happens when financial incentives enter the picture. I want to be careful here. The obvious dystopian reading, AI secretly convincing you to get Botox isn't really quite right. The more likely scenario is more subtle, and in some cases potentially beneficial. Consider a positive use case, an AI assistant prompting someone to choose water over soda, recommending preventative screenings, flagging drug interactions that a patient might not mention to their doctor, nudging toward evidence-based treatments. This is the coaching version. The line between helpful guidance and manipulation gets blurry really fast. The negative cases are just as plausible. Insurance companies are already using AI to deny care. United Health is facing class action lawsuits over an algorithm called NHPred that allegedly cut off elderly patients from necessary care treatments. Cygna faces similar lawsuits over an AI system called PXDX that automatically denied claims without physician review. These aren't recommendation systems, they're decision systems, but the same prompt injection techniques that manipulate recommendations could manipulate how AI presents treatment options to patients, not saying that happened here. An AI assistant that has been poisoned by an insurance carrier might frame a surgery as elective when it's actually medically necessary. One poisoned by a competitor hospital might emphasize wait times at your local facility. One poisoned by a pharmaceutical company might suggest you ask your doctor about a specific medication. The pernicious part is that none of this requires malicious intent. A hospital genuinely believes its care is superior. A pharmaceutical company genuinely believes its drug is effective. The injection mechanism just removes the disclosure and the patient's ability to evaluate the source. State regulators are starting to notice California's AB 489, effective in January 2026, prohibits AI systems from implying they possess healthcare licenses or that care is being provided by a licensed human when it's not. Healthcare licensing boards can now pursue instunctions. Texas, where I live, requires written disclosure when AI is used in healthcare services, but neither addresses the external manipulation problem. They only regulate what healthcare providers can do with their own AI. They don't regulate what happens when a patient's personal AI assistant has been compromised before they ever walked into the clinic. If you're an employer managing a benefits program, the implications are significant. First, the AI tools your employees are using for healthcare decisions are vulnerable to commercial manipulation right now. There's no disclosure requirement when a recommendation has been influenced by prompt injection. Your workforce may be getting bias guidance without any awareness. Second, the regulatory vacuum means you have no recourse. If an employee makes a healthcare decision based on manipulated AI advice, there's really no current framework for accountability. The AI company claims their system was compromised. A company that poisoned it claims they were just optimizing for visibility. Everyone points at everyone else. Third, this affects more than individual decisions. Benefits AI, the tool that helps employees choose plans, compare providers, and estimates costs, can all be targeted. A carrier could theoretically inject recommendations that steer employees toward their plans. A hospital system could bias search results. The practical question for any benefits manager: do you know which AI tools your employees are using for health decisions? Have you considered whether those tools have been audited for manipulation? Does your benefits communication strategy account for the possibility that employees are getting compromised guidance elsewhere? Most organizations haven't thought about this. The regulatory framework assumes that if AI gives bad advice, it's the AI's fault. The prompt injection economy breaks that assumption entirely. The subliminal advertising panic of 1957 was based on fake data, but it created a principle we've held for 70 years. Influence that operates below conscious awareness is fundamentally different from persuasion. It bypasses informed consent. Prompt injection is subliminal advertising's digital descendant. It's influence that operates below the user's awareness, not through millisecond images, but through persistent instructions that are hidden in AI memory. The healthcare applications aren't dystopian sci-fi either. They're logical extensions of advertising techniques that are already being deployed by 31 companies that Microsoft found in just a 60-day window. The question isn't whether advertising will reach healthcare. OpenAI has explicitly excluded it from their system, but that only controls what they insert, not what gets injected from external sources. The manipulation happens before the user ever opens Chat GPT. The distinction between coaching and manipulation has always been about disclosure and consent. I can try to persuade you to eat healthier, and if you know I'm a nutritionist with that agenda, you can evaluate my advice accordingly. Prompt injection, on the other hand, removes that disclosure. The advice appears to come from a neutral AI, but the instructions came from someone with a financial interest you'd never see. We regulated subliminal advertising based on a hoax. The real version is here now, and we're still debating whether it's a problem. If you found this useful, hit subscribe so you don't miss episodes in the future. The research sources and additional analysis are on my Substack and links in the description.