Sushi Bytes
Sushi Bytes is an unapologetically AI-generated podcast brought to you by Shinobi, FossID’s vigilant Software Composition Analysis ninja. In each bite-sized episode, Shinobi breaks down the evolving world of software supply chain integrity – from open-source license compliance and vulnerability disclosure to SBOM standards, IP risks, and AI-generated code implications.
With a surge in regulatory scrutiny and AI adoption, the software stack is becoming harder to manage – and riskier to ignore. Sushi Bytes offers sharp, fast insights for engineering leaders, open-source program managers, and legal professionals navigating the intersection of compliance, code, and code generation.
Sushi Bytes
Software Composition in the AI Era
AI is changing how software gets written – but what does that mean for open source compliance and software supply chain security?
In this episode of Sushi Bytes, Shinobi and Jen explore SCA in the AI era. As development shifts from prompts to autonomous agents, tool-augmented workflows, and spec-driven engineering, traditional software composition analysis workflows need to evolve.
They break down the three major shifts in AI-assisted development and explain why SCA tools must become agent-friendly, tool-driven, and embedded directly into modern development pipelines.
If AI is writing the code, someone still needs to understand what’s in it.
Shinobi: Welcome back to Sushi Bytes, the podcast where we break down software security, open source, and supply chains – one byte at a time. I'm Shinobi, and today we're talking about software composition analysis in the AI era, because AI is changing how software gets written. But here's the real question: if AI is writing more of the code, who's making sure it's secure, compliant, and legally safe? That's where software composition analysis comes in, and as usual, to unpack this, I'm joined by Jen.
Jen: Hi everyone. And yes, today we're answering the question that inevitably shows up once AI starts generating code: cool, but what did the robot just copy from the internet?
Shinobi: Exactly. So let's start with the bigger picture, because the way developers use AI is already changing – fast. Right now, we're seeing three major shifts in how AI fits into development workflows, and each one changes how we think about compliance and supply chain integrity. Let's start with the first one: we're going from conversations to agents. At the beginning of the AI boom, the interaction model was simple. You opened ChatGPT, you wrote a prompt, and it gave you code. Then you refined the prompt, and it gave you better code.
Jen: Prompt, response, prompt, response. Basically, autocomplete with a personality.
Shinobi: Exactly. But that's not where things are heading. Instead of guiding AI step by step, developers are increasingly giving agents a goal – something like "refactor the authentication module, update the tests, and make sure CI passes" – and then the AI agent figures out how to do it.
Jen: Which means developers stop micromanaging prompts and start reviewing outcomes.
Shinobi: Right. The unit of work shifts from a prompt to a task, and the developer becomes more of a reviewer and verifier, which makes guardrails like tests and policies incredibly important. Now let's look at the second shift: going from prompt engineering to tools. This shift is about capabilities. Early on, people talked endlessly about prompt engineering – how to structure prompts, how to phrase instructions, how to carefully guide the model into giving better answers.
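That task-and-verify loop – give the agent a goal, check the outcome against tests, retry if needed – can be sketched in a few lines. Everything here (`complete_task`, `toy_agent`, `toy_tests`) is a hypothetical stand-in, not a real agent framework:

```python
def complete_task(goal, run_agent_step, tests_pass, max_attempts=3):
    """Drive an agent toward a goal; tests, not prompts, are the guardrail."""
    for attempt in range(1, max_attempts + 1):
        patch = run_agent_step(goal, attempt)   # agent proposes a change
        if tests_pass(patch):                   # verify the outcome
            return patch, attempt
    raise RuntimeError(f"goal {goal!r} not met in {max_attempts} attempts")

# Toy stand-ins so the loop is runnable: this "agent" succeeds on try 2.
def toy_agent(goal, attempt):
    return {"goal": goal, "quality": attempt}

def toy_tests(patch):
    return patch["quality"] >= 2
```

The developer's role lives at the boundary: defining the goal and the tests, then reviewing whatever patch comes back.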
Jen: Which was basically duct-taping context into a paragraph and hoping for the best.
Shinobi: Kinda. But modern agents don't rely on prompts alone. They use tools: APIs, code repositories, databases, browsers, execution environments.
Jen: So that means the prompt stops being the whole system? It becomes just one input?
Shinobi: Right again. The real question becomes: what tools does the agent have access to? Because those tools define what the agent can actually do. Now here's the third shift: from prompts to specifications. This one is about development methodology. Instead of trial-and-error prompting, teams are moving toward something called spec-driven development.
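"Those tools define what the agent can actually do" is worth making concrete. One common pattern is a tool registry the agent dispatches through – if a capability isn't registered, the agent simply can't use it. This is an illustrative sketch; the tool names and handlers are made up:

```python
TOOLS = {}

def tool(name):
    """Register a function as a capability the agent is allowed to use."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("read_file")
def read_file(path):
    return f"<contents of {path}>"

@tool("scan_dependencies")
def scan_dependencies(path):
    # Stand-in for a real dependency scan of the project at `path`.
    return ["left-pad@1.3.0"]

def dispatch(name, *args):
    """The agent can only do what a registered tool permits."""
    if name not in TOOLS:
        raise PermissionError(f"agent has no tool named {name!r}")
    return TOOLS[name](*args)
```

The registry doubles as a governance surface: granting or withholding a tool is how you scope what the agent can touch.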
Jen: Translation: stop guessing and start writing down what you actually want.
Shinobi: Yeah. So instead of a prompt, the primary artifact becomes a specification, which can include requirements, architectural constraints, acceptance criteria, and testing expectations. The AI reads the spec and generates the implementation.
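What makes a spec different from a prompt is structure. A minimal sketch of such an artifact – the field names are illustrative, not from any real spec-driven-development standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Spec:
    """A machine-readable spec: structured, reviewable, version-controllable."""
    requirement: str
    constraints: tuple = ()
    acceptance: tuple = ()

spec = Spec(
    requirement="Refactor the authentication module",
    constraints=("no new third-party dependencies",),
    acceptance=("existing tests pass", "CI is green"),
)
```

Because it's plain structured data rather than free-form prose, it can live in the repository and go through review like any other change.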
Jen: Which is honestly how software development should have worked all along.
Shinobi: And because specs are structured, they can be version controlled, reviewed, and reused, just like code. So now we have AI agents writing code. They have tools. They're working from specs. So where does SCA fit? Turns out, everywhere. Because if AI agents are generating code, then compliance tools need to work with those agents, not just with humans. This is where things get interesting for us AppSec nerds. Traditional SCA workflows assume a human developer: the developer writes code, runs a scan in CI, and reviews the results. But in an agentic environment, the AI needs to participate in that loop.
Jen: Meaning the agent should be able to scan its own code before a human even sees it.
Shinobi: Exactly. This is the fun part. The agent writes code, then it calls a tool. The tool analyzes the code for open source components, vulnerabilities, and licenses, and the agent adjusts if something's wrong. Another important point: large language models are great at generating code, but they're not great at provenance analysis. They can't reliably tell you where a snippet originated, what license applies, or whether a component has known vulnerabilities.
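The write-scan-adjust loop Shinobi describes can be sketched as follows. The `scan` function here is a toy stand-in for a real SCA tool, and `toy_generate` mimics an agent that swaps out a flagged snippet after feedback – none of this is a real API:

```python
DISALLOWED = {"GPL-3.0"}

def scan(code):
    """Toy SCA tool: return license findings for disallowed components."""
    return [lic for lic in code.get("licenses", []) if lic in DISALLOWED]

def write_and_verify(generate, max_rounds=3):
    """Write, scan, adjust – before any human ever reviews the code."""
    feedback = None
    for _ in range(max_rounds):
        code = generate(feedback)   # agent writes (or rewrites) the code
        findings = scan(code)       # specialized tool checks licenses
        if not findings:
            return code             # clean: ready for human review
        feedback = findings         # agent adjusts based on the findings
    raise RuntimeError("no compliant implementation within round budget")

def toy_generate(feedback):
    # First draft pulls in a GPL snippet; after feedback, swaps to MIT.
    if feedback:
        return {"licenses": ["MIT"]}
    return {"licenses": ["GPL-3.0", "MIT"]}
```

The key design point: the deterministic provenance check lives in the tool, and the model only reacts to its findings.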
Jen: Because LLMs are guessing patterns – they rely on probability and reasoning. Compliance tools analyze actual code fingerprints; that's a deterministic task. A totally different job.
Shinobi: Exactly. So the solution isn't better prompting; it's giving AI agents access to specialized SCA tools that do those analyses properly. Even with the right tools, the agent still needs guidance, because interpreting compliance results requires domain knowledge.
Jen: Like the difference between "small MIT snippet? Probably fine" and "oops, you just copied GPL code into proprietary software."
Shinobi: You're so right yet again – it's like you're an AI agent yourself, Jen. Humans understand those nuances, but AI systems need that logic encoded as policies and workflows. Put all of this together and you get a new development model: an AI agent writes code, and automatically it scans for open source components, checks for vulnerabilities, evaluates license obligations, applies policy rules, and adjusts the implementation if needed – all before the pull request.
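"That logic encoded as policies" can be as simple as an explicit lookup an agent applies mechanically. A minimal sketch – the rules and the default-to-review behavior are illustrative assumptions, not anyone's actual compliance policy:

```python
# Policy: (license, distribution context) -> action. Anything not
# covered by an explicit rule falls back to human review.
POLICY = {
    ("MIT", "proprietary"): "allow",
    ("Apache-2.0", "proprietary"): "allow",
    ("GPL-3.0", "proprietary"): "block",
    ("GPL-3.0", "open-source"): "review",
}

def evaluate(license_id, distribution):
    """Look up the action for a license in a given distribution context."""
    return POLICY.get((license_id, distribution), "review")
```

Encoding the nuance this way makes it reviewable and testable, exactly like the specs discussed earlier.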
Jen: Which means SCA stops being a late-stage gate. It becomes part of how code gets written in the first place.
Shinobi: Very cool, right? AI accelerates development, which means verification becomes even more important, not less. Okay, we could nerd out on this topic much longer, but we have to wrap it up. I have a feeling we'll tackle this topic further in another episode really soon. AI isn't just changing how software gets written; it's changing who writes it. And if AI agents are part of the development team, then security and compliance tools need to evolve too. SCA needs to become agent-accessible, tool-driven, and built directly into development workflows.
Jen: Because in the AI era, governance can't slow development down. It needs to make safe, automated development possible.
Shinobi: True. Until next time: keep shipping, keep scanning, and stay curious. Thanks for listening to Sushi Bytes.