Sushi Bytes
Sushi Bytes is an unapologetically AI-generated podcast brought to you by Shinobi, FossID’s vigilant Software Composition Analysis ninja. In each bite-sized episode, Shinobi breaks down the evolving world of software supply chain integrity – from open-source license compliance and vulnerability disclosure to SBOM standards, IP risks, and AI-generated code implications.
With a surge in regulatory scrutiny and AI adoption, the software stack is becoming harder to manage – and riskier to ignore. Sushi Bytes offers sharp, fast insights for engineering leaders, open-source program managers, and legal professionals navigating the intersection of compliance, code, and code generation.
Modern Software, Bigger SCA Expectations
For years, Software Composition Analysis focused on managing open source consumption and the related legal and security risks – and that was enough. Today, it isn’t.
In this episode of Sushi Bytes, Shinobi and Gen sit down with Aaron Branson to unpack why SCA must evolve to meet modern software realities: AI-generated code with unclear provenance, developers contributing back to open source without leaking IP, and regulations like the EU CRA that demand trustworthy, scalable SBOMs.
The takeaway? SCA delivers far more ROI when it’s used to manage today’s risks – not yesterday’s assumptions.
Shinobi: Hey everyone, I'm Shinobi, and welcome back to Sushi Bytes, the podcast where we break down software risk without breaking your development flow. And as always, I'm joined by my co-host, Gen.
Gen: Hey everyone, great to be here.
Shinobi: And great to be recording our second episode of season two. Well, for a long time, software composition analysis had one job: find open source packages in our applications and call out unfriendly licenses and known vulnerabilities. And it did that job really well. But software didn't stand still. AI started writing code, developers started contributing upstream, and regulators started paying attention. So today we're asking a bigger question: is your SCA investment still pulling its weight? To help us answer that, we're joined by Aaron Branson, author of the 2026 Outlook for Software Composition Analysis: think bigger and expect more from your investment. Aaron, welcome to the show.
Aaron: Thanks, Shinobi. Congrats to you and Gen on kicking off the new season. I listened to the last episode about the CRA deadlines. Really helpful in breaking through the clutter on that topic.
Gen: Yeah. From CRA to SCA. We just love our three-letter acronyms.
Aaron: Well, I know you two like to move fast for your audience, so let me start with this: SCA didn't fail. It succeeded. But now we need it to do more.
Gen: That's an important framing.
Aaron: Yeah, teams focused SCA on open source license compliance and vulnerabilities because that's what mattered at the time. And things were a little simpler: complete components, known suppliers, known vulnerabilities, no regulations with teeth.
Shinobi: But those assumptions don't hold anymore.
Aaron: Exactly. The software risk surface expanded quite a bit, but many teams are still using SCA like it hasn't. Look, we've got at least three major changes. You could probably name more, but consider this.
Number one, source code is much more fragmented, with snippets of AI-generated code being so easy to insert on the fly, with no clear provenance or idea of how much of it is a copy of the open source it was trained on. So right away we have a gap, in that code isn't cleanly divided between proprietary source code and managed third-party dependencies.
Second, think about the economy of open source and how it's changed. Teams used to grab an open source library and keep it as is, so it was easy to patch later. Then teams began forking these packages to customize them to their liking, which was great for a time, until it started causing patchability nightmares. So recently the pendulum has swung toward corporate dev teams contributing changes upstream to the open source community rather than maintaining their own forks. That's great, but it introduces the risk that corporate intellectual property might accidentally get contributed into that open source software and end up publicly exposed.
And third, regulation is now real. There was already strong regulation for industries like automotive, aerospace, and medical devices, but now the scope is widening with things like the CRA. Software teams are trying to figure out how to wrangle SBOMs from all of their suppliers and consolidate them with their own SBOMs so that their product has a cohesive, coherent bill of materials for regulatory obligations.
Gen: Wow. A lot has changed. So let's start with the impact of AI-generated code.
Shinobi: Because now we're not importing libraries, we're pasting snippets with no metadata on provenance.
Aaron: That's right. AI tools generate code fragments that may reuse open source, source-available code, or something in between. You know, who knows?
Gen: Which breaks the idea of neat component-level scanning.
Aaron: Completely. AI introduces fragment-level risks, sometimes just a few lines, but those lines still carry legal or security implications. Unfortunately, there's no simple universal rule like "X lines of code is a copyright infringement." It's just up to the legal teams to figure out on a case-by-case basis.
Shinobi: And sometimes the temptation is to crank sensitivity to 11.
Aaron: Which backfires, because too much noise kills development productivity. The real value is accurately detecting meaningful OSS fragments introduced by AI without having to flag half the code base.
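Fragment-level detection of the kind Aaron describes is often built on code fingerprinting. As a rough illustration only (not how any particular SCA product works), here is a minimal Python sketch that hashes sliding windows of normalized source lines and flags a snippet when any window matches a known-OSS fingerprint index. All names and the window size are hypothetical choices for the example:

```python
import hashlib


def _normalize(line: str) -> str:
    # Strip whitespace and lowercase so trivial edits don't defeat matching.
    return "".join(line.split()).lower()


def fingerprints(code: str, window: int = 5) -> set[str]:
    """Hash every `window`-line span of normalized, non-empty code lines."""
    lines = [_normalize(l) for l in code.splitlines() if _normalize(l)]
    return {
        hashlib.sha256("\n".join(lines[i:i + window]).encode()).hexdigest()
        for i in range(max(len(lines) - window + 1, 1))
    }


def suspect_fragment(snippet: str, oss_index: set[str], window: int = 5) -> bool:
    """True if any window of the snippet matches a known-OSS fingerprint."""
    return bool(fingerprints(snippet, window) & oss_index)
```

The window size is the noise dial Aaron is talking about: shorter windows catch smaller copies but flag more coincidental matches, which is exactly the sensitivity trade-off that can "crank to 11" if tuned carelessly.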
Shinobi: Alright, let's flip the risk direction: developers contributing back to open source.
Gen: Which leadership usually wants.
Aaron: Yep, it improves maintainability, and it gives developers the chance to be part of a community. But it also creates outbound risk.
Shinobi: Internal company-owned code accidentally pushed to public repos?
Aaron: Exactly. Most teams use SCA to scan what comes into their code base, but almost nobody scans what goes out.
Gen: And once it's public, there's no undo.
Aaron: Yeah, once it's out, there's no telling where it'll end up. Scanning developer contributions before submission lets teams contribute safely without putting legal into damage control mode.
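Outbound scanning can be wired into the contribution workflow itself, for example as a pre-push check on the outgoing diff. The sketch below is a hypothetical Python illustration: the marker patterns, the `Example Corp` copyright string, and the `internal.example.com` hostname are invented placeholders that a real team would replace with rules from its own legal and security reviewers:

```python
import re
import subprocess

# Hypothetical patterns a legal/security team might treat as internal-only.
INTERNAL_MARKERS = [
    re.compile(r"Copyright .* Example Corp\. All rights reserved", re.I),
    re.compile(r"\binternal\.example\.com\b"),
    re.compile(r"\bAPI[_-]?KEY\s*=\s*['\"]"),
]


def scan_outbound(diff_text: str) -> list[str]:
    """Return added lines in an outgoing diff that match internal-only markers."""
    return [
        line
        for line in diff_text.splitlines()
        if line.startswith("+") and any(p.search(line) for p in INTERNAL_MARKERS)
    ]


def staged_diff() -> str:
    # In a real pre-commit/pre-push hook this inspects the actual repository.
    return subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    ).stdout
```

Pattern matching alone only catches the obvious leaks; the point Aaron makes is that the same fingerprint-level SCA analysis applied to inbound code should also run on the outbound direction before anything reaches a public repo.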
Gen: Okay. Now let's talk regulation. Specifically, the EU CRA.
Shinobi: Because this isn't a checkbox exercise anymore. We learned in the last episode that there are real milestones for enforcement coming up.
Aaron: Yep, the CRA explicitly mandates SBOMs, vulnerability handling, and supplier accountability. But you still have to read the fine print to see how your company and your product are affected. There's unfortunately no replacing that legwork.
Gen: And most software, especially physical products with embedded software, relies on a long chain of suppliers. Think automotive: how many suppliers and suppliers' suppliers are involved?
Aaron: Exactly. You're asking for, and hopefully getting, SBOMs from all of your suppliers and contractors, but trusting them blindly is really risky.
Shinobi: But how can you even begin to review these? Manual validation doesn't scale.
Aaron: Not at all. SCA should ingest, consolidate, and validate SBOMs automatically, cross-checking supplier claims against real intelligence about software components. And I'd say there are actually two levels of that. The first I'll intentionally call, quote, validation: is this SBOM a properly formatted SPDX or CycloneDX file with all the metadata fields expected for my industry requirements? That's step one. The second is what I would call verification: it's great that the SBOM is valid, but is it correct? Does it have the most accurate and up-to-date provenance, license, copyright, and vulnerability info for the components it claims to have? That's another layer to expect from your SCA tooling.
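The validation-versus-verification split Aaron draws can be pictured in a few lines of Python. This is a deliberately simplified sketch, not a real SPDX validator: the required-field set is abbreviated, and the `intel` mapping stands in for whatever trusted component intelligence an SCA tool would actually query:

```python
# Level 1, "validation": is the SBOM well-formed?
# Level 2, "verification": do its claims match trusted component intelligence?

# Abbreviated stand-in for a real SPDX schema check.
REQUIRED_SPDX_FIELDS = {"spdxVersion", "SPDXID", "name", "packages"}


def validate_sbom(doc: dict) -> list[str]:
    """Level 1: structural validation of an SPDX-style document."""
    problems = [f"missing field: {f}" for f in REQUIRED_SPDX_FIELDS - doc.keys()]
    for pkg in doc.get("packages", []):
        if "licenseDeclared" not in pkg:
            problems.append(f"package {pkg.get('name', '?')} missing licenseDeclared")
    return problems


def verify_sbom(doc: dict, intel: dict[str, str]) -> list[str]:
    """Level 2: cross-check declared licenses against trusted intelligence."""
    mismatches = []
    for pkg in doc.get("packages", []):
        known = intel.get(pkg.get("name", ""))
        if known and known != pkg.get("licenseDeclared"):
            mismatches.append(
                f"{pkg['name']}: declared {pkg.get('licenseDeclared')}, expected {known}"
            )
    return mismatches
```

A real pipeline would do this per supplier SBOM and then merge the results into one consolidated bill of materials, which is the "ingest, consolidate, validate" loop Aaron describes.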
Gen: So it seems in the past, compliance was allowed to be reactive: if needed, the team would cobble together an SBOM when requested. But now supply chains are so deep, and regulators are asking, so compliance has to figure out how to be continuous.
Aaron: That's right, from reactive to proactive. And it's just not practical without automation from your SCA tooling.
Shinobi: Nice. So let's land this. If companies think bigger, like you suggest in your article, what should they expect from SCA?
Aaron: You know, I'd point to three things. First, accurate detection of OSS fragments introduced by AI without an explosion of noise. That's the way software development is going. Second, scanning developer contributions before they go upstream to prevent IP leakage. Teams want to get away from managing their own forks, and that's smart, but put the guardrails in place for your engineers to contribute upstream safely. And third, ingesting, consolidating, and validating SBOMs across suppliers to automate the regulation-ready evidence you're going to need. The big manufacturers already know this. Just like PCI DSS gave teeth to proper credit card data handling and GDPR made data privacy enforceable, the CRA is going to step up application security and software supply chain integrity. And that's great, because things are moving really fast with AI, and it's smart to have a seatbelt.
Gen: Then let's buckle up. That's clearly a much bigger value story than only using SCA to make sure your engineers are using allowed open source packages.
Aaron: It is.
Shinobi: Nailed it. Aaron Branson, thanks for joining us.
Aaron: Thank you, guys. It was fun.
Shinobi: And to our listeners, check out the 2026 Outlook for Software Composition Analysis. Until next time: build fast, govern smart, and make your tools earn their keep.