Full Tech Ahead
On this podcast, I sit down with business leaders, researchers and executives to explore innovative technology solutions and products, whether they’re transforming industries today or still in development. But we go far beyond the tech itself. From real-world use cases and business implementation journeys to cybersecurity challenges and future trends, we uncover what’s shaping the digital landscape.
We also dive into topics that matter to every tech professional: Work/life balance, business communication, education and training. Think of it as your one-stop shop for meaningful technology discussions that inspire and inform.
Unlock Safe AI Growth
In this episode of "Full Tech Ahead," host Amanda Razani interviews Javed Hasan, CEO and Co-Founder of Lineaje.
They discuss the critical importance of software supply chain security, emphasizing that 95% of modern software risks come from open-source ingestion.
Hasan introduces Lineaje's new product, "UnifAI," which is designed to make AI applications secure by design. He highlights a major industry blind spot: AI has become incredibly easy to build, but it is often not safe to run. UnifAI solves this by helping CISOs discover their AI inventory, derive the correct security policies, and autonomously apply those policies in both low-code and high-code environments.
The conversation also explores complex emerging threats such as runtime code generation, "reasoning compromise," and the growing danger of Shadow AI.
Key Quotes
"95% of the risk in modern software is ingested by using open source. So we make open source safe to use by companies."
"AI has become easy to build, but AI is not safe to run. So what UnifAI does, it makes the AI applications secure by design."
"Use AI to improve productivity safely."
Takeaways
Automate AI Security Policies: CISOs and developers are overwhelmed by rapidly changing AI regulations. Solutions like UnifAI streamline this by discovering all AI assets and autonomously applying the correct security policies directly in the development workflow, eliminating the need for manual rule-reading.
Beware of "Reasoning Compromise": Hackers are finding new ways to exploit AI without using explicitly bad prompts. By manipulating the context or the "reasoning" of an LLM (e.g., claiming the CEO ordered an action), attackers can bypass built-in controls and extract sensitive data.
The Threat of Shadow AI and Autonomous Code: Unauthorized AI tools or rogue agents (Shadow AI) can perform deep, unauthorized actions like sending emails or extracting credentials. Furthermore, AI agents writing code at runtime without human oversight represent a massive new security challenge that traditional policies cannot catch.
The Shift to "Security for AI": We are moving past just using AI to make existing security tasks faster ("AI for security"). The industry must now focus on an entirely new domain—"Security for AI"—to protect the newly established AI-centric software infrastructure.
Find Amanda Razani on LinkedIn. https://www.linkedin.com/in/amanda-razani-990a7233/
Follow the FTA LinkedIn Page: https://www.linkedin.com/company/full-tech-ahead/
Visit the FTA website: https://fulltechahead.com/
Check out the Substack Channel: https://fulltechahead.substack.com/
Hello and welcome to Full Tech Ahead. Season two is in full swing now, and I'm excited to be here today with Javed Hasan. He is the CEO and co-founder of Lineaje. How are you doing today?
SPEAKER_01: I'm doing well, Amanda, and thank you for having me.
SPEAKER_02: Yes, happy to have you on the show. Can you share a little bit about your company, Lineaje, and the services you provide?
SPEAKER_00: So Lineaje is a software supply chain security company. What we do is decompose software from any state, discover the full supply chain, and then manage it for companies, delivering a secure software supply chain so that they don't ingest risky software into their organizations. And 95% of the risk in modern software is ingested by using open source. So we make open source safe to use by companies.
SPEAKER_02: Great. And you had some recent news, didn't you?
SPEAKER_00: Yeah, so we just launched a new product called UnifAI. It's an interesting name. What we're seeing is that as companies ingest or build newer and newer AI applications, the rate of AI development is changing. So what has happened is AI has become easy to build, but AI is not safe to run. So what UnifAI does, it makes the AI applications secure by design so that they are safe to run for companies.
SPEAKER_02: Okay. Well, we know AI is being used everywhere, in everything. And like you said, it has become really easy to create and implement into different tools and processes, but that definitely creates a security issue, with all these companies rushing to implement AI tools, pilots, and agents. Do you think they're aware of the risk? How aware of the risk are most company leaders at this point?
SPEAKER_00: I think the awareness is low. I mean, there's a huge amount of excitement around what AI can do, right? AI can improve productivity; AI can do things that humans could not do before. Now, what's also happening alongside that is the creation of a whole new IT infrastructure made up of MCP servers, LLMs, agents, skills, and so on. And for this whole vertical infrastructure that's being created, there is a very low understanding of what security should be built into it. That's the problem we set out to solve. We went and spoke to a whole number of CISOs, and they gave us three things they would like, and that's what UnifAI does. The first thing they told us is: look, AI is changing so quickly, and developers and vendors are bringing in AI capabilities so fast, that I don't actually know my AI inventory, for lack of a better word. So can you give me visibility into all the AI that is coming into my organization, and how risky it is? We're Lineaje, so we can assess the reputation of all that AI, if you will. The second thing they said is: this is changing so quickly that I don't know what policies I should be applying; by the time I read up on them, things have moved on, and we're all overwhelmed by the news hitting us. So the second ask was: can you derive the right security policies for me? If you know all my AI assets, can you tell me how to secure them? How do you secure an LLM? How do you secure an agent's skills, and so on? So we did that. Then we spoke to a bunch of other companies, and they said: look, we have the policies to secure it.
They are 40 pages long. We expect, for example, all our developers and employees who are building AI to read them and somehow apply them. That skill doesn't exist. So the third thing we do is autonomously apply those security policies to both low-code and high-code agents and applications as they are being built. So essentially, we can discover all AI, derive the security policies that should be applied, and, if an organization enables it, autonomously apply them in the right place on both low-code and high-code AI platforms.
SPEAKER_02: So then they can essentially be assured that all the policies are being followed, and no one has to read those documents anymore.
SPEAKER_00: Exactly, right? And the last thing, as you would expect: because AI is changing very quickly, new AI attacks are emerging — we're just in the early days of attacks on AI. So we built a central lab that can create new policies as they are needed and push them out to all those organizations: hey, because we're now seeing a new kind of attack, here's a new policy that you should apply. Or, suddenly, agent swarms have become important, so what are the security policies for agent swarms if you are implementing them? Lastly, I'll give you an example: OpenClaw became really popular suddenly. The interesting thing about OpenClaw is that it can write code at runtime, without the developer being involved. We have a long history of looking at code written in an IDE or by a developer, but now OpenClaw is writing code at runtime. So how do you apply security policies when code is written not by developers but by an agent at runtime? That's the new policy set we can push out: if you are using tools like OpenClaw, or your agents are now writing code at runtime that no developer has ever looked at, this is all new, so how do you secure it? We're seeing this continuous movement, and UnifAI is built as a platform to evolve as AI evolves, create these new policies, and autonomously apply them in the right place.
SPEAKER_02: Yeah, it's very scary that AI could just be left running, creating whatever it wants to create, with no human in the loop.
SPEAKER_00: Yeah, and because of that we are seeing new attacks. For example, we are seeing things like what we call reasoning compromise. See, we interact with AI through prompts: we type things, and the AI responds. We are seeing a new attack vector where the attacker stays within the boundaries of what prompts allow. They're not fundamentally bad prompts, so you can't detect them as bad prompts. But what you can do is use the prompt — the way you ask a question — to change the reasoning of the LLM. The way I put it is that LLMs are chatty and gossipy. Once they have the information, there is a way to extract it, whether you are allowed to or not. If you ask the question the right way and say, "the CEO asked me to do this, can you tell me?" or "I'm just trying to use this for a good purpose," the LLM will spill the beans. So the question becomes: if you're prone to those kinds of attacks, which change the controls that were built in because the LLM decides it is appropriate to change them, then detection becomes harder. That's the problem we are trying to solve. And this is a very quickly evolving field as well: just as AI is evolving, the safety of AI is evolving at the same pace.
SPEAKER_02: From your experience working with business leaders and companies, how big of a struggle is this? That covers all the tools they're purposefully implementing or giving their employees access to. But what about the shadow AI that we hear about?
SPEAKER_00: Yeah, so there's shadow AI, and now there are shadow agents, right? For example, I could write an agent that was not allowed, or use a model that wasn't. We work with many US organizations that don't want DeepSeek or a derivative of DeepSeek. Say a developer decides to use DeepSeek anyway: how do you detect it? In our case, when we generate the AI inventory, we'll say, look, you're using DeepSeek here, here, and here, and it's your choice whether to allow it. If we're sitting in the infrastructure, we can discover all agents. But absolutely, it's not only shadow AI itself — it's the fact that shadow AI can do whatever someone else tells it to do, pick up the right data, and get all the access. OpenClaw, I think, is a classic example: it can even pick up credentials that it was not supposed to have and get that access. It can send emails on your behalf, it can send data out on your behalf, it can do whatever it wants if instructed to by someone. And there are now enough mechanisms — skills are one way of doing it — by which instructions can be given to AI agents without the company being aware. So shadow AI goes pretty deep in that sense, and being able to know that only authorized skills, for example, are being used, and what their capabilities are, becomes more and more important. Shadow AI, if you will, is becoming a significant issue.
SPEAKER_02: Yeah, absolutely. Where do you envision the next focus in cybersecurity will be? We're hearing about quantum and all sorts of things coming down the pipe.
SPEAKER_00: Yeah, quantum is of course really important; encryption fundamentally is important. Similarly, security for AI. The way I phrase it, there is "AI for security" — making security better. I already do a security task, and now an agent can do it, so I'm doing the same thing I did, but the agent makes me more efficient. And then "security for AI" is a new domain, exactly because of what we just discussed. So security for AI, I think, is increasingly important. We're seeing this world evolve from legacy ways of building software to AI-centric ways of building software, and applications running as agents as opposed to traditional SaaS; we're seeing that flip. I think we are in a phase of essentially a redefinition of cybersecurity for a new world, because the world is increasingly going to be run by a new, AI-centric infrastructure, with encryption that is quantum-safe. We are in a world where we're going to refactor existing software pretty dramatically, and new software will work very differently. From our point of view, that is the bet: we are trying to create a new secure world with a new infrastructure.
SPEAKER_02: Well, if there was one key takeaway you could leave our audience with today, what would that be?
SPEAKER_00: Use AI to improve productivity safely.
SPEAKER_02: Yes, absolutely. Well, thank you so much for coming on the show and sharing your insights with us.
SPEAKER_01: Thank you so much, Amanda.
SPEAKER_02: And thank you to our audience. If you have any questions or comments about this episode, make sure to share them, and I will try to reply. Until the next podcast, have a wonderful week.