Full Tech Ahead
On this podcast, I sit down with business leaders, researchers and executives to explore innovative technology solutions and products, whether they’re transforming industries today or still in development. But we go far beyond the tech itself. From real-world use cases and business implementation journeys to cybersecurity challenges and future trends, we uncover what’s shaping the digital landscape.
We also dive into topics that matter to every tech professional: Work/life balance, business communication, education and training. Think of it as your one-stop shop for meaningful technology discussions that inspire and inform.
How to Help Your AI Team
In this episode of "Full Tech Ahead," Amanda Razani interviews Mery Zadeh, SVP of AI Governance and Risk Consulting at Lumenova AI. The conversation centers on the complexities of building AI governance teams and navigating the "three-language problem" where technical, regulatory, and business teams struggle to communicate.
Zadeh advises companies to stop searching for "unicorn" candidates who possess all these skills and instead focus on cross-training and rotation programs. Furthermore, as AI evolves from Generative AI to Agentic AI (systems that take independent actions), she emphasizes the critical need to shift from annual audits to real-time, continuous monitoring.
Ultimately, she notes that the most successful AI governance frameworks are practical ones that developers actually use, rather than theoretically perfect models.
Key Quotes
- "We're turning AI risk to possibilities."
- "We should stop looking for a Unicorn... start on career paths, start on education."
- "You can't govern what you can't see."
Takeaways
- Solve the "Three-Language Problem": Effective AI governance requires fluency in business, technology, and compliance. Instead of hunting for an impossible candidate who knows it all, organizations should implement rotation programs (e.g., auditors spending time with tech teams) to cross-train their current workforce.
- Integrate Teams Across the AI Lifecycle: From the initial business intake and legal risk assessment to the developer build phase, cross-functional teams must collaborate continuously to ensure the AI system is technically sound, compliant, and fit for its intended use.
- Shift to Continuous Monitoring: As the industry moves toward Agentic AI—where AI agents execute tasks like booking or canceling meetings—traditional annual audits are obsolete. Organizations must implement real-time monitoring and strict permission controls to catch and correct issues immediately.
- Prioritize Practical Governance: The most effective AI governance isn't a flawless, overly complex policy; it's a practical framework with clear controls and evaluations that developers understand, see value in, and actually follow.
Find Amanda Razani on LinkedIn. https://www.linkedin.com/in/amanda-razani-990a7233/
Follow the FTA LinkedIn Page: https://www.linkedin.com/company/full-tech-ahead/
Visit the FTA website: https://fulltechahead.com/
Check out the Substack Channel: https://fulltechahead.substack.com/
Hello and welcome to Full Tech Ahead. I am your host, Amanda Razani, and I'm excited to have Mery Zadeh here. She is the Senior Vice President of AI Governance and Risk Consulting at Lumenova AI. How are you? Hi, Amanda. Nice to be here. Happy to have you on the show. Can you share a little bit about your background, and what does Lumenova AI offer?
SPEAKER_00Sure. So I have a finance background. I started in risk and compliance. I worked for KPMG for 10 years and was a director and head of risk transformation at KPMG Norway, and then joined Lumenova as SVP of AI Governance and Risk Consulting. What we offer is a unified AI governance, risk, and compliance platform. We help organizations govern, monitor, and scale AI systems safely. So basically, we're turning AI risk to possibilities. I love to say that. Risk to possibilities, Amanda.
SPEAKER_01Wonderful. Well, I'd love to hear more about that. Our topic today is all things AI, and in particular building AI governance teams. From your experience, where are there skills gaps, and how can those be addressed?
SPEAKER_00It's a very interesting topic. I love talking about AI governance, and I think it's an important topic for 2026, because AI governance requires a unique blend of technical, regulatory, and business fluency. You need people who can speak all three of these languages fluently. And the problem is that we're not creating career paths for these specialists; we're just looking for the unicorn who has all three qualifications and skill sets. So I see it as, as I mentioned, a three-language problem. The technical teams have to understand regulation, while compliance teams have to understand the model structure and the data flows. And the business perspective is: what is the risk tolerance, what is the operational impact? But they have to be able to communicate that to the technical teams and the regulatory teams in order for everyone to function and work together.
SPEAKER_01So communication is absolutely key. What tips do you have to improve communication between departments so everyone's on the same page?
SPEAKER_00When we're building AI governance teams, we as organizations should stop looking for, as I mentioned, a unicorn who has all of it, because usually we come from different domains. I come from risk, compliance, and internal auditing; I'm a certified internal auditor. Even my path started there, but I had to start understanding the technical teams' language. If you come from that domain, you have to educate yourself on the technical aspects of AI. You have to understand model structures, AI systems, why a model drifts, what the training data is, and why you're evaluating your AI systems, so you can communicate with the technical teams. So first, don't look for the unicorn; start on career paths, start on education. For example, you can set up rotation programs. If you come from internal auditing and you have to audit an AI system, maybe you need to rotate into a technical team for a while to learn. And if you're a compliance person, you see frameworks, you see a set of regulations we have to comply with; in order to translate that for the technical teams, again, you have to understand what it means for different AI systems. At Lumenova, we actually started building training programs, because nothing existed before. So: rotation, education, and investing in career paths.
SPEAKER_01Absolutely. I think more companies now than ever are realizing that they need to provide their own training in order to get employees with the skill sets they need.
SPEAKER_00Definitely, definitely. And it's not easy, because AI is evolving every day. The frontier models are evolving every day, and the evaluation criteria change, in the sense that sometimes we don't even know which regulation we're going to comply with. If you're going for NIST, that's just a guideline, which is different from implementing ISO 42001 or looking at the EU AI Act. And an interesting problem going forward is how global organizations can comply with all the different regulations. But going back to the topic: definitely invest in educating your already existing people.
SPEAKER_01Having worked with so many business leaders, what is the best advice you can give them to have success in this area?
SPEAKER_00When we're talking about implementing AI governance, I have a lot of talking points. But in terms of skill sets, I would say build cross-functional teams: technical, business, compliance, framework, and legal people together. Let's say you're building an AI system internally; it can be a chatbot. Best practice is to take it through a lifecycle. You start with an intake process, a questionnaire that the business side initiates. Then you should have a risk assessment that legal and risk people, the second-line people, are involved in, and those two groups have to communicate to make sure the output is correct. Then you get to the build phase, where the developers come in and ask: what are we building, and why? So from phase one, intake, to phase two, build, even at the very beginning of building an AI system, they have to be able to communicate. And as an organization, we have to build the lifecycle of an AI system this way, because we know the outcome will be best at the end when the teams have communicated and worked together, making sure it's technically correct, complying with the right regulation, with audit-ready documentation, and built for the intended use.
SPEAKER_01Absolutely. Well, the other part of this is that AI is constantly evolving, and so are the regulations. What do you envision for the future of AI and regulation, say, a year from now?
SPEAKER_00So we have the EU AI Act, which will be enforced starting in August 2026. Then in the US, we have the Colorado act and other state-level acts, plus guidelines like ISO 42001 and the NIST risk management framework. I think we will have more regulation. In Europe it's different: they already have a regulation and have to comply with it starting, as I mentioned, in 2026. In the US, the approach has been a bit different. We started with less regulation to keep the road open for innovators; we didn't want regulation to be a showstopper. But as AI evolves, we will probably need more regulation in different areas, because this is not the AI of 2023. We're not talking about traditional AI, or even generative AI, where we were worried about the output. Until now, we'd give a chatbot a prompt and worry whether the output was correct: is it representative, is it hallucinated? But in 2026 the focus will be more on agentic AI, where we worry whether an agent has the right permissions and stays within what we gave it to do. For example, if you have an agent that is supposed to book meetings for you, and it does so perfectly by canceling your other meetings to optimize your calendar, it's doing its job, but to make sure it's actually doing the right thing, you need the right controls in place. You need continuous monitoring. That's basically one of the key learnings from 2025: in general, you need continuous monitoring.
So the annual audit that worked perfectly for traditional AI, and maybe a little for Gen AI, is not going to be workable going forward. You definitely need real-time monitoring and continuous observability. Another thing, as I mentioned, is that agentic AI will require frameworks that make real-time authorization decisions, not just post-hoc reviews. Again: real-time monitoring, with alerts and dashboards that show you how your model is performing, whether it's hallucinating, and whether it's actually doing what it was intended to do, in the right way and within the right risk.
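The "real-time authorization, not post-hoc review" idea can be sketched as a guard that checks every agent action against an allowlist and records the decision for a monitoring dashboard. This is a minimal illustration under assumed names (`AgentGuard`, `authorize`), not any real agent framework's API.

```python
# Hypothetical real-time authorization guard for an agentic-AI system:
# every action is checked against explicit permissions the moment it is
# attempted, and every decision is logged for continuous monitoring.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentGuard:
    allowed_actions: set  # the only actions this agent may perform
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str, detail: str) -> bool:
        decision = action in self.allowed_actions
        # Log allow AND deny decisions so dashboards surface issues immediately,
        # rather than waiting for an annual audit to find them.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "detail": detail,
            "allowed": decision,
        })
        return decision

# The calendar example from the interview: booking is permitted,
# canceling other meetings to "optimize" the calendar is not.
guard = AgentGuard(allowed_actions={"book_meeting"})
assert guard.authorize("book_meeting", "sync with legal") is True
assert guard.authorize("cancel_meeting", "free up calendar") is False
```

A denied action here never executes; the log entry is what a real-time alert or dashboard would be built on.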
SPEAKER_01Yeah, that was so helpful. Thank you for sharing your insights. Well, if there were one key takeaway you could wrap it all up with and share with our audience today, what would it be?
SPEAKER_00In general, what I always say is to start with mapping your AI systems, because you can't govern what you can't see. A simple way to start is with an AI inventory, making sure you have an overview of all the AI systems in place. I would also say: invest in education and in cross-functional teams, to make sure they are communicating. Because if you want to succeed in this era, you need the right humans with the right education and the right understanding of AI to help make it more reliable and trustworthy. And the organizations that will succeed with AI governance in the future are not the ones with the perfect AI governance, perfect policies, perfect structure, and the perfect tool; they are the ones that understand what the teams need in order to work together. We don't want a perfect AI governance; we want governance that practitioners actually use. We want the control sets, the evaluations, and the monitoring set up by legal, compliance, and AI governance teams to be something the developers actually use and follow. It was a long closing, but what I would say is: start with mapping your AI systems, and then make AI governance practical so that developers will use it and see the value in it.
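An AI inventory of the kind Zadeh recommends can start as something very small: a record per system of what it is, who owns it, and a coarse risk tier, so governance has the overview needed to prioritize controls. The field names and risk tiers below are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical minimal AI inventory: one entry per AI system, recording
# what it is, who owns it, and a coarse risk tier. "You can't govern
# what you can't see" starts with exactly this kind of overview.
inventory = [
    {"name": "support-chatbot", "owner": "customer-success", "type": "gen-ai",      "risk": "limited"},
    {"name": "credit-scoring",  "owner": "finance",          "type": "predictive",  "risk": "high"},
    {"name": "calendar-agent",  "owner": "it",               "type": "agentic",     "risk": "high"},
]

def high_risk(items):
    """Systems needing the strictest controls and monitoring first."""
    return [s["name"] for s in items if s["risk"] == "high"]

print(high_risk(inventory))  # ['credit-scoring', 'calendar-agent']
```

Even a spreadsheet-level inventory like this gives the cross-functional team a shared list to attach risk assessments, controls, and monitoring to.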
SPEAKER_01Great. Thank you so much for all the tips and advice for our listeners.
SPEAKER_00You're welcome. I hope it's useful.
SPEAKER_01Yes, and thank you for being here. And thank you to our audience. Please share any questions or comments you have below in the comment section, and I will try to respond. Until the next podcast, have a wonderful day.