
The Catalyst by Softchoice
A podcast about unleashing the full potential in people and technology.
When people and technology come together, the potential is limitless. But while everyone is used to hearing about the revolutionary impact of tech, it can be easy to forget about the people behind it all. This podcast shines a light on the human side of innovation, as co-hosts Aaron Brooks and Heather Haskin explore and reframe our relationship to technology.
The Catalyst by Softchoice
The case for open source AI: A conversation with Neural Magic’s Brian Stevens
There wasn’t anything in open source that was more than a toy a year ago, but that’s all changed according to today’s guest. Brian Stevens, CEO of Neural Magic and former CTO of Google Cloud, is helping redefine the landscape of AI.
In this episode, we explore the intersection of AI, open-source innovation, and ethical responsibility. Neural Magic’s groundbreaking software enables businesses to harness powerful AI without the need for costly hardware, all while maintaining strict ethical standards. Stevens shares his insights on the vital role of open-source models in fostering transparency, the challenges of navigating AI’s ethical landscape, and the future of responsible development. He discusses why the democratization of AI matters for business leaders, and how enterprises of all sizes can get started with large language models.
Featuring: Brian Stevens, CEO of Neural Magic and former CTO of Google Cloud.
This episode is brought to you by Google Cloud. To learn more, visit softchoice.com/google
The Catalyst by Softchoice is the podcast dedicated to exploring the intersection of humans and technology.
This episode is brought to you by Google Cloud. From cloud storage for big data to cloud AI and security infrastructure, Softchoice is your partner in leveraging every opportunity. Speak to Softchoice today to see how we can help you use Google Cloud to break down silos, unlock new insights, and power innovation. To learn more, visit softchoice.com/google.

You're listening to The Catalyst by Softchoice, a podcast about unleashing the full potential in people and technology. I'm your host, Heather Haskin. As AI continues to evolve, it's becoming a powerful tool that can shape and maybe even dictate our future. But along with this remarkable power comes the fear of misuse and loss of control, hence the ongoing debate between proprietary and open source models. If you ask my next guest, there is no debate. Brian Stevens is the CEO of Neural Magic, a company that's at the forefront of making AI more accessible, efficient, and ethical. He says the key to creating a better future for all is embracing open source innovation. Brian began his professional journey with aspirations of being a carpenter before transitioning into the tech world. He rose to prominence as the CTO and Executive Vice President of Red Hat, and later as the Vice President and CTO at Google Cloud. Now, as the CEO of Neural Magic, Brian leads the charge in democratizing AI. His vision for the future of AI is not only ethical, but also wide open.

Brian, thanks so much for being here with us today. Very exciting to have you on our podcast, The Catalyst. You've been called a changemaker. I'm really interested to know your purpose. If you were to put together a purpose statement, what would be yours?

I think it's something that shaped itself from the very beginning, when I got out of college and took my first job. What I realized is that I always cared about what the purpose of technology was and the impact that it could make.
And I always wanted to be part of that. Software development is really exciting stuff, but making the impact, technology as a tool, is the part that I always gravitated to. So I think it would always be to make an impact, but have fun and do good while doing that.

That makes a lot of sense. You are at Neural Magic now. I read a little bit about your background as an aspiring carpenter, then you were at Google, and now you're there. Tell me a little bit more about that and some of those passions.

We started about five years ago, a technology and a professor that spun out of MIT. So some pretty good pedigree and skills. But the part we first tried to solve for was: if AI is so important, it shouldn't just be available for these high-dollar use cases. Meaning, okay, if it's going to save you a million dollars or replace whole roles, sure, but if it's even useful for the little things, then we should make it available to solve the little things as well. That's part of why, if it's open, then it's free, et cetera. But the particular piece we set out after along those lines was this: almost everybody in the world has a cell phone or a laptop, and every business has existing servers. We wanted to make these AIs work great on their existing infrastructure, and that just wasn't the case years ago. So we focused really on the Intel, AMD, and ARM CPU level, where people weren't even attempting to run these AI models, because we wanted to bring it to people so they could use it everywhere and not have to go spend $30,000 on a GPU that wouldn't fit where they want to fit it. And so we really built our technology in that paradigm. And then what happened, though, is when the large language models were invented... it really was November of '22.
We talk like we've been doing this forever, but it was November 2022 when OpenAI published ChatGPT, which was a large language model, and that was the breakthrough that they made. It was really incredible, but there wasn't any open capability that somebody could use to go play with it themselves, like on their laptop. There were no models that existed; the models didn't start arriving until the following summer. And the impact large language models are having on users and use cases is so much more powerful than all the models from the past combined. So we immediately added large language model support so you could run them on CPUs. Then what we found was that, because these models are so big, they were really taxing even for expensive hardware to run, these NVIDIA GPUs and other GPUs. And you wouldn't have thought that, because these NVIDIA GPUs are so powerful, they're like supercomputers. But even large language models were hard for them to run, and people were buying bigger GPUs just to run them. We looked at it and said, this is right in our wheelhouse. The competency we have is infrastructure efficiency and model optimization techniques to make the models smaller but just as accurate. And so, with the board's permission, we started on that journey about a year ago. So now we bring inference serving, that's when you put the AIs in production, not just to CPUs but to GPUs as well. And what it does for GPUs is make them that much more efficient and that much more performant, meaning you can do more requests per second, and you can use smaller GPUs where you used to have to use bigger GPUs for the same model. So it's just going to make it easier for customer businesses to get the most out of their infrastructure.
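Brian doesn't walk through the optimization techniques on air, but the "smaller yet just as accurate" idea he describes is commonly achieved with methods like weight quantization. Here is a toy sketch of symmetric int8 quantization in plain Python; this is an illustration of the general idea, not Neural Magic's actual pipeline:

```python
# Toy illustration of post-training weight quantization: store 32-bit float
# weights as 8-bit integers plus one scale factor, cutting memory roughly 4x
# while keeping the recovered values close to the originals.

def quantize_int8(weights):
    """Map floats onto the int8 range [-127, 127] using a single scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9, -0.55]
q, scale = quantize_int8(weights)
recovered = dequantize_int8(q, scale)

# The rounding error is bounded by scale / 2, small relative to the weights.
max_err = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, recovered))
print(q)        # small integers, storable in one byte each
print(max_err)  # at most scale / 2
```

Real systems apply this per layer or per channel to full weight tensors, and pair it with techniques like pruning, but the memory-versus-accuracy trade-off is the same one sketched here.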
Which is going to be really important in this game, because if AI is successful, it's going to be very costly from a CapEx perspective. And so cutting that cost by two thirds or even more through the use of the software capability, I think, is just generally important. It's going to open up where they can use it, even for some of the lower value use cases.

Getting the most out of their existing infrastructure seems to be a huge pain point solver. Not everyone can just toss out what they have and start completely from scratch at any given moment when a new technology comes out that, yes, might increase their business outcomes and help them get ahead of their competitors, but at an exorbitant cost. So that seems to be an amazing solving point for customers. An inspiring backstory that you have, coming from Google, as I mentioned before, and going into Neural Magic. You've had this desire to build, and you seem to be doing so again in the AI open source space, and we've talked about some ethics as well. I'd love to know, do you think this is a battle between open source and proprietary? Do you think there's a winner? Do you think that's something we need to worry about?

I feel like I've seen the movie before at multiple levels of the infrastructure stack. If you asked me this a year ago, it would have been a hope, with some early signs. But even in the past 12 months, there's been a difference in the model capabilities in open source and the innovation rate speeding up, as well as in the platform that serves these AIs, the inference serving part, which didn't exist. There really wasn't anything in open source that was anything more than a toy a year ago. It might have been good for one user, but not to run an enterprise workload. That's all changed in just 12 months. So I think the de facto will be open for this. And there's still a lot to do.
And there's a lot to do on just how you steward the community around all of this as well. It's not just technical, because every one of these companies has a business concern also. And so business concerns definitely enter into the politics of open source; it's not all kumbaya. But I feel really great about where open is today with AI. And I don't think it's a zero-sum game, open versus closed, but I think what it does do is make closed that much better, because it's not going to go after just the commodity cases. The models that are going to come out have to be really great at performing surgery, say, and not just good at summarizing documents. Right now we're in the early part of what general AIs can do, but I think the proprietary world is going to be taking those general AIs and making them purposeful for a particular domain. It's going to be fun to watch, and that's not inconsistent, because that'll run on top of the open source stack. It'll be fun to watch that develop.

Yeah, it all sounds wonderful. I wonder how it's going to play out, but I'd love to know your thoughts on the ethics behind open source, and how you see that benefiting the business.

What's interesting around the ethics side of open and open source AI is that it shines a light on the models' capabilities, just as I was talking about with those evaluation frameworks. I think these are often overlooked and really important, but all these evaluation frameworks live out in the open as well, and there are lots of them. So as open source models are produced, they're tested against these evaluation frameworks and they're measured. And so it'd be a really great framework to worry about the ethics side of this. Obviously there's going to be a debate over what is ethical AI and what isn't, but for the parts that businesses and people care about, there's going to be a way to evaluate open source models against that. And I would love to see not just the column around how you did at coding or grade-school math; it'd be great to see ones around different ethical evaluation frameworks as well, and assembling those datasets and scoring. I think it would be fantastic for people to care about that as much or more than the primary purpose of some of these models.

Checks and balances are always important, having an outside view. And it also creates trust in the platforms people are using, just to know that there's someone else looking and saying, hey, are you doing that right?

And they can hold you accountable too, because they can do it themselves. It'd be really hard for you to run one of these evaluation criteria against one of these served proprietary models, because you don't have access to the model. You don't even know which model you're asking questions to; it could change in between every request, and it could change in a week. But that's what's nice about these open source models being modules, where each one is out in the open. We focus a lot on the performance of them, how to make them faster and more efficient on hardware, and then you score that publicly. But you could do the same thing for ethics: every model that's out there, in this case in the Hugging Face Hub, could have an ethical score associated with it, and anybody could go run it against that evaluation dataset. Then innovation could happen in that evaluation dataset as well, because that's out in the open. It's not left to some one agency to decide what ethical AI means. It should be left to a community of people that care about that.
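The open evaluation loop Brian describes, a public model scored against a public dataset that anyone can rerun or extend, can be sketched in miniature. This is a toy harness, not one of the real frameworks he alludes to, and the stand-in "model" here is just a dictionary lookup:

```python
# Toy sketch of an open evaluation harness: score any model (here, any
# callable mapping a prompt to an answer) against a shared eval dataset.
# Real harnesses do the same thing at far larger scale, and publishing the
# dataset is what lets anyone reproduce or dispute a score.

def evaluate(model, dataset):
    """Return the fraction of dataset items the model answers correctly."""
    correct = sum(1 for prompt, expected in dataset if model(prompt) == expected)
    return correct / len(dataset)

# A public eval set anyone can inspect, extend, or contest.
eval_set = [
    ("2 + 2 =", "4"),
    ("capital of France?", "Paris"),
    ("opposite of open?", "closed"),
]

def tiny_model(prompt):
    """Stand-in model for illustration only."""
    answers = {"2 + 2 =": "4", "capital of France?": "Paris"}
    return answers.get(prompt, "unknown")

score = evaluate(tiny_model, eval_set)
print(score)  # 2 of 3 items answered correctly
```

An "ethical score" column of the kind Brian imagines would just swap in a dataset of ethics-focused prompts and expected behaviors; the open mechanics stay identical.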
Keeping it in the public's eye. That makes sense. Besides the ethics of the situation, how do we get this to everybody? What innovation or technology would bring AI and open source large language models to everyone?

This goes back to the purpose statement, and it's impact. Our mission is not just to go make revenue from a small set of companies; it's to bring this capability to every company on the planet. Well, we're not going to reach every company on the planet commercially, but every company on the planet should have the benefits of, and the access to, this capability. And if they don't have the means commercially, and there aren't commercial companies to bring that to them, then the only way to bring it to them is through open. So the first part is to create the open source capability that is transparent to them and that they can access. And what you also want to do is make it so simple and so easy. Industries like this often focus on the hard and not on the easy. There are titles in the industry, machine learning engineers and all these things, that didn't exist five to ten years ago, and they exist because it's so hard to use this stuff. But that's changing. If you can make it so simple that it's as easy as writing any application, and that's where it's heading, then you really open up the impact of who this can reach.

I have a teenage son, and I'm always surprised how much more technology he knows than me, because I like to think that I'm pretty aware of the new and up-and-coming use cases for AI. We were talking about some of his homework and he had a question. We're all used to saying, hey, Alexa... I probably shouldn't say that, because she might start talking back to me. But he pulls out his phone, opens up his favorite AI app, types in the question, and answers his own question that he'd asked me out loud, faster than I could. And the answer that came back was actually really insightful. He was in the process of doing his homework; they're reading The Great Gatsby right now at school. So yeah, I was just kind of blown away with how it's already integrated at the school level with the kids, and their teachers are already teaching them how to use it. Having it open to everyone, to have all of our heads in the same pot figuring those problems and challenges out, seems really exciting to me. It almost sounds, though, like a stack in a sense, where the open AI might be something that's called to, and then there might be proprietary on top of that, depending on the use case.

Yeah, I think that's it, because you can take these open source models and then you can train them on your proprietary data. That's really what's considered a fine-tuning step. And that's what the open community even wants. They're not saying every model should be out there and should be free and permissively licensed. They're basically saying, let's build the scaffolding and the layer on which even proprietary innovation can flourish.

That makes a lot of sense to me. And it seems like a world where we can maybe trust things a little bit further if we know that that's going on.

Ready to supercharge your business with generative AI? Dive into the future of productivity and data optimization with Google Cloud. As an experienced Google Cloud partner, Softchoice is equipped to help you build the data governance processes you need to ensure responsible AI specific to the Google Cloud ecosystem of services and infrastructure.
Speak to Softchoice today to join a generative AI accelerator workshop to ensure you're deploying generative AI that improves productivity and data optimization without the risks. These workshops also include education for end users, along with a setup for proprietary data within the Google privacy settings. Discover how to unleash your team's potential and take your generative AI strategy to the next level with Google Cloud and Softchoice. Visit softchoice.com/google to get started today.

This podcast is all about how we're using technology to unleash human potential. I'd really love to know, as a leader, how are you using open AI to succeed and to get your team to succeed?

Interesting, flipping it around. How I think about that is actually less about how they use AI, because they're like your son: they're technologists and they're just using it in their own ways, so they don't need my help at all. But what I want to bring them into is the purpose. Like we talked about earlier, there are a lot of technologists that work on technology for technology's sake without really understanding the use cases or the impact. So what we're really trying to do is not put the engineers in one room and the people that talk to customers in another, but bring the engineers directly into customer scenarios. So they see what customers are trying to build, how they're trying to teach models these new datasets, how they're trying to optimize them, how they deploy them, and it has a first-order effect. They bring that end-user empathy into our organization, so that as we build these open solutions, we're building for people, not based on technical decisions we would have otherwise been making without the voice of the user.

Bringing your engineers right in front of those customer use cases and un-siloing the two teams. That seems like a hard thing to do.

It sounds easier than it is.
It's hard, but you know why it's not hard? Because it's fun. And I think the customers love it. They can ask a million questions, and customers around AI are often very technical, so they win, and they love getting that access. And the engineers always come back excited as well, with a much better understanding of what it's all about. I think that's important.

I'd say some of my favorite conversations have been when you kind of pivot: you came into the conversation for one pain point, and the engineers on that call ask some discovery questions to help further identify the need, and you find, oh, you've got this little molehill here, but there's this whole mountain of things I can actually help you with that you didn't ask about, and I'd love to just share my knowledge with you. Those can be some of the most magical conversations, because it's not a sales pitch. It's more like, hey, what are you doing? Here are the best practices, and we can see you succeeding in this way by doing these things. It's great to be a part of those types of conversations.

And you know what it does, too? We always talked about open being the platform for innovation, at least maybe I did that more than you, but I think you believe it too. Even if the person you're working with as a customer isn't a developer and isn't contributing to that body of open source code, just through that conversation they're actually influencing the future of open source, by sharing what works well, what doesn't work well, and what they need it for. They can actually influence what open source looks like. A really important part of open source is that it isn't just about a community of developers. It's about a community of users, and helping get the end users' voice into it.

Really incredible shaping. I can see that being a big deal.
As we look to understand some of these use cases, is there something top of mind that you've run into again and again, from a storytelling sense, that you might be able to share?

I think we made it simple enough that it's no longer hard to deploy AI efficiently. Yes, there are the chat use cases, but then there are also the ones that integrate into a business workflow, where people are doing a big part of the process: people are reading, and based on what they read, they're doing the next action. So there are a lot of use cases around what an AI could do to enable a person to make a higher quality decision and perform an action faster than they would have otherwise. In many cases, it might be summarizing some supply chain data to understand what the next step is, or whether there are going to be delays in building the product that's dependent on the supply chain. So there are a lot of these cases we've seen where there's data involved that's very much proprietary to the customer's domain, a person is interpreting that data, and they're making a decision. As you can imagine, AIs are just great at that. In some cases, they might need to be trained a little bit on that kind of dataset, like what a supply chain order even looks like, but that's not hard. They fall into the class of, say, summarization of documents, which is a big part of what these AIs do.

So if I'm trying to think through our customers, either at Softchoice or Google or in the technology zone, how do we start? Where do they begin?

Yeah, I think Google's already begun, that's for sure. They were using these AIs, and publicly, for their data centers long ago.
And one of the use cases I love the best that we would talk about was just cooling data centers and making data centers greener and more efficient. Instead of building some smarter AC system, which of course they do too, they basically just used AIs to look at all the telemetry and data that was coming off of the cooling systems, and then let the AIs control the systems after that. And they got a massive reduction in the energy costs to cool a data center. That was a decade ago, thereabouts. The big tech companies are so far along on mastering and applying the technologies. I think the question now is how we reach the S&P 500 and beyond, the companies that aren't tech companies, to empower them.

The big tech companies are using it. So then for Softchoice customers, or organizations that are not quite so large as Google, where do they begin? What kind of innovation would help them?

Yeah, and I think that's why joining this community around open, the community of users and community of developers we talked about, is very much about addressing the usability and the skills needed to apply it. And that's why I like Neural Magic and how we steward our part of the community. It's really all around: come to neuralmagic.com, and yes, there may be business use cases we can engage on, but there are also artifacts there so that even an open developer exploring these large language models for the first time will be presented with easy-to-use resources where they can get started.

People have these fears of AI and of these innovative, amazing things, but when it comes down to it, it's really the people behind it, and what data you're putting into it, and how you're using it, and who's asking the questions. What have you found to be the most exciting thing as you look ahead that you're working towards?
I'm boring: enabling enterprise infrastructure and AI for everybody. So I celebrate all the little milestones along the way. For us, in that mission, the capability didn't even exist a year ago, and now we're actually seeing partners and other entities starting to build their solutions around this open source capability and even bring it to market, bringing it to their customers. Companies that are building AI platforms of their own are starting to bundle the capability that we're working on out in the open and bring it to their enterprise customers that already trust them. So one part is openness, so there's no barrier to access. One part is having entities like ourselves that can bring the technology and the skills to people. And then there's a partner network where this capability becomes a de facto standard, and that's really started to happen over the past couple of months. Boring, but it's an exciting thing for driving the adoption rate up. And it ends up being this flywheel: as that starts to happen, all of a sudden the investment in the open capabilities comes back even more.

No barrier to access, I love that. How does Neural Magic itself benefit from raising that adoption rate? Are you finding further innovations that come out of that?

Yeah, well, it benefits us because by making these capabilities successful, we also aren't the only company creating them. If you get them over the bar where they become popular, then all of a sudden some of the biggest companies in the world are showing up and investing in the same open technologies that we've been working on. So now the platforms and capability that we bring to customers are getting better beyond just the investments that we're making.
And so it's happened before in a lot of areas, but to see it happen in AI, where for a while it looked a little bit like it was only going to be a proprietary world, and to see that's becoming not the case, is pretty exciting.

Very exciting. As we think about some of our listeners and the different demographics of people listening, for the senior IT leader out there, what advice would you have as they look into this world and make decisions?

One, go open. And then two, I think it's a roll-your-sleeves-up kind of moment. This isn't the thing that a senior leader should just outsource by hiring someone. Yes, they should hire skills, but you have to develop an authentic point of view and understanding of the capability, or even the lack of capability, whether it's the open source solutions, preferably, or even the closed source solutions, because today it's very much around the art of the possible: what use cases can possibly be solved. And then the second question is whether your path is going to be doing that in a proprietary way, using proprietary services, or whether it's important for your business that you're able to do it on an open platform. I think this is so important that every senior leader, all the way up to the CEO, should get informed on it. I'd never seen that in the past; CEOs never cared about open infrastructure. And now all of a sudden you're seeing the CEOs really wanting their companies to drive faster. So I think every senior leader has got to get involved: get your arms around it and understand all that's out there. It's part of the incumbency for any leader, and I don't mean just on the engineering side. Even in HR, in every organization, there are applications for this.
And it's got to be driven by the leaders that own functions, and not just by the people that own technology.

So I like to think about the nitty gritty a little bit, get down in there, roll my sleeves up. If I were to think about how they could ask themselves those questions, what are some of the questions that would help them determine which direction to go?

I think it all should start at the use case. Part of what I've advised other companies on is to really start to understand what a day looks like for your staff. We're so multitask oriented. Even if you're in HR, what are the steps that you do on a regular basis? And you start small: which of those steps can be further optimized and made quicker by employing AI capability, especially around all of the things that deal with language, obviously. So much of what we do is reading, writing, using that basis to interact with another system, pulling data out of one system and then going and interacting with another system. I do that. So really take a strong point of view on what those use cases are, against what AIs are capable of doing today. All of that starts even before you implement anything. It's really just setting some small goals, even just around task automation, that can really liberate. It's not about obviating a person; it's about letting that person have the best tools for their mission, which isn't just spending a lot of time assessing documents or next best actions. Just like your son: it really helped him, and it can help pretty much every function. That's why this capability of the large language models is so exciting.

I've looked at that myself, at first asking, oh, is this going to take my job?
No, it's a matter of being good at asking the right questions and utilizing AI as a collaborative tool to get more done. So, going back to starting with a use case and understanding what that day looks like, how does that relate to open versus proprietary, though?

I think the next part is that you actually have to either use an API service, or deploy an AI model service inside of your organization. The functional owner owns the use case, if you will, and then the IT and ML organizations would support delivering that as an AI service, on either an external platform or an internal platform, whatever makes sense for their business model.

I see. So it's calling that API versus hosting that AI model. I see.

Yep. It's very similar, when you think about the parallels, to databases. Do I want to put all my data in a hosted database running on somebody's cloud? Many people do that, and there's a reason why they do. Or do you want to have your database running inside, on your infrastructure, with your performance and cost controlled by you, all your data kept private, and all your users' queries to that database kept private? So it's really a choice at that level, but the analogy is very much how an IT organization or a business would think about managing their data.

That makes sense. Just for clarification for me, open would not be only on-prem?

That's right. Open can be on your smartphone, on the edge. Often, when people are using open solutions, they have a strong point of view on where the AI service is going. It might be near their users; it might be in a certain country for data jurisdictional reasons; it might be on low-power infrastructure where they don't have hardware accelerators. And every company around the world uses cloud.
You know, they have their own applications up on cloud, so it will certainly be running on cloud too, using any of the inferencing capabilities that might be in the cloud already. It ends up becoming their decision entirely. That really does remove a lot of barriers, I would think, especially if you don't have to start fresh and completely throw out what you've already got. Well, as we wrap things up here, I'd love to understand Google's approach to open source, where they're working in that space, and how things are coming together there. Yeah. Google has always been a really great steward of open source. It's harder to tell sometimes from the outside, but when I was at Red Hat, and then when I joined Google, the contributions that they made to open source were ridiculous. They were always making a lot of contributions, even if they weren't necessarily actively involved in the communities. But they did a really important thing for the AI community: they open sourced a large language model called Gemma, which is amazing. And one of the things we did while I was there, not on the AI side but on the open source infrastructure side, is we donated a lot of the open source code they were building to a foundation, and that foundation is now called the CNCF. In the land of open source, it's a really important independent steward of open source and open source communities. And that was all bootstrapped by Google's first contribution of something called Kubernetes. Kubernetes is the thing that powers pretty much every application, whether you're on Amazon, whether you're on Google, whether you're inside a data center, and it's also becoming a critical ingredient for how people manage AI at scale. So they've always been fabulous around open.
It's wonderful to hear that we can trust some of these big decisions, knowing that some of the larger companies out there are also keeping that top of mind. And then, as we look to the end user again, with some of these technology breakthroughs, how do we get AI on the PC? How does that come into play? It's already there. It's already there. Through the solutions that we've built, instead of having to use GPUs, we run it right on a Mac, on a MacBook directly. And then what's also happening is, believe it or not, the CPUs on your PCs are good enough, but now you're seeing a wave of specific accelerators that aren't the CPU itself. Companies are building accelerators that are great at running inference, and part of that is the belief that you may not be doing a lot of AI on your desktop today, but in the future you will be, because it's one of those cases where if you can do it locally, it's just a better user experience than sending all your data to some cloud service somewhere. And that's expensive in bandwidth. Zoom is already taking a lot of bandwidth out of our homes; now imagine shipping all of our AI back and forth too. So these special processors bring AI processing local to the desktop, and there have been a lot of announcements on that. The world of AI, ethics, open source. There's so much to talk about. I really appreciate your time here on The Catalyst, bringing the technology to the people, having people involved in it, and having all our heads together at the same round table. Sounds exciting to me. It's great being here. Thanks very much. When it comes to AI, open source innovations can sound almost too good to be true. The idea of powerful AI capabilities becoming accessible to everyone, not just the privileged few, might seem like a pipe dream, but it's a reality that Neural Magic is actively creating.
I loved what Brian shared about solving the pain point of rip and replace. It just makes everything seem that much more accessible when you know that you're not ripping everything out of your environment, replacing it with something else, and starting fresh. If AI can solve that challenge, what does the future hold? Thanks for tuning in today. If you enjoyed this episode, please leave us a review on Apple Podcasts. See you again in two weeks. The Catalyst is brought to you by Softchoice, a leading North American technology solutions provider. It is written and produced by Angela Cope, Philippe Dimas, and Brayden Banks in partnership with Pilgrim Content Marketing. This episode is brought to you by Google Cloud. From cloud storage for big data to cloud AI and security infrastructure, Softchoice is your partner in leveraging every opportunity. Speak to Softchoice today to see how we can help you use Google Cloud to break down silos, unlock new insights, and power innovation. To learn more, visit softchoice.com/google.