Meru Data's Podcast
There is tremendous value to simplification. To quote Steve Jobs, “Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it is worth it in the end because once you get there, you can move mountains." In this series, we explore how people and companies achieve simplification.
Webinar Audio - How to Evaluate AI Use Cases
Join Priya Keshav and Reena Bajowala on this month's podcast as they discuss simplifying for success in AI. Priya Keshav is the founder and owner of Meru Data, where she helps organizations worldwide simplify and sustain privacy and AI governance programs. Reena Bajowala brings deep experience in AI, cybersecurity, and privacy, advising clients on evaluating AI risks, implementing mitigations, and building compliant governance frameworks. In this episode, they discuss how in-house counsel should evaluate new AI technologies, key AI trends, emerging laws and regulations, their impact on compliance programs, and practical approaches to managing AI risk through effective AI governance.
Welcome to our podcast on simplification. Before we get to our topic, my name is Priya Keshav. I'm one of the founders and CEO of Meru Data. Our focus is on operationalizing privacy and building privacy-by-design practices within companies, whether through tech implementation or process optimization. Previously, I was a managing director at KPMG in its technology practice in the Southwest US, and I write about privacy and AI governance regularly. Today, our special guest is Reena Bajowala. Welcome to the show, Reena.
Speaker 1:Thank you. Thanks for having me.
Speaker:Could you tell us a little bit about yourself?
Speaker 1:Sure. So I'm a shareholder in the Chicago office of Greenberg Traurig, and I practice in the areas of AI, cybersecurity, and privacy. I would say the bulk of my work is in the space of AI, which has grown dramatically over the last couple of years: assisting clients, and I'm sure we'll get into further detail, in navigating and mitigating AI risk, implementing governance programs and procedures, vendor management, and compliance with the ever-changing AI legal landscape.
Speaker:So as you mentioned, our topic today is how in-house counsel should be evaluating new AI. As adoption of AI grows, there is increasing recognition that enabling trustworthy AI is important. It's December, and everywhere you look you see predictions that spending on AI governance will increase significantly in 2026. Organizations are shifting from Wild West experimentation to scaling AI more safely and predictably, so understandably, governance is becoming a critical priority for them. But what does an AI lawyer actually do? Take us through the typical work you do with clients.
Speaker 1:Yeah, absolutely. So a client might contact me and say, hey, we have a lot of use of AI within our organization. Maybe they have an AI use policy, maybe they don't. Typically it's the legal department, often the privacy group specifically, that has responsibility for ensuring that employees' use of AI tools, whether procured for the company or brought in on their own, doesn't create legal and business risks: exposure of data, exposure of trade secrets, usage of data or of AI outputs in ways that are risky. So clients will come to me and ask, first of all, what does the law require in terms of an AI governance program? And also, what have you done with other clients to put processes in place so we can get our arms around the ever-growing use cases of AI within our organization? We'll then talk through all of the components of an AI governance program. That's a really typical client that is not developing AI themselves. The other flavor of client I have is one who is developing an AI tool and wants to understand where that tool sits within the landscape of laws that look at, for example, high-risk AI systems, and to determine what compliance measures should go along with that, so they can set up their terms and conditions appropriately and then interface with their customers on where those obligations will lie.
Speaker:So you talked about regulations. Is regulation for AI new, or has it always been there?
Speaker 1:There have been aspects of regulation that apply to AI in place for years. Privacy laws, for example, have addressed automated decision making for years; under the GDPR, Article 22 sets out rules around automated decision making. The same goes for intellectual property. However, the proliferation of this new technology has also brought with it specific laws that either, as I call it, comprehensively regulate AI systems, or regulate very specific aspects of AI systems: for example, training data, the use of particular types of sensitive data, or use cases that are high risk, like HR use cases. Those laws that are specific to AI are more recent and have really been coming online over the last few years.
Speaker:I was going to ask you about trends, and I think we should also talk about the December 11th White House executive order. But first, what are some of the trends you see in the bills being introduced for AI?
Speaker 1:Yes, so a lot of bills are being introduced around generative AI content in various ways: whether content needs to be marked so that the recipient is aware it was generated through an AI system, or, where there's interaction with an AI system, whether there should be some sort of disclosure along those lines. We're seeing those types of disclosures bleed into other spaces too, like the marking of AI-generated content in advertising: if an AI-generated person appears in an ad, we've seen a bill on that in New York. So a lot around generated content. We're also seeing bills around training data and transparency, and there has been some litigation in that area as well, so training data is an area of interest. Then there are certain high-risk uses where we're seeing bills introduced. Human resources is definitely one of those places, where there is an effort to figure out what place AI has in the recruiting and employment life cycle. There is also pricing: we're seeing some laws and bills coming online relating to pricing algorithms, which has been an area of interest for certain federal regulators in the past as well. And then, separately, there's been an effort to define high-risk uses more broadly, beyond HR. There are a couple of laws out there now that set out a laundry list of high-risk uses that might have significant impact on consumers and that regulators want to keep a closer eye on.
Speaker:Um, but how does the new White House executive order kind of change some of these trends? Do you think how will compliance uh look like given the push to sort of regulate at a federal level and making some of those state laws uh not relevant anymore? I mean, I guess there's it's probably gonna get challenged, uh, but it also creates a lot of uncertainty. What would uh what do you recommend people do from a compliance perspective given the uncertainty around the US? And we'll talk about Europe maybe afterwards, yeah.
Speaker 1:Yeah, absolutely. So I think the underlying risks, like the risk of being deceptive to a consumer or of having a biased result in your employment practices, exist before and after this executive order. In the short term especially, I don't see the executive order changing where clients are focused in terms of putting together a governance program, because regardless of the specifics of the legislation, you need a taxonomy of those risks. Laws may regulate this risk or that risk, but the underlying risks are there. Certainly we're keeping close tabs on how things progress in terms of what the executive order has set forth, but for the short term, I think the efforts are going to continue apace.
Speaker:Arguably the most comprehensive AI law is the EU AI Act. Can you tell us a little about the EU AI Act, and what compliance should look like given that it will be fully in effect by August of next year?
Speaker 1:Yeah, so there is actually an omnibus bill before the EU that may push that timeline back a bit, so we're keeping close tabs on that as well for the clients who are looking at compliance. The EU AI Act took many forms in its drafting over the years and actually ended up being the second comprehensive AI law to pass, after the Colorado AI Act, which is kind of interesting. But it has a risk-based system. Basically, the law says: here are some AI practices that we are prohibiting because they bring unacceptable risk to the fundamental rights of EU residents. Some of those prohibited practices involve exploiting a vulnerable population, or certain subliminal messages; there are all sorts of nuances around that, and there are publications out that it really requires taking a look at. Those prohibited practices cannot be introduced into the EU market. Then you have certain high-risk practices. Those are allowed, not prohibited, but they've been identified as areas in which there could be a meaningful, significant impact on individuals and their fundamental rights: areas like education, certain employment use cases, law enforcement use cases, and biometric information, which is an area of interest under the EU AI Act. There are nuances to each of those, but there's a list of high-risk uses, along with safety components and other aspects as well. For those high-risk use cases, there are two key parties, developers and deployers: the developer is the one that creates the system, and the deployer is the one that puts it out for use.
Both of them have obligations under the EU AI Act in terms of guardrails: a quality management system, a risk management policy, risk procedures, ongoing monitoring, data governance, and documentation and registration requirements. So they have additional requirements above what would be the next category, minimal risk. For minimal risk, some types of AI systems will still have compliance obligations. For example, if you have an AI system that interacts with an individual, you need to make sure there are appropriate disclosures. There are also rules around what the Act calls general-purpose AI models, which you might think of as large language models. Those are not automatically considered high risk; that was a topic of very serious and heated debate in the EU, but general-purpose AI models ended up outside of that space. So for those minimal-risk AI systems, there is a lower compliance burden. There is a lot to do in order to comply, and a lot of those aspects are already in place: the milestone deadline for prohibited systems has already come into play. The high-risk systems deadline is the one that, under the omnibus bill, may potentially get pushed, because otherwise it would be coming up pretty soon.
Speaker:So as you consult with clients, what types of tools do you typically see in vogue now? And what are some of the associated risks with the tools that are being used more broadly?
Speaker 1:Yeah, I feel like every week there's a new tool I'm hearing about. But note-taking apps have recently become so ubiquitous, and they create some really interesting questions. If a client has an enterprise note-taking app, they have more control over the terms and conditions, the data sharing, and the data retention policies around it. If you show up to a meeting and a note-taking app appears because an individual you've invited from outside your organization uses it, you don't have that visibility into their terms and conditions. With the automatic creation of transcripts, there are questions around how to make sure that protected and privileged information is treated appropriately and classified. There's also the accuracy of the notes; I don't know how many times you've gone back and reviewed notes that were automatically created, but I think it's probably not that common. So note-taking apps are a big one, both within the corporate environment and outside it, in doctors' offices and other venues. We're also seeing a lot of work around RAG tools, retrieval-augmented generation tools that sit on top of internal LLMs. Those require a lot of investment by an organization because they are more secure, but we are starting to see a lot more enterprise versions that use a known knowledge base and then use the LLM to prompt against and pull information from that knowledge base.
Those are really great tools. We're also seeing HR tools for reviewing applications and for interviewing, and there are some interesting aspects there: if you have an AI tool that is helping your employees interview candidates, is it also monitoring your employees? You have kind of a double aspect there. One of the interesting things is that there's no one body of law to consult for AI, even for the note-taking apps. If you say, okay, you can't use note-taking apps, well, there's been an argument that under the ADA some people want note-taking apps and need them as an accommodation. So you have to issue-spot much more broadly than you would if you were looking at a privacy issue, where you have a defined set of privacy laws to apply.
Speaker:No, I agree. And I think the complexity is only going to increase with the use of agentic AI tools, which you keep hearing about. I think they'll mature and show up more and more as use cases in 2026, which will complicate things, because they'll probably have broader access and capabilities than the use cases we're seeing right now.
Speaker 1:Yeah, I was just talking to one of my colleagues, who is putting out an article soon about software seat licenses and how agentic AI interacts with those. As much as I spend my time thinking about these issues, there are just going to be more and more issues at the intersection of AI plus, you know, insert other area here, that are going to create areas of friction to work through.
Speaker:Agreed. So how does a company manage risk with AI? In your mind, what does a gold-standard AI governance program look like?
Speaker 1:Yeah, a gold-standard AI governance program has a centralized, cross-functional decision-making body: legal, IT and information security, developers and research and development folks, different people from across the organization, evaluating new AI use cases and determining whether a use case can move forward and, if so, with what mitigations. So you have that centralized structure, and then a policy that sets out the process for employees and personnel: first, to cabin the unapproved use of AI, and second, if you want to use a new AI tool, how you bring that to the company and what the procedure is. What I've seen work very well is having a list of questions. It's very similar to putting together a privacy compliance program; these are really a lot of the same components, which I think is why AI compliance often lives in the privacy world in terms of the in-house personnel doing that work. The questions suss out: is this a higher-risk system, and what kinds of mitigations would you need? What are the data inputs: are they sensitive? Are they regulated in some way? Are they protected by contract? Is this customer information? Is it private information under privacy laws? What's the operation of the AI system, and what visibility do you have into it? Is this data training the system? What kind of assurances do you have about the training process? And then the outputs: how are they being used? Are they used for some sort of interaction with consumers?
Are decisions being made? What type of human oversight and what type of disclosures are being provided? Is output really being reviewed? It's kind of interesting, because there are so many different use cases that coming up with a list of questions is sometimes a little challenging; it has to fit both a note-taking app and an internal agentic AI tool that's going through and doing something operational. So: having that list of questions, having a policy that sets it out, and then ultimately having an inventory of the use cases that are allowed and their mitigations. The last piece is training for your organization on what is and isn't AI, what requires going through the process, and what the ways are to mitigate risk. Right now you do see a lot of unilateral turning on of features that have AI incorporated, and somebody might not even understand that they're using an AI tool.
Speaker:I agree. And I think the hardest parts are going to be, like you said, cross-functional. One, because the question I always get is: this is easy, we don't need all of this complexity just to turn on this one thing. But the devil is in the details, and knowing which use cases are really not risky and which ones are problematic is an art. Going through the rigor for the problematic ones and skipping it for the easy ones requires a lot of judgment. And the other part is cross-functional, like you said: it touches so many areas, and if you don't bring in the expertise from those areas, you're not really looking at those use cases. Sometimes companies like to work in silos: I'm the intellectual property lawyer, or I'm the privacy lawyer, and that's not my area. So having that cross-functional team working together effectively is going to be a critical piece too.
Speaker 1:Yeah, absolutely. And the one other aspect I'll mention is vendor management: having processes around how you vet vendors. Even if a software vendor says they don't use any AI today, are they turning on something tomorrow? So what does the vetting process look like? Do you have a separate vendor addendum relating to AI? Do you have specific terms? If you're often working on the vendor's paper, do you have a checklist and a playbook on how to negotiate through it? A big part of vendor management, I think, is really the back and forth of sussing out what data is going where, what is going to be owned by whom, and how you're going to manage the risk to your organization, because that third-party risk is ever present. Most companies, at least larger ones, have a vendor management program, so the question is how this fits into that existing process.
Speaker:Yeah, absolutely important. And I think AI should be part of every vendor questionnaire, because AI is being turned on pretty much everywhere. But do you have any specific tips from a contracting perspective, beyond what you've already mentioned about data and where it's going? Any other specific contracting tips when it comes to vendor management?
Speaker 1:Yeah, you want to look at data ownership for sure: whether it's parsed out, whether there are restrictions. Again, understanding your internal risk issues is really important. If this is a vendor that potentially has access to certain sensitive information, what are the guardrails around whether they can utilize that information? If they can access it, can they incorporate it into their training databases? You probably don't want them to, but that's something that's really important to look at. You also want to look at compliance with laws, because we have a changing landscape of AI laws. If you're a customer leveraging an AI tool but a regulator is going to come to you with questions about it, you want access to the provider of that tool for information. In particular areas, you're going to want testing records or other records and documentation that you could furnish to a regulator. So those are a couple of the different areas.
Speaker:We've been talking a lot about looking at AI from a governance perspective. Maybe we should shift gears a little: almost all of us are being encouraged to use more AI to keep up with current trends. So, as a lawyer who's using AI, what rules apply to lawyers using AI?
Speaker 1:Yeah, so there's a really good, quite comprehensive ABA opinion on this. We are subject to all of our professional responsibilities when it comes to AI, just as we would be if we were not using AI. Issues like confidentiality of client data are obviously paramount. For example, at GT we have invested a lot in enterprise tools that don't connect to the internet, and even then we get specific client consent. It's very important that we protect that confidential information through the security of the systems and through contractual safeguards. There's also the duty of candor: talking to clients and making sure they understand what kinds of tools you might be using. And the duty of supervision is major. We've seen this come up time and time again in the news, unfortunately, where a problematic brief gets submitted. I'm not a litigator anymore, but I do remember Shepardizing those briefs; something got missed in the process. If somebody hands you a draft brief with citations, those need to be checked. Even if you got the approvals and used AI to generate the initial draft, you need to go back and validate. As lawyers, anything we submit to the court, we're submitting as having verified its accuracy, and Rule 11 says we don't cite to cases that don't exist, we don't cite to law that isn't at least a good-faith extension of existing law, and we don't make arguments that aren't in good faith. Part of that is making sure to cite real cases and to do that validation work. So the duty of supervision is really substantial too.
Across the board, most of the key model rules of professional responsibility are going to apply, just in certain use cases, to AI. So it's going to be interesting to see how that all continues to play out.
Speaker:So how is AI complicating the practice of law, do you think?
Speaker 1:I think it's hitting different groups in different ways. Some people may be utilizing these tools without fully appreciating the risks, and then you get the brief that's submitted with hallucinated citations. You have some who are afraid to use it at all, so they're not going down that path. I would caution against that as well, because this is the way the world is going. I've personally had clients tell me that my proposing, hey, we can do it this way, or we can use our internal AI tool to do this piece of it and validate it, is a value add that they pay attention to in outside counsel. So that's really important. But then there's also the complicating factor of junior associates. If you have an AI tool that can spit out a first draft of a motion to dismiss, you don't have that junior associate putting together that motion to dismiss, so something is kind of lost in the process, and I've heard a lot of feedback from junior folks concerned about that piece of it. I think we need to continue to pivot. It'll ultimately be similar, although at a bigger scale, to eDiscovery. EDiscovery used to be done so differently. You might say, well, if you don't know how to code a hard-copy sheet, how are you going to know how to do document review on the system? Well, you just learn how to do it in a new way, and that's the new normal. But we're definitely in a transition period, so lawyers in different roles are looking at this differently. And one other group I'll mention is judges.
I did a presentation to a group of judges on a case where a litigant submitted deepfake evidence. The feedback from the judges was, oh my gosh: the reliability of the evidence, evaluating that, evaluating sanctions issues, already stretched resources, and now everything is more complicated. How do you vet that? Well, you have to ask about the metadata; you need a custodian, whoever supposedly took the video being submitted as evidence; you need a declaration from that person; different things like that. So I think different subgroups within the legal industry are going to be dealing with AI in a lot of different ways.
Speaker:But that's similar to pretty much all professions, right? I think there's going to be a lot of confusion, because change causes confusion, but change can be good too. You hear that 30, 40, 50 percent of services will be done by AI. It may be that our jobs will change and morph into something different, because, like you said, checking the reliability of evidence wasn't something we had to do before; now you can question almost anything because of the fake videos that can be created, which makes it more complicated. That's a new skill and a new process that we didn't have. It may just morph into something different. But it may also give us, as some say, four-day weeks; maybe we'll all work a little less because some of the work will be taken over by AI. Who knows? To some extent, we have to get comfortable with change; that's probably the big message. So what are some tips you would give to other lawyers who are looking at practicing AI law, or at AI governance, as an area of expertise?
Speaker 1:Yeah, I think there are a lot of interesting training programs out there. I haven't participated in any of them; I came into this more organically. But as time goes on, we're going to start seeing more classes covering this topic. I know the IAPP has a certification, and there are different options like that. I think the most important thing, though, is to pick a couple of sub-areas. Right now I'm, I guess, an AI generalist, but I foresee that, just like privacy, we're going to start having specialists in different areas. So I would suggest: okay, I'm really interested in the use cases that apply to infrastructure companies, or I'm really interested in these types of AI tools. Pick a couple of those, because you just don't know what's going to be in vogue in the future, and go really, really deep to understand some things. Have a surface-level knowledge of the landscape, but then dive deep into certain areas; that will make you indispensable, because this is changing so rapidly. Someone who can put together a really good AI contract has a specific skill and is going to be really marketable, and the same goes for someone who really understands bias testing for HR AI tools. If you can carve out a niche, you're going to be really valuable going forward.
Speaker:As we look to wrap up our podcast, what are some of the big hot-button issues in AI governance for 2026 that you think will be relevant?
Speaker 1:I think vendor management is going to continue to be major, and really making sure those processes are working as desired. There's also the continued effort to capture new tools. There's so much shadow AI usage going on within organizations, and I'm getting more and more clients reaching out where the cat's already out of the bag: somebody has gone off policy and uploaded data, and you're trying to bring it back, which is really difficult. So continued education of individuals not to go rogue internally on AI usage is going to be really important. And then the third area, a type of AI that I think will start getting more and more attention, and it started this year, is companion AI. When we talk about agentic AI, you talk about companions, AI therapists, and, more fundamentally, how we interact with these AI bots, which I think are going to be more and more integrated into our lives. Those are going to create a lot of questions, including ethical questions, and the need for additional guardrails to ensure that risk is minimized to the extent possible while still retaining all the benefits of AI.
Speaker:Any other closing thoughts before we wrap up the show?
Speaker 1:Well, it's an exciting area to practice in, so for those interested in pursuing it, I would encourage it. I think there is going to be a lot of interest, and a lot of work in the area, because we're going to be facing some really thorny issues. As we see more global legislation pass, there are going to be different areas of focus and different needs, and a lot of decision making will be required from companies on how to proceed with some really cool new technologies.
Speaker:Agreed. It's a great area to focus on. So thank you so much, Reena, for joining me. Happy holidays to you. Thank you, and same to you. I look forward to continuing this conversation in 2026.
Speaker 1:Absolutely.
Speaker:Thank you. Bye.