Follow The Brand Podcast with Host Grant McGaugh
Are you ready to take your personal brand and business development to the next level? Then you won't want to miss the exciting new podcast dedicated to helping you tell your story in the most compelling way possible. Join me as I guide you through the process of building a magnetic personal brand, creating valuable relationships, and mastering the art of networking. With my expert tips and practical strategies, you'll be well on your way to 5-star success in both your professional and personal life. Don't wait - start building your 5-STAR BRAND TODAY!
You Can't Govern What You Can't See: The Shadow AI Crisis with Daniel Ikem
Responsibility breaks where AI moves fastest, and that’s exactly where we go today. Grant sits down with Daniel Ikem—strategic operator at the intersection of emerging technology, intellectual property, and public policy—to unpack how shadow AI, data limits, and legal gray zones collide inside modern organizations. From boardrooms pushing Copilot to teams quietly pasting prompts into other models, we trace how governance cracks form and why documentation, auditability, and accountability must evolve as quickly as the tools.
Daniel shares firsthand insights from big-tech partnerships and from founding the Diverse IP Alliance, where he’s helping HBCU and underrepresented students build fluency in AI and IP. We examine the core challenges leaders face: capturing tacit knowledge that models can’t see, preventing biased historical data from influencing outcomes, and defining ownership of outputs when proprietary data mixes with external systems. We also tackle the jagged frontier of agentic AI—who’s liable when autonomy kicks in—and the geopolitical reality that makes “slow down” easier to say than to implement.
You’ll walk away with pragmatic steps to act now: set clear policies on approved models and data access, capture critical processes that were never written down, design human-in-the-loop review for high-impact decisions, and build a living risk register that survives model updates. We compare U.S. uncertainty with GDPR and the EU AI Act to show where global benchmarks can guide you before domestic rules arrive. Above all, we make the case that governance is not just compliance—it’s strategy, trust, and long-term resilience.
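For listeners who want to turn those steps into something concrete, here is a minimal, hypothetical Python sketch of an approved-model allowlist plus a risk-register entry that goes stale when the model updates. All names, versions, and policies are illustrative, not from the episode.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative allowlist: the only model leadership has approved.
APPROVED_MODELS = {"copilot"}

@dataclass
class RiskEntry:
    system: str          # which tool the risk concerns
    model_version: str   # the version the risk was assessed against
    risk: str            # plain-language description of the risk
    owner: str           # an accountable person, not a team alias
    reviewed_on: date    # last human review

def violates_policy(system: str) -> bool:
    """Shadow-AI check: anything outside the allowlist is unapproved."""
    return system.lower() not in APPROVED_MODELS

def needs_review(entry: RiskEntry, deployed_version: str) -> bool:
    """An entry goes stale the moment the deployed model moves past the
    version it was assessed against: the 'survives model updates' test."""
    return entry.model_version != deployed_version

entry = RiskEntry("copilot", "1.2", "IP leakage via prompts", "j.doe", date(2025, 1, 15))
print(needs_review(entry, "1.3"))   # model updated, so the entry is stale: True
print(violates_policy("chatgpt"))   # outside the allowlist: True
```

The point of the sketch is the shape, not the code: every risk is tied to a model version and a named owner, so an update forces a re-review rather than silently inheriting old sign-offs.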
If you care about AI governance, IP risk, bias, and building a talent pipeline that reflects the communities your systems will serve, this one’s for you. Subscribe, share with a colleague who’s wrestling with AI policy, and leave a review with your top governance question so we can tackle it next.
Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates, visit 5starbdm.com.
And don’t miss Grant McGaugh’s new book, First Light — a powerful guide to igniting your purpose and building a BRAVE brand that stands out in a changing world. - https://5starbdm.com/brave-masterclass/
See you next time on Follow The Brand!
Framing AI And Accountability
SPEAKER_00: Welcome everyone to the Follow The Brand Podcast. This is your host, Grant McGaugh. I'm out here in Omaha, Nebraska. We're going to have Daniel Ikem. He's out in Washington, D.C. He's telling me right now it's colder in DC than it is out here in the Midwest. Believe it or not, true story, it has happened. And we're going to have a conversation today about some things I think are very, very important. I want people to understand this: technology doesn't fail in a vacuum. It fails at the point where responsibility becomes unclear. We need to understand this as AI accelerates. Leaders everywhere are being asked to move faster, to innovate harder, to compete globally, often without fully understanding the legal, the ethical, and the strategic consequences of the systems that they are deploying. And the risk is very real when it comes to a lack of governance, clarity, and foresight. So today, my guest, Daniel Ikem, is a strategic leader who works at the intersection of emerging technology, intellectual property, and public policy. He has led global partnerships, advised organizations on AI governance and go-to-market strategy, and is now deeply focused on expanding access to AI and IP education, particularly for communities that have historically been left out of these conversations. So, Daniel, would you like to introduce yourself on the Follow The Brand podcast?
SPEAKER_01: Yes. And of course, thank you for having me, Grant. So, yes, as you mentioned, my name is Daniel Ikem, based here in Washington, DC. I serve as the founder of the Diverse IP Alliance, which is focused on supporting and educating diverse students, particularly those at HBCUs, in understanding the intersections between technology, the law, and policy as well. Because, as you've mentioned, the world is changing. I think it's changing faster every day, and I really just want to make sure that those who have historically been left behind aren't left behind again in this ever-evolving landscape. So yeah, thank you for having me.
SPEAKER_00: This is important. I want people to really frame what has been happening as AI has come out. We've heard we're going to get some more governance around this, we're going to have some more laws and regulation around it. But every time you start trying to frame something, the technology goes beyond it. Like, wow, we didn't really see this happening or that happening. This is changing how law is practiced, how certain governance systems are put together. And what you said earlier about how, if you're in a certain community, you're not completely represented in some of these things when it comes down to law, to strategy, and to where AI sits. I think right now there might be a big misunderstanding that leaders have about AI governance. Can you just help us frame the landscape from your lens of what this looks like?
Daniel’s Background And Mission
What AI Governance Really Means
SPEAKER_01: Yes. So I think the best way to describe it is really the challenge that big tech companies are having right now. I previously worked at Microsoft, managing partnerships between Microsoft and other technology companies. And when ChatGPT came out, it's almost as if a bomb went off in the room, because there were a lot of different ways to view the situation. And this all ties back to AI governance, because when we think about just the word governance, we think of the government, we think of regulating a system. But the challenge with artificial intelligence, even at just the LLM level, is that it can have an impact on every and all decisions, strategies, and touchpoints. And I think that's the biggest challenge. When anyone says AI governance, I think people just go to the models: okay, we're going to use this AI model for these use cases. But the challenge, at least from an organizational standpoint, is that people may come in with their own AI models. It's like, okay, we're going to use Copilot here, or they're going to tell us to use Copilot, but really I'm using ChatGPT to get my work done. The term for that is shadow AI: the leadership is saying, you can use it, but tell us how you're using it, and then everyone is really doing their own thing. And the challenge with that, in addition to just the misalignment, is you can't really track what's happening, because none of the work that's being done is in the systems that the leadership expects middle management and everyone else to use for review and documentation. And that's just from a work hierarchy standpoint.
There's still the aspect of, depending on how large, how old, or even how complex your organization is, we have to think about how all of this data is being collected and used to then fit into this AI model. Even if everyone is using the approved tool, you still have to be feeding it the information that existed before. And there are maybe two to three problems with that currently, as I just touched upon with age, complexity, and the size of the organization. Not everyone has access to all the information within the organization, not even the CEO. The CEO can't know what every single employee is doing. That ties to both age and size. But also, not everything we do, and this is even more general to AI as a whole, is written down anywhere. The best example I like to give when explaining this to people is: explain to me how you know how to read and write. Write that down on a piece of paper. That's one of the current gaps in AI capabilities: everything is really just based off of images and text. But the world is more than just images and text; there's a whole 3D world that we live in. And it's going to take some time, and a lot of thinking and resources, to break through to that space. I believe the gentleman's name is Yann LeCun, who works at Meta or used to work at Meta. He's always saying LLMs are not the way to go. And I think he, as someone in the field, understands that everything can't just be based off of text, because you need that physicality, which ties back into robotics and whatnot, to really have a full picture. So I would say that's the second major issue in regards to AI governance.
You can have people who are aligned, to address the first part, but then you still have to understand where the real limitations or considerations are in terms of the data. Now, the third thing I would say, and this ties directly to complexity, and back to the first part too, is you have to now consider everything that your organization does, even the things you take for granted. Because pre-ChatGPT, or in the pre-LLM era, it was just, oh, well, someone can do this, or maybe no one is doing it, but you didn't have to worry about a system, like an agentic AI model, maybe doing something automatically with limited oversight. And so now you run into the issue of, well, what are we missing, or what are we not considering? And how do we then apply that to basically every job that is done within your organization? But even more so, how do you balance how much AI, or how powerful an AI, you need for what you are doing today against what you want to do tomorrow? And then how do you balance that with improvements to the models, as you just touched upon? Because I think initially that was shown as a benefit of AI models: you don't have to worry about updating your systems; once the model updates, it just magically knows how to do everything you asked it to do even better. But the problem is you still have to go through everything you've asked it to do, and there's still a jagged frontier component as to what it does well versus not well, because we also have to keep track of that.
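Daniel's "jagged frontier" point implies a practical habit: re-evaluate every task after every model update instead of assuming uniform improvement. A toy sketch, with invented task names and scores:

```python
# Hypothetical evaluation scores (0.0-1.0) per task, for two model versions.
evals = {
    "v1": {"summarize_contract": 0.91, "classify_tickets": 0.78, "extract_dates": 0.85},
    "v2": {"summarize_contract": 0.95, "classify_tickets": 0.70, "extract_dates": 0.88},
}

def regressions(old: dict[str, float], new: dict[str, float]) -> list[str]:
    """Tasks where the 'magically better' update actually got worse."""
    return [task for task in old if new.get(task, 0.0) < old[task]]

print(regressions(evals["v1"], evals["v2"]))  # ['classify_tickets']
```

Two tasks improved, one regressed: exactly the jagged profile that makes "it updated, so it's better" an unsafe governance assumption.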
Shadow AI Inside Organizations
SPEAKER_00: I think, Daniel, we're in uncharted territory. We've done things a certain way, and I want people to think about it like this. The world at one point in time, and some parts of the world are still like this, was very agricultural, an agricultural society. And we did it a certain way, where the laws and regulations and the language that was used were geared to that agricultural economy. Then we moved into a manufacturing world, and those laws changed, and we became more urbanized. And if you really start tracking law and governance, you start to see these are radical changes. They had to come up with new laws, basically, because there were no laws for the things that were happening in a manufacturing world, let's say. And then as we got into what you call the information age, it changed even more. It's like, wow, this is really morphing and growing. Now with agentic AI systems, these are autonomous agents, basically, that can think and act and change behavior. This is different, folks. I want you to understand what an agentic AI system is gearing up to become, as we start looking at artificial general intelligence, or AGI. Now, who's responsible for this robot? All of a sudden it's like, no, I'm not going to go left just because you told me to go left. I'm going to circle around, go right, and jump out of the box. You're like, whoa, whoa, who's responsible? And I think this has been challenged in a court of law when it comes to something like this: what if your AI has to make a choice between running someone over or avoiding an accident? Who's responsible for that? Is it the car manufacturer? Is it the robotics manufacturer? Is it the person who shouldn't have been in the way?
I don't know, because these are questions that have never been litigated before. And then, to your point, speed. Because, all right, let's say you come out with something that makes sense. We've got governance, we're ready to put it into play, but it's always in a reactive mode. Now you've got something completely new that's been released, let's say, into the wild, that doesn't behave like anything else that has come out. So we're in this hyper-advanced age of things changing rapidly: speed, responsibility, governance. I mean, here's the question: how do we govern something like that? Do we slow innovation? What do you think we need to put in place that can help us as we grow into this new reality?
SPEAKER_01: Yes. I mean, ideally the answer would just be to slow down, but there are too many factors in play. It's not impossible, it's actually very possible, but it's not aligned to goals from a political and geopolitical standpoint, or even a business standpoint. Living here in Washington, DC, talking to people or going to events where people are having these same questions, you can say let's slow down, but the response to that is always going to be, well, what about China, or what about some other country that is also developing AI? In most cases it's about the US versus China. And then there's how that plays into where the economy is right now. Most of the economy, and I apologize for the background noise, is really driven by the Magnificent Seven: Microsoft, Nvidia, Amazon, and the others. So even if we did slow down, that may trigger a recession, unfortunately, just because of how sluggish the rest of the US economy is, especially as we see articles about the development of AI data centers everywhere. So we're kind of in a race, like the movie Speed with Keanu Reeves, where you can't slow down or the bus is going to explode.
SPEAKER_00: Yeah, it's an interesting situation that we're finding ourselves in. We can't brake, we can't slow down. We've got to hope that maybe human ethics and human morality will begin to self-govern, to a certain degree. Because let's get back to that one point. In a business, they're going to have their own AI, which happens even today. A company has their own proprietary data; this is the company's data. When you sign on to be an employee, the contract even has language saying, hey, whatever content is created in relation to the business belongs to the business, right? And then there's personal data. They don't care what's on your personal computer as long as it has nothing to do with you impinging upon the rights, the copyrights, the trademarks, whatever is deemed intellectual property of that company. Now we have AI. To your point from earlier, now you're coming to work. I'm a ChatGPT guy, you say I'm a Claude guy, let's say, but the company is using Copilot: no, we use Copilot, we don't use anything else. Of course, they're going to be co-mingling the data, the intellectual property of the company as well as your personal data. And how do you discern what is what?
SPEAKER_01: In terms of the outputs, or in terms of the usage?
SPEAKER_00: Yeah, just who owns it. Who's the owner of that?
Data Limits And Missing Tacit Knowledge
SPEAKER_01: I mean, realistically, the laws still exist as they are. So for things like trademarks or copyrights or even patents, it is what is registered for that company. But to the point I think you're getting at: well, said tool, ChatGPT or Claude or whatnot, created something that looks exactly like, or very similar to, what our IP is. And in recent cases this year, there were holdings by courts in favor of Anthropic, I believe, in regards to, oh, well, we bought the books and we scanned all the books and ripped out the pages and all this stuff. The court held in favor of Anthropic because of how transformative the output was, and even said, and this was actually the first time I had seen something like this, that the plaintiffs made the wrong argument. They went about it the wrong way, because they focused on "you took our books and then you used them for your data and your training," when the better argument, at least according to the judge, would have been "the use of your technology impacts our industry; you're overloading the industry with the work that you're outputting." So I think a better analogy, at least from an external standpoint, outside of the business, is this: if you're dealing with, for example, countries such as China, which develops a lot of products, and you're trying to make things domestically, then it's like, well, we need something that can help us against the glut of whatever product this is, so that we can maintain our internal market. Internally, I think it's more a matter of how a company first learns that another company is using their property. And I think they have been doing this.
If you're Marvel, for example, and you're dealing with a model that creates images, you put in a prompt that says, well, create an image of Iron Man, and see how close to fidelity that is. And then it's like, here's the exhibit, the evidence of what we're claiming in court. Because realistically, everything's connected; you have to be able to prove what you're going to court about. For some things that's a lot simpler. For other things, such as text, because that is just communication, it's going to be a lot harder, especially given the non-deterministic nature of AI, in the sense that you can type in the same thing five times and get different answers. So it's not one-to-one input and output.
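The non-determinism Daniel describes (same prompt, different answers) comes from sampling the model's next token at a nonzero temperature. A self-contained toy illustration; the tokens and scores are made up:

```python
import math
import random

def sample_next(logits: dict[str, float], temperature: float, rng: random.Random) -> str:
    """Toy temperature sampling: softmax over logits, then one random draw.
    With temperature > 0, the same input can yield different tokens."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(v - m) for tok, v in scaled.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # float-rounding fallback: return the last token

# Made-up next-token scores after some prompt.
logits = {"Iron": 1.2, "Spider": 1.0, "Bat": 0.8}
rng = random.Random(0)

outputs = {sample_next(logits, temperature=1.0, rng=rng) for _ in range(20)}
print(len(outputs) > 1)   # same input, several distinct outputs

greedy = {sample_next(logits, temperature=0.01, rng=rng) for _ in range(20)}
print(greedy)             # near-zero temperature collapses onto the top token
```

This is why proving that a model "copies" from text is hard: the same prompt is not guaranteed to reproduce the same output twice.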
SPEAKER_00: What if you just got lucky and generated something that was similar to somebody else's copyright? On that intellectual property question, I think there are going to be new laws written; there are still some things where we'll see what it all looks like. I'm going to change the conversation a little bit, because you do work, especially around HBCUs, with people who maybe haven't been exposed to some types of technology over time, but who now have this opportunity, and price is usually not a barrier, finance is not a barrier, it's there. What are you doing? What is your goal in that respect?
SPEAKER_01: So, in regards to my goal, currently I want to support at least a thousand diverse students, whether they're law students or just students in general, in understanding how this technology works, and technology as a whole, because obviously AI will impact existing and new technologies, but then also, what are your rights, especially from a data and privacy standpoint? Obviously data and privacy have been major issues for a long time, just with the advent of the internet and social media and your phone, and people wondering if it's listening to and watching you and all this stuff. But AI automates that, as we've just talked about. And then even more so from a societal standpoint. Having worked at one of the largest technology companies, and also currently being a law student myself, I understand where the gaps are in terms of people like myself being aware of what's going on. We're very underrepresented in a lot of professional fields, or at least a lot of high-paying fields, and not just as students, but even as business owners, which is what I focused on at Microsoft. So I really want to close this gap so that we're addressing it across the board. It's not just once you're ready to start a business or you're going to law school; even as you're considering what you want to do, or how the world is changing around you, you're starting on the same footing as everyone else, so to speak.
Autonomy, Liability, And Speed
SPEAKER_00: You know, I think that's very important, what you just brought up. I like how you brought up the pipeline problem. I know you're deeply involved in that. You want to expand AI, or I'd say IP, education for law students. I think that's very, very important, because people should understand that AI draws everything it does from a dataset. If your information is not in that dataset, it can't come out with answers that are more culturally competent, I would say, or understand all the nuances of different societies. You've heard of things like British law, which is for the people who typically live over there, in the circumstances that are there. But that same law is not universal, because there are different circumstances in other parts of the globe. These are things we need to understand, because situations and variables are different. Even though right now we're enamored by AI and what it can do, understand that it all comes from a dataset, and that dataset defines its limitations. Let me give you a case in point. There was an organization, I think in Alabama or somewhere like that, working on a court case. They asked the AI to come up with some exhibits for this particular case. But because the data it was built on came from the 1960s, the 1970s, the 1980s, it was very biased. It showed a lot of inconsistencies, but the AI doesn't know this. So when they start asking, should Grant be charged with this crime?
Well, it's coming from a very biased standpoint, where traditionally Black people were definitely overcharged in different circumstances; the law was just not in balance. And now they're utilizing this AI, but this has nothing to do with the AI itself; it has a lot to do with the data. If you don't clean the data, if you don't have data integrity, you can potentially recreate a bigger monster that was supposedly solved 30 to 40 years ago and release it back out into the wild. So I think what you are doing is important, understanding it from a legal standpoint and a regulation standpoint. And I guess my question for you is: what is one thing that business leaders should start doing now, before regulation forces their hand?
SPEAKER_01: So I would like to say one thing before answering the question. Given that the United States does not have any federal AI regulations, there is going to be a need to understand, especially if you're a global business, how different countries around the world, especially the European Union, deal with data, particularly through GDPR and the EU AI Act. Here in the US, there are no federal laws. There are a few state laws, and as I'm sure you've seen, the president has just come out with an executive order saying that states cannot create AI laws, to support innovation. To answer your question, because this all ties together from a tech and a legal standpoint: what businesses need to do is figure out what it is that they want to be. Even if the technology gets better, or maybe it doesn't get any better, what is it that you're looking to be through the use of this technology? Because AI's claim to fame is that it's a genie: you ask it a question and it can answer for you. But there are still a few issues, even in that concept. One, it needs context; it needs to understand what it is you want it to give you. And you have to be more specific than you have ever been in your life. Even when I use AI, I'm mentally prepared to have to go through something multiple, if not dozens of, times, just so it fully understands what I'm looking for. The other thing is, as we know, AI systems hallucinate, so you can't just go off with whatever it gives you. And I'm not sure, do you know? Have you heard why models hallucinate?
SPEAKER_00: Oh, yeah. There's model drift in a lot of models, because there are a lot of factors and variables. There's timing that has to take place. So if it can't get an answer, or a bit of data, fast enough to deliver the total output, it'll make things up, right? And I also understand that over time, it'll drift. So if you had a line of thought you were working on yesterday, the next day it does not always have instant memory of exactly what you were talking about, unless it's trained properly to look back at that data and bring it forward. Context is probably one of the biggest problems right now in AI modeling. Remember, at the end of the day, AI is a calculator. That's what it is. It takes everything and turns it into data, bits and bytes, very quickly, and then puts it together using correlations and patterns, right? Correlations and patterns, not context, not consciousness. It might simulate consciousness, but it's definitely not that. In certain use cases, that's very good. In other cases, that's very bad, especially in the legal aspect, where the context of how things happened, or your interpretation of certain things, can be very skewed. So you've got to be very careful.
Geopolitics And The Pace Problem
SPEAKER_01: Exactly. Because in addition to AI drift, especially as a model gets more data, but data that came from its own systems, there's also the issue that AI models are designed to give an answer under any circumstance. AI models can't say "I don't know," because that wouldn't be providing the right answer, and they can't say "I'm going to look into this," because that's not a real answer. And then, to your bias point, which is true, there's the component of how old the data is and how frequently it comes up in the training. Something that's very new may not necessarily come up the way something that's old, or at least has appeared in the data multiple times, does. Of course, there are tools such as RAG to search the internet and pull that information in, but that's a twofold issue: some of the internet is now being updated by AI itself, and not everything on the internet was true to begin with. So there's bias and misinformation. So for businesses, and I guess even just for individuals within organizations, you have to really start thinking as a researcher, or at least operating as one, because you now have to verify and confirm every piece of information that comes across your eyes and ears, whether it's for your day-to-day or for school or any situation where you're not reading something that was printed before the year ChatGPT came out. Because that's just the reality we're going to be living in. From a career standpoint, we're still going to have jobs. AI can't take everyone's job, because then how does anyone get paid? Where does the economy exist?
But in terms of the skills that people may need to have, and how that escalates up to businesses, it's research. Because to the point you made before, and I really wanted to touch upon this, going from agricultural to industrial to where we are today has just been a matter of abstraction. We're taking ourselves out of the equation in a direct manner. We're not in the fields, we're not on the farms, we're not in the factories for the most part, and now we're trying to take ourselves, at least as a society, out of maybe the research, the decision-making, or just the cognitive standpoint. So will we have to be managers of the AI tools? Yes, but we kind of already are, because we're using these things to answer questions for ourselves. It's like having an assistant or a junior analyst, so to speak. But I think that just means, if you have all the free time from not having to dig into all the details, how high-level a strategy can you now come up with for the tools you're using, to get to the point that you want to be at? And that's the context of the situation.
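Since RAG came up a moment ago: at its core it just retrieves relevant documents and prepends them to the prompt. Here is a deliberately naive sketch using word overlap in place of the vector embeddings a real pipeline would use; the documents and query are invented:

```python
def overlap_score(query: str, doc: str) -> int:
    # Naive relevance: count shared lowercase words (a stand-in for
    # the embedding similarity a real RAG pipeline would compute).
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by relevance and keep the top k for the prompt.
    return sorted(docs, key=lambda d: overlap_score(query, d), reverse=True)[:k]

# Invented corpus standing in for an up-to-date internal knowledge base.
docs = [
    "The EU AI Act classifies systems by risk tier.",
    "GDPR governs personal data processing in the EU.",
    "Company holiday schedule for 2025.",
]
query = "What risk tiers does the EU AI Act define?"
context = retrieve(query, docs, k=1)[0]
prompt = f"Answer using only this context: {context}\n\nQuestion: {query}"
print(context)  # the AI Act document wins on word overlap
```

Daniel's twofold caveat still applies at this step: retrieval only helps if the corpus being retrieved from is current and trustworthy.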
SPEAKER_00: These are interesting times; the level of creativity should escalate. Because now a lot more people than before can get, for instance, Harvard-grade information and coursework, to a certain degree, that can help them teach themselves. YouTube now is like a little micro-learning platform. I can learn how to, you know what, I did it the other day. Hey, I'm going to change out the headlights on the truck. How do I do that? I just pull up a video showing somebody actually doing it, and then I go out and do it myself. This is amazing technology. The point is we have to find the balance. The human race has to evolve with this; we have to find how we coexist together. This is like learning to ride a bike when you're little, right? You've got to first get the training wheels, you're going to fall a couple of times, you're going to scrape your knees, you're going to get back up, and then over time, you're actually going to start riding that bike, you and the bike together. I think that's where we're at with AI. AI is the bike. We've got some training wheels on it. Maybe that's the governance, right? We're going to fall over a couple of times, but somebody's going to help pick us back up. And then eventually we get the wheels off and we'll just be rolling, doing our thing. But that's a process over time. I want to leave people with this, because I think it's very important. When you look back at movies in science fiction, at Star Trek and Star Wars, you see that in our imagination we're already operating with super-intelligent robots, let's say, or super-intelligent computer systems. Sometimes we use our fears and these things go off the rails and become the enemy. Sometimes they're the best thing ever.
Everybody remembers C-3PO and R2-D2 and how they operated with their human counterparts, right? And even in Star Trek and things like that. I think we're getting to that. We're going to find a balance and a harmony in how to use, as you said earlier, AI as an assistant. But we have to understand we can't hold on to the old and then expect the new to just work like it always did. The law and governance, all these copyrights and trademarks, how we define these things may change over time.
IP Ownership And Model Training
SPEAKER_01: Yeah. No, it's funny you brought up the bike example, because I was a bit older, maybe eight or nine, when I first learned how to ride a bike, so I didn't even have training wheels. But to your point about how these tools are going to keep changing (and I apologize, you had just said something very poignant that I really wanted to touch on). Oh, actually, I do remember, because you talked about regulations in pretty much every other field. I have a background, or at least a degree, in biotech regulations, so I understand how the FDA was created. Surprisingly, it was primarily just to prevent people from selling things that were not what the label said they were. But for AI, that didn't happen. Once they created ChatGPT, or at least the free version of ChatGPT, we just let it out into the wild. And I remember attending a session once where someone made the same analogy I'm making now: every other industry is regulated to an extent, except for AI, for some reason. I think that's what has really caused a lot of the challenges. It's not that we can't slow down; well, we can slow down, but we have not been given the opportunity to really analyze how any of these tools work. And I think this even hurts the AI companies themselves, because they're stuck in a permanent race to compete with one another. It's like a feedback loop: we have to create a new system, which everyone now has to understand, before we've gotten any rules or regulations in place, or even a full understanding of the tools. But that doesn't help the business either, because you can't keep creating new products when you still need to make money off the old products before you get rid of them.
SPEAKER_00: Yeah, we have a capitalistic society. That's the thing; that's how we grow. We make new things, and at the same time, you don't want too much government oversight in the things that you're doing. Look at our social media. It really came on the market around, what, '09? By 2010 it really got explosive. It's been 15 years, and we're still only just getting to regulating it. I think it was Australia or New Zealand that said, hey, a teenager can't even use social media. And I think a lot of that is not because of the tech, it's because of society itself. We have to take some ownership around accountability, responsibility, and maturity about what exactly we're saying and doing. Are we mature enough, to your point earlier about the bicycle? Can we ride a bike or can't we? Have we been trained to ride it responsibly? For some people, yes, you can do that. For some people, it's a challenge; it depends on what that is. AI was released, and you can generate incredible text, incredible video, incredible audio, pictures. It's an amazing thing that would have taken a lot more time before. But the laws just aren't at that level. I mean, call it what it is.
SPEAKER_01: Yeah. No, I agree. That's primarily the reason why I decided to go to law school, originally in regard to biotechnology, but now because of AI. Because there is always going to be that gap, but at least from a policy standpoint, we can create frameworks that could be adopted into law. There was this one, I think it was a congressional hearing, where someone asked Mark Zuckerberg, so, how does Facebook make money? And Zuckerberg just leans in, like he's kind of confused as to why he's being asked that, and he says, "We sell ads." And I think that should be the bare minimum of understanding what you're dealing with. How do you make money? How do you not know what every website on the internet does? Just to be honest. So it's going to take a while, but I think, with the current and future generations, we're at least in a better place. To your point, it's been 15 years. Everyone makes that comparison between AI and social media. We're still trying to figure out how social media works, but we can at least see where our shortcomings were with social media as they now apply, potentially, to AI. And to your point about the capitalistic system: you do want things to be perfect, but you also understand that the problems are actually how you get paid. Like you said, the law, lawsuits, are extended; that's how lawyers make money. We have to be skilled to really create sustainability within our world. So there are always going to be things to solve for.
As I've said in other conversations, we just have to start figuring out what we want to implement now so we can have these starting blocks sooner rather than later.
SPEAKER_00: It's going to be an interesting time as we go forward. I want to thank you again for being on the show. You've got to let people know how to contact you. And I want you to restate one more time exactly what you're doing, specifically for HBCUs, so they understand why they should be getting a hold of you.
Educating HBCU And Diverse Students
SPEAKER_01: Yes. So the best way to get in contact with me, despite my being in the tech space, is LinkedIn; it's the only platform I use, which was also kind of required at my previous job. You can follow me or reach out to me on LinkedIn; I'm very open to communication. As for what I'm doing now: I'm the founder of the Diverse IP Alliance, a nonprofit that helps educate students, primarily law students, but students overall, because this impacts everyone, on the intersection of technology, law, and policy, especially as it impacts diverse and underrepresented communities. And I'd like to end on this last note, because we did talk about the United States and, to an extent, the EU and China. One thing to consider, at least in what I've seen, is that when we talk about how AI is being used, even at a national level, countries such as the United States and China are very focused on being cutting edge: creating the biggest models, the best technology. But when we consider countries in the Global South, countries that are not as developed or not as big economically, they're more focused on how to use this technology to solve issues. So as we look at all these things, we might see that smaller countries come up with better frameworks, because they're not focused, and maybe can't be focused, on creating these big technologies. They're going to focus on: how do we survive and exist in these systems? So, yeah.
SPEAKER_00: Interesting dynamic as things change. If you really follow history, as new technologies are released, geographies change. They just do. I don't know what they change into, but that's what's happened. So I want to thank you again for being on the show. I want to encourage the entire audience to see all the episodes of Follow Brand on my website at 5 Star BDM. That's the number 5, S-T-A-R, B for Brand, D for Development, M for Masters, dot com. I want to thank you again, my friend, for being on the show.
SPEAKER_01: Thank you. Thank you for having me.
SPEAKER_00: Oh, you're most welcome. You take care. I'll talk to you soon. All right.
unknown: Bye.