The Macro AI Podcast
Welcome to "The Macro AI Podcast" - we are your guides through the transformative world of artificial intelligence.
In each episode, we'll explore how AI is reshaping the business landscape, from startups to Fortune 500 companies. Whether you're a seasoned executive, an entrepreneur, or just curious about how AI can supercharge your business, you'll discover actionable insights, hear from industry pioneers and service providers, and learn practical strategies to stay ahead of the curve.
Navigating AI Laws: From Global Rules to State Realities
Gary and Scott explore the dynamic legal landscape of AI and machine learning, offering business leaders worldwide a roadmap to navigate compliance and innovation. This episode spans global regulations, U.S. federal shifts, and state-level actions—packed with practical strategies and tech insights.
Global AI Legislation:
Gary starts with the EU’s AI Act (2024-2027), a risk-based law banning mass surveillance AI while enforcing audits for high-risk systems like hiring tools—fines up to €35M or 7% of global turnover. Scott details its tech demands: SHAP for audits, LIME for transparency. The EU’s chaotic trilogue birthed a deal balancing startups like Mistral with big-tech scrutiny. China’s 2023 rules mandate ideological AI and watermarking ($14M fines), while South Korea, Switzerland, and the UAE diversify the race. Gary urges scalable governance; Scott flags audits and APIs.
U.S. Federal Government and AI:
Scott notes that there is no federal AI law—FTC’s Section 5 ($2M chatbot fine) and Title VII tackle bias instead. Gary unpacks Trump’s Jan 20, 2025, EO, axing NIST’s risk frameworks to boost dominance, tied to Project Stargate’s $500B data center push (OpenAI, xAI). Defense AI surges, but Biden’s safety EO is out. Congress stalls—NO FAKES Act passes, accountability lags. Scott warns of agency curveballs; Gary pushes self-regulation with SHAP to dodge scrutiny.
U.S. State-Level AI Developments:
Gary highlights Colorado’s 2026 AI Act ($20K fines) for high-risk AI fairness tests. Scott dives into California’s 2025 laws—SB 1047’s deepfake ban ($10M fines) uses ResNet for detection, AB 2013 demands LLM transparency. Illinois protects teachers, New York audits lending, and Massachusetts’ H. 4123 targets 10 exaFLOP giants like Colossus for safety (misinformation risks). Gary touts TensorFlow Fairness for bias sweeps; Scott stresses modular policies and Dioptra testbeds.
Gary and Scott close with a call to agility—embed compliance, leverage tech, and lead amid lawmaking chaos. Join the global conversation at macroaipodcast.com!
Send a Text to the AI Guides on the show!
About your AI Guides
Gary Sloper
https://www.linkedin.com/in/gsloper/
Scott Bryan
https://www.linkedin.com/in/scottjbryan/
Macro AI Website:
https://www.macroaipodcast.com/
Macro AI LinkedIn Page:
https://www.linkedin.com/company/macro-ai-podcast/
Gary's Free AI Readiness Assessment:
https://macronetservices.com/events/the-comprehensive-guide-to-ai-readiness
Scott's Content & Blog
https://www.macronomics.ai/blog
00:00
Welcome to the Macro AI Podcast, where your expert guides Gary Sloper and Scott Bryan navigate the ever-evolving world of artificial intelligence. Step into the future with us as we uncover how AI is revolutionizing the global business landscape from nimble startups to Fortune 500 giants. Whether you're a seasoned executive, an ambitious entrepreneur,
00:27
or simply eager to harness AI's potential, we've got you covered. Expect actionable insights, conversations with industry trailblazers and service providers, and proven strategies to keep you ahead in a world being shaped rapidly by innovation. Gary and Scott are here to decode the complexities of AI and to bring forward ideas that can transform cutting-edge technology into real-world business success.
00:57
So join us, let's explore, learn and lead together. Welcome to the Macro AI Podcast. I'm Gary Sloper, joined as always by my co-host, Scott Bryan. We're your guides to cutting-edge AI solutions, speaking directly to business leaders worldwide who are transforming their industries and competing on the global stage. Hey everyone, welcome back. Today we're going to scan the legal landscape shaping AI and machine learning — a critical topic for anyone deploying these tools to compete
01:26
locally and globally. And as expected, lawmakers are racing to catch up, and the stakes are high for your operations, from compliance to innovation. So there's a lot to pay attention to. Exactly, Scott. We're diving into three layers today: global efforts, U.S. federal moves, and state-level action here in the States. While we talk through an overview of the activity, business leaders should think about how this wave of activity impacts their strategies.
01:55
Yeah, interestingly, our podcast, the Macro AI Podcast, is getting a lot of international downloads. So an early thanks to our listeners out there for subscribing, and please keep the comments coming so we can keep improving the content. We're always open to topics that you find interesting and pertinent. So whether you're in Milan sipping some espresso or bourbon in Kentucky, let's get into it, Gary. Yeah, absolutely. I echo that — thank you to all of our listeners.
02:24
I saw that eight new countries were added this week. So thank you, appreciate it. Yeah, it's good stuff. So let's talk globally, Scott. You're probably aware of the European Union's AI Act — it's probably the most notable. I believe it was adopted last year, in 2024, and it phases in through 2027. It's a risk-based framework: unacceptable-risk AI covers things like real-time facial recognition for mass surveillance.
02:54
That's completely banned outright. Obviously, there are some narrow exceptions for law enforcement under judicial oversight. But then there are the high-risk systems — AI used for hiring, medical diagnostics, or even autonomous vehicles. They're facing a gauntlet of rules as the EU's AI Act comes into play. So what are your thoughts there?
03:25
Yeah, certainly, Gary, I think so. High-risk AI requires pre-market conformity assessments, ongoing human oversight, and even registration in a publicly accessible EU database to ensure accountability. So you actually have to go in and register your machine learning system. Right. Yep. And the penalties are pretty steep: up to 35 million euros or 7% of your global turnover, which is a synonym for gross revenue here in the States.
03:55
Businesses are really scrambling to keep up, so compliance could mean revamping entire AI pipelines. Technically, it's about traceability: logging training data sets, validating model robustness against adversarial attacks, and using explainability tools like SHAP or LIME to justify outputs. If you want to look those up, SHAP is SHapley Additive exPlanations, which is great for audits or proving systemic fairness.
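The Shapley idea behind SHAP can be sketched in a few lines of plain Python — a toy brute-force version rather than the optimized `shap` library, applied to a hypothetical hiring-score model (the feature names and weights here are illustrative, not from any real system):

```python
from itertools import combinations
from math import factorial

def shapley_values(score, features, baseline):
    """Exact Shapley values by enumerating every feature coalition.

    score:    function taking a dict of feature -> value and returning a number
    features: the instance's actual feature values
    baseline: reference values used when a feature is treated as "absent"
    """
    names = list(features)
    n = len(names)
    phi = {f: 0.0 for f in names}
    for f in names:
        others = [g for g in names if g != f]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Standard Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: (features[g] if (g in subset or g == f) else baseline[g])
                          for g in names}
                without_f = {g: (features[g] if g in subset else baseline[g])
                             for g in names}
                phi[f] += weight * (score(with_f) - score(without_f))
    return phi

# Hypothetical linear hiring score: experience helps, employment gaps hurt.
score = lambda x: 2 * x["experience"] + 1 * x["education"] - 3 * x["gap_years"]
candidate = {"experience": 5, "education": 2, "gap_years": 1}
zeros = {"experience": 0, "education": 0, "gap_years": 0}
print(shapley_values(score, candidate, zeros))
```

For an additive model like this one, the Shapley value of each feature is exactly its own contribution — the attribution an auditor would ask to see per candidate.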
04:25
The other one is LIME, Local Interpretable Model-agnostic Explanations, which really simplifies case-by-case transparency. So Gary, want to talk about how the EU got to its current state? Sure, yeah, absolutely. If you think of the EU's journey, it's ultimately a masterclass in policy chaos. It started with the Commission's 2021 proposal — I think that was a 600-page outline
04:54
of an ambition to join innovation and ethics together. But in the end, Parliament wanted tighter surveillance bans, citing biometric misuse cases like the Clearview AI scenario, which we've seen in the news. And actually, last fall the Dutch government had come after them for illegally scraping facial images online. I believe, from what I last read —
05:24
I'm trying to rack my brain a little bit here — their internal counsel at Clearview had argued that they don't have any business in that country, so they felt that things like GDPR really don't apply to them because they're not doing business in-country. Now, I haven't followed it closely, so I'm not sure where it stands as of today. But if you look at other countries such as France and Germany, they've lobbied to protect their AI unicorns — you
05:54
know, Mistral AI, for example — fearing that over-regulation would essentially cede the ground to the US or China. So you see Mistral announcing last month its private investments of 109 billion euros... I believe the government commits to that. Well, actually, I take that back — I don't think the 109 billion was actually private investment, but the government is actually providing
06:24
infrastructure — the power needed for the co-location of those workloads. So you're seeing this isn't just in one specific pocket; it's becoming a global focus at this point. Yeah. The EU negotiations were definitely pretty brutal — obviously, getting a lot of different countries to agree can be a challenge. But I think it was back in December of 2023, so pre-2024,
06:52
there were some 36-hour sessions that finally gave birth to the deal we know today. Inside that deal, a lot of the smaller firms get a break — they have much more simplified reporting. But if you've got over 45 million EU users — users inside of Europe — like Meta or Google, you're under the microscope. So enforcement of the new regulations starts in mid-2025
07:21
for the bans, with full compliance by 2026. So businesses are going to face a pretty hefty cost tsunami with legal teams, third-party auditors, and fines that could be 10 to 20 million euros for a Fortune 500 player. Plus, the act is extraterritorial — if you sell into the EU, you're liable, even from New Hampshire or Silicon Valley. Yeah, that's a good point. I mean, contrast that with
07:50
China's approach. Their 2023 interim measures for generative AI services really prioritize ideological alignment — think banning dissent via AI chatbots, like a Grok gone rogue. Watermarking is mandatory, and providers must scrub data sets for illegal content pre-training. So for businesses, it's a high-stakes bet: access to
08:17
1.4 to 1.5 billion consumers, offset by constant audits and fines up to a hundred million yuan — about 14 million US dollars. So it's pretty hefty. Yeah. Technically, it's kind of like the Great Firewall of China on the internet side: they're pushing real-time content filters. Imagine natural language processing models screening outputs on the fly — those are the real-time content filters.
08:47
Elsewhere in Asia, South Korea is drafting an AI Act for 2025, largely inspired by the EU but a little softer on startups. Switzerland's national AI strategy leans more on voluntary ethics codes, like ISO standards. And the UAE has decrees that actually fast-track AI — a little bit different approach — via tax-free zones and projects like the Falcon LLM out of Abu Dhabi.
09:16
But geopolitically, for the major players, it's really a three-way race: US innovation, China tightly controlling data, and the EU really wrangling with their values. Yeah, that's a good point. You know, I caution businesses not to sleep on global coordination. If you look at the Council of Europe's AI Convention, they're targeting 2026, aiming to lock in human rights protections
09:46
across nations, I believe. And for business leaders, this really plays into a unified AI governance framework — think scalable policies that can account for cross-border data flows. It's really about human dignity and individual autonomy. Part of their construct — I think they've got several initiatives and mission statements around this — is equality and non-discrimination, safe innovation,
10:15
and transparency and oversight — a couple of the big ones. Technically, I would say prep for audits: adversarial testing, bias metrics, and compliance-ready APIs are really your new best friends. The convention also has language about giving notice when someone is interacting with an AI system and not with a human being. So having that transparency, ethically —
10:44
if you called up an organization and there was a disclaimer saying you're not speaking with a live person — organizations may want to think about that. What does that imply for their business and also their customers? Because different generations of folks may want that, or may not want that. Right. Okay. Yeah. So let's take a look at what's going on in the U.S.
11:11
This is really playing out in front of all of us on the news — and obviously there's a little more under the covers. In the US, there's no sweeping federal AI law yet; we're still working under a patchwork of existing laws and how they might apply to a machine learning or AI system — like FTC Act Section 5, from 1914, for unfair practices in or affecting commerce,
11:41
or Title VII from 1964 for workplace bias. So obviously, technologists need to consider output bias — a key concern in any AI system. And as we're hearing now, the Trump administration's 2025 pivot is really all-in on deregulation. It's really America-first for AI supremacy and competing. And that's interesting —
12:10
it's developing a little bit differently from the way the internet came about in the early 70s, when DARPA laid the groundwork. In this case, you've got the private sector leading the way with AI, and the laws and the government are going to be catching up. Yeah, that's a good point. I mean, you recall just in January, the executive order — that was the
12:36
one removing barriers to American leadership in AI. I had to write that one down to make sure I remembered it in its entirety. But it essentially removed and axed federal red tape, like NIST's voluntary AI risk management framework. And the order specifically calls for the development of an Artificial Intelligence Action Plan, and it establishes a policy to ensure
13:03
the US maintains and strengthens its global AI dominance — kind of what you were referring to. Also, recall Project Stargate, the big announcement from the White House — a corporate venture as well. I believe it came the day after the removing-barriers order: a $500 billion investment announcement over a decade, partnering with OpenAI,
13:30
powerhouse public cloud providers such as Oracle, and lastly xAI. The focus is less about rules and more about outpacing China's tech innovation and chip stockpiles. Think of the H100 GPUs, which I believe list at around $30,000 per GPU and pack something like 80 billion transistors. So this investment should fuel growth in multiple verticals here in the US.
13:59
Yeah, think about defense. You've heard some politicians recently calling for a Manhattan Project-style focus on AI-driven warfare — everything from autonomous drones to predictive logistics. Really, there are endless applications in defense. But let's rewind to Biden's 2023 executive order on Safe, Secure, and Trustworthy AI.
14:28
It mandated red-teaming for models above — I think it was 10 exaFLOPs — and data sharing with NIST. A totally different approach. And as expected, on day one Trump's team rescinded that one. Yeah, I remember that. I always defer to history to keep all this in perspective. The 2020 National AI Initiative Act still invests, I believe, around —
14:54
I think it's about a billion dollars annually into NSF and DARPA research, so it's still seeding breakthroughs like quantum machine learning. The challenge I think we'll continue to see is in the branches of government — Congress is completely gridlocked on regulation. The Algorithmic Accountability Act, reintroduced in 2023, would require detailed impact assessments of automated decision systems. Say, for example: why did your AI reject a mortgage?
15:25
You'd then have to inform consumers of how those decisions were made. At a very basic level, for years, when consumers were denied a credit card, the lender had 30 days to provide an adverse action letter, for example. Meanwhile, some states have passed their own specific laws: Colorado and Utah passed consumer protection laws, and even New Hampshire passed a law that protects its citizens from AI surveillance by the state.
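The adverse-action idea Gary and Scott describe — explaining why an AI declined an application — is often operationalized as "reason codes": the features that pushed the score down the most. A minimal sketch with a toy logistic model (every feature name and weight here is hypothetical, not any real lender's):

```python
import math

# Hypothetical coefficients for a toy credit-scoring model.
WEIGHTS = {
    "credit_utilization": -2.5,    # higher utilization lowers the score
    "years_of_history": 0.8,       # longer history raises it
    "recent_delinquencies": -1.7,  # delinquencies lower it
}
BIAS = 0.5

def approve_probability(applicant):
    """Logistic model: sigmoid of the weighted feature sum."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def reason_codes(applicant, top_n=2):
    """Rank the features contributing most negatively to the score —
    these would feed the adverse-action notice on a decline."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    negative = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [f for c, f in negative[:top_n]]

applicant = {"credit_utilization": 0.9, "years_of_history": 2,
             "recent_delinquencies": 1}
print(approve_probability(applicant), reason_codes(applicant))
```

For a linear model the per-feature contribution is exact; for anything non-linear you'd reach for SHAP-style attributions instead, as discussed earlier in the episode.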
15:51
And then there's the NO FAKES Act, which has been in the news a bit lately — that one's got some legs too. Think of cloning Taylor Swift's voice: protecting individual creators is the focus of the NO FAKES Act. It passed the House in February of 2025 with bipartisan support, so really no objection there. But looking forward into 2025, a lot of the current buzz is about online harms.
16:20
Proposals will likely be tweaks to Section 230, which we heard a bit about over the last couple of years — tweaks for AI misinformation and child safety, like banning targeted ads driven by reinforcement learning models. I think the agencies are going to be the wild card. The FTC has already fined firms for overhyped AI claims — recently $2 million against a chatbot vendor in January.
16:51
And the SEC is already probing and sniffing around AI-driven trading algorithms too. Think back to things like the 2010 flash crash and how those risks could be amplified with today's technology. Yeah, that's a good point. I think from a CIO's perspective — or other business leaders' — it's presently a light-touch federal landscape. I think you'd agree with that. So,
17:19
my recommendation is to really focus on self-regulation, but prep for agency curveballs, because historically we've seen them come. You really need to be aware that legislation will occur, and having an ethical foundation will be paramount at the end of the day. Technically, black-box scrutiny is rising — decision trees explained with SHAP, for example, could save you in that FTC probe.
17:47
Why not build a compliance organization now and be prepared to scale it if or when places like Washington, DC get their act together? And you mentioned some of the states from a regulation standpoint, Scott — maybe we should talk about what some of the states are doing. Obviously, some states have really dived into the compliance and ethics approach around AI, but not all have.
18:17
Maybe we should talk a little bit about that. Yeah, certainly. It's obviously different per state — think of the states like the Wild West right now. Over 700 AI bills were proposed in 2024, and 40-plus of them are active going into 2025. There's no federal anchor, and that means governors and attorneys general are flexing on consumer protection,
18:45
economic equity, and public safety. And sometimes they'll clash with each other. Yeah. I mean, the Colorado AI Act — we were talking about this earlier, just you and I offline. It was signed in 2024, becomes effective in 2026, and it targets what we talked about earlier on the podcast: those high-risk AI categories in employment, housing,
19:15
and education — think of anybody listening who's submitted a job application through resume screeners, or even the tenant scoring that can occur at application time. So you really need those annual impact assessments and consumer opt-outs in this category with Colorado, and the AG's enforceable penalties run up to twenty thousand dollars per violation. So if you can imagine you're processing
19:45
a slew of applications — hundreds or thousands, whatever you're doing in your business — that could add up pretty quickly. It's also mimicking what the EU has been doing, but it adds a US twist: really explicit disparate impact tests, like running fairness metrics across racial or gender cohorts, for example. So you're going to continue to see some of these consumer-protection areas, as you mentioned as well, Scott.
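The disparate impact test mentioned here is often operationalized with the EEOC's "four-fifths" rule of thumb: flag any group whose selection rate falls below 80% of the best-off group's rate. A minimal sketch, assuming simple 0/1 selection outcomes per group:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 selection decisions."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact_check(outcomes, threshold=0.8):
    """Returns True per group if its selection rate is at least `threshold`
    times the highest group's rate (the 'four-fifths' rule of thumb)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (r / top) >= threshold for g, r in rates.items()}

# Toy resume-screener outcomes: group A selected 50%, group B 30%.
screened = {
    "group_a": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "group_b": [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
}
print(disparate_impact_check(screened))
```

Here group B's rate is 0.3 / 0.5 = 60% of group A's, below the 80% line — exactly the kind of metric an annual impact assessment under a Colorado-style law would surface.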
20:15
The point is not just letting AI run completely free. Yep. Then, well, California — obviously a tech-heavy state, a lot of activity there and a lot to consider for tech companies and any company doing business in California. Its 2025 laws include AB 2013, which mandates generative AI transparency. So think of
20:43
disclosing your training data and training data summaries for your LLMs. When you think about transparency, be prepared to disclose. SB 1047 bans deepfakes in elections — with watermarking requirements enforceable by $10 million fines. On the technical side, that means convolutional neural networks for detection.
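Production watermarking and detection rely on trained models and robust schemes; as a toy illustration of the embedding side only, here is a least-significant-bit watermark on raw pixel values — far weaker than anything a statute would accept, but it shows the basic idea of hiding provenance bits in content:

```python
def embed_watermark(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel.
    pixels: list of 0-255 intensity values; bits: list of 0/1."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the bit
    return out

def extract_watermark(pixels, n_bits):
    """Read the watermark back out of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

original = [200, 13, 54, 99, 128]
marked = embed_watermark(original, [1, 0, 1, 1])
print(marked, extract_watermark(marked, 4))
```

Each pixel changes by at most 1, so the image is visually unchanged while the mark survives exact copies — real schemes add redundancy and cryptographic signing so the mark survives compression and edits.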
21:11
Think about scaling a ResNet — from Microsoft Research back in the day — to spot fakes in real time. That's not simple stuff. Then there's AB 2085, which protects kids by capping AI ad targeting done via reinforcement learning. There are really niche rules everywhere. If you look at Illinois, for example, there's HB 5877,
21:40
which bans AI from replacing teachers in specific roles — think grading algorithms sidelined, for example: students putting in their work, but an AI engine completely grading them. That's something that has been on the docket. And then there's also — people might be happy about that one. Yeah, I can think of a few students that would probably be okay there.
22:08
There's also some legislation out there stopping insurers from denying claims via AI without a human review. Perhaps the machine learning in that environment decides, hey, you're either in or out for that claim or the policy you're applying for — but the final say rests with an actual certified insurance agent. So those are just a couple in that state. New York —
22:39
when I was researching what they've been doing, it's funny — I had to double-check some of the statutes on their docket, because they name the year up front. New York's 2025-A773 is not to be confused with the 2023-A773. I looked that one up, Scott — the 2023 A773 refers to
23:09
toxic pet chemicals. So if you look that up, I'm not incorrect — it's the 2025-A773. I had a little challenge there. This one essentially outlines wanting banks to audit AI lending for bias — think logistic regression outputs under a microscope, for example. And there's some
23:37
pretty stiff language in there where the state of New York can come after you with action and litigation if those areas are violated — if it's passed. So just to read some of it off: it calls out that an investigation generally may be initiated if a
24:00
preponderance of the evidence — including the summary of the most recent disparate impact analysis required pursuant to paragraph B, which talks a little about the subdivision of the article or statute — establishes a suspicion of a violation. The attorney general may also initiate, in any court of competent jurisdiction, any action or proceeding that may be appropriate or necessary for correction of any violation issued
24:29
pursuant to the section, including mandating compliance with the provisions of this section — and it goes on and on. So I read that because... You gotta be prepared. Exactly. I read that because, while both you and I do a lot of things well, we're not attorneys. Many organizations have their own internal counsel; however — and I'll get to this at the end — it's very important that you have a team that's focused on
24:58
compliance and ethics around AI, because the attorneys you may have on staff or work with all have specialties. And this one just calls out how important it is — this legislation in various states will probably come before the feds act. Yeah, so your team needs to be fluent in what's going on in the EU, Asia-Pac, the federal government, and state by state. Absolutely. I mean, even Georgia has a
25:27
bill out there, SB 164 — and Illinois does as well, SB 2255 — to cap AI wage-setting, sparked by the gig economy uproar. So to your point, Scott, even just domestically here in the States you have individual state requirements, but if you're a global company or an e-commerce organization, you have to worry about more than just protecting data, which we've all heard about
25:55
for many years now around GDPR — there are other facets. If you're starting to employ AI in your business, you're spot on: you have to nail this now. Okay. Yeah. And I pulled out a few more 2025 trends that we're seeing — we'll just go through a few more. Yeah, what do you have? All right. So you've probably dialed into something and you're not really sure if it's AI, right? "Hi, Gary." Yeah, of course.
26:25
Now it has to say, "Hi, Gary, I'm an artificial intelligence." That's what Utah's SB 226 is demanding: chatbot disclosure. If you don't do it, it's $5,000 fines per violation — so per use. Illinois's SB 1792 is tackling AI's energy hogs, which of course are the data centers, requiring that they report carbon footprints.
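A chatbot disclosure requirement like the one described can be sketched as a thin wrapper that prepends the disclosure to the first reply of every session — the wording, class, and API here are hypothetical, not taken from the statute:

```python
DISCLOSURE = "Hi, I'm an AI assistant - you are not chatting with a human."

class DisclosingChatbot:
    """Wraps any reply-generating function so the first message in a
    session always carries the AI disclosure, in the spirit of
    chatbot-disclosure rules like Utah's."""

    def __init__(self, generate_reply):
        self.generate_reply = generate_reply
        self.disclosed = False  # reset per session

    def reply(self, user_message):
        text = self.generate_reply(user_message)
        if not self.disclosed:
            self.disclosed = True
            return f"{DISCLOSURE}\n{text}"
        return text

bot = DisclosingChatbot(lambda msg: "Echo: " + msg)
print(bot.reply("hello"))  # disclosure + reply
print(bot.reply("hi"))     # reply only
```

Putting the disclosure at the transport layer like this, rather than trusting the model to say it, makes the behavior auditable: compliance doesn't depend on a prompt the model might ignore.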
26:53
500 megawatts for a single GPT run, for instance. And then frontier models are also under the microscope. Massachusetts — just south of us here — H.4123 is eyeing public safety risks like misinformation or existential threats from superintelligent systems. That would be very large systems exceeding 10 exaFLOPs, like xAI's Colossus.
27:22
It's essentially a compliance maze, right? I really think businesses have to think modular: plug-and-play policies by state, and, like I was saying a little while ago, the correct team for governance and compliance. Simply retaining internal counsel, I don't believe, will cut it. You will definitely need a privacy team who understands the complete tech stack in your organization, in parallel with your legal team — being able to
27:52
take the technical environment and infrastructure and break it down for your attorney or attorneys. And from a technical standpoint, standardize on fairness tooling — TensorFlow's fairness tools for bias sweeps; I think some of you use that. TensorFlow supports that ecosystem and aligns with the principles of responsible AI — it focuses on fairness, privacy, accountability, et cetera.
28:22
Many ask, you know, is TensorFlow losing steam? I don't think so. I think of teams who use Python — which many will argue is the original GOAT, greatest of all time, for any non-US sports fans; that's what we call it here. Python was originally the go-to programming language for ML, but now TensorFlow can be used with JavaScript, C++, Java, and others.
28:51
PyTorch may take that over from a Python perspective — but I digress. These tools are being used for those audit trails. You have testbeds like NIST's Dioptra that can simulate state-specific risks, so orgs who conform to NIST standards can take advantage of Dioptra. Really, having agility is your edge in this patchwork environment —
29:21
that's really where you have to focus. Yeah, your teams are going to have to know the tools to perform the audits and to be compliant. So you have the legal landscape, and then you need the tools to comply. Yeah, you have to stay nimble. Yep. All right, I think that's a good wrap — that's a lot of ground we covered. Again, from the EU's cautious AI Act to Trump's
29:49
deregulatory flex, and then you've got the states just kind of running wild — AI law is going to be tough to navigate. Yeah, I totally agree. For organizations, you really just need to keep pushing the envelope and embed that compliance early. The laws are playing catch-up; you're the one steering the ship at the end of the day. If something just doesn't seem right and you feel you should pause and question the ethics — do that. Take that pause.
30:19
You don't have to give up trade secrets, but seek guidance within your peer sets or others in the industry. The last thing you want is to have to deconstruct the house because the foundation is faulty.
30:30
Yep, absolutely. All right, thanks for joining us on the Macro AI Podcast. Feel free to send us a note — we've been getting some really interesting ones. You can send them right through the tool at macroaipodcast.com: comments, suggestions, topics, whatever. Gary, final thoughts? I would just say keep compliance, privacy, and ethics within your AI environment top of mind, and we'll catch you next time.
31:00
See you, Same.