The Invested Dads Podcast

Should Artificial Intelligence Be Regulated?

August 3, 2023 | Josh Robb & Austin Wilson | Episode 190

Have you been curious about the world of artificial intelligence and its potential for regulation? In this week's episode, the guys simplify the complexities surrounding AI's ethical use, safety and security measures, accountability and transparency mechanisms, fair competition considerations, and the vital aspect of public trust. Listen now to explore the pros and cons of AI regulation and where the future will take us.

For the transcript, show notes, and resources, visit theinvesteddads.com/190

Sign up for our exclusive newsletter here!
The Invested Dads: Website | Instagram | Facebook | Spotify | Apple Podcasts

Welcome to The Invested Dads Podcast, simplifying financial topics so that you can take action and make your financial situation better. Helping you to understand the current world of financial planning and investments, here are your hosts, Josh Robb and Austin Wilson.

Austin Wilson:

All right. Hey, hey, hey. Welcome back to The Invested Dads Podcast, a podcast where we take you on a journey to better your financial future. I'm Austin Wilson, Co-Portfolio Manager at Hixon Zuercher Capital Management.

Josh Robb:

And I'm Josh Robb, Director of Wealth Management at Hixon Zuercher Capital Management. Austin, how can people help us grow our podcast?

Austin Wilson:

We would love it if you'd subscribe if you're not already, and sign up for our weekly newsletter via our website, where you'll get an email each and every Thursday when a new episode drops, with show notes and a direct link to listen.

So, today, we are going to be talking about the very popular topic of regulation in regard to artificial intelligence. We're going to be looking at five pros, why it could be good to regulate artificial intelligence. And five cons, five challenges-

Josh Robb:

I like it.

Austin Wilson:

... to this. Very balanced. Five and five. 10 total. Josh, come on. It's great.

Josh Robb:

Good. You learned your lesson.

Austin Wilson:

Because regulation does come with both pros and cons. Advantages, disadvantages, both sides of the story. We want you to be informed.

Josh Robb:

Yep.

 

[1:15] - Five Pros of Regulating Artificial Intelligence 

Austin Wilson:

So, the pros of regulating artificial intelligence. Number one is really the ethical use of artificial intelligence. Regulation can help ensure that AI is developed and designed with ethics in mind, so it can't be used to do bad things, right? If you regulate artificial intelligence to some extent, you can set standards and guidelines to prevent AI from being used to harm people, discriminate, invade privacy, surveil without consent, those sorts of things. What do you think about that?

Josh Robb:

Yeah, there's precedent for this. When new technology comes around, there's an early stage where there's not much oversight, and then most new technology develops regulations, either internally on its own or externally through government or other kinds of guidelines. So, I think, when the internet first really became popular, it was kind of this-

Austin Wilson:

Oh, yeah.

Josh Robb:

... new frontier.

Austin Wilson:

It's a free for all. It's the Wild West.

Josh Robb:

And since then, they've added regulations to it. Bitcoin and digital currencies are in that same situation: they started out not regulated at all, and there's discussion about whether they will be more regulated.

Austin Wilson:

Yep.

Josh Robb:

So I think, whenever you have new technology that can be abused, you want to make sure there's some sort of safeguards in place.

Austin Wilson:

Number two, safety and security could be a benefit, a pro of some regulation in the AI space, because regulations can focus on making sure that AI systems are developed with measures that prevent them from being hacked or manipulated. And then you can actually have guidelines and standards out there for how to test and validate artificial intelligence to reduce the risk of failures, to reduce the risk of accidents, all kinds of things around that. So, I think that the safety and security aspect is another pro, because right now, there isn't a lot of that.

Josh Robb:

No, and you're relying on the information that's being gathered. I could do a Google search and know where the results are coming from, but when AI gathers it for me, I don't necessarily know where it got that information. So you're right, from a safety standpoint, we want to make sure the information it's compiling is right and hasn't been manipulated in any way.

Austin Wilson:

Absolutely. Number three: accountability and transparency. Regulation, as it pertains to AI, can be used to establish accountability mechanisms, which make sure that organizations utilizing artificial intelligence are held responsible for the actions and decisions made by those systems. Obviously, everything has to flow back to some responsible party. You can't say the AI made me do it.

Josh Robb:

Yeah.

Austin Wilson:

That's not a good answer.

Josh Robb:

It doesn't work.

Austin Wilson:

So some sort of accountability system is good there. You can also use regulation to enforce transparency requirements, making sure the algorithms and models used in AI are explainable or understandable, so that people can see how what's being prompted turns into what's being spit out. One thing in this realm is that ChatGPT, which is kind of the most popular one right now, grew out of open source AI work, and I think that's the foundation of the rules going forward.

Josh Robb:

Transparency.

Austin Wilson:

Are going to involve a transparent, open-source sort of thinking around AI.

Josh Robb:

Yeah, I like that. Again, when it comes back with information, being able to understand how it gathered that is important for knowing whether you can trust it.

Austin Wilson:

Number four, fair competition could be a pro of some regulation in the space. One way this can help in the AI space is by preventing monopolies, where a few big artificial intelligence companies don't allow startups and smaller companies equal opportunities to innovate and succeed. Some discussion and governmental oversight could keep the big companies from squashing opportunities for smaller companies that ultimately may turn out to be far more innovative because of the way they operate.

Josh Robb:

Yeah.

Austin Wilson:

So that's something you could get with some regulation. This could also benefit the AI ecosystem by making it more diverse, with more companies represented, more dynamic, and a lot of different options in the space instead of just a couple.

Josh Robb:

No industry does well when there's not open competition.

Austin Wilson:

Absolutely. And number five is public trust. This is one that's similar to what I think of with cryptocurrency, because a lot of this regulation thinking can be applied to that space as well. It's kind of the Wild West there, and it's kind of the Wild West in AI. But public trust is huge, and some regulation can really help the technology be trusted by the public, because the public will know that some of those concerns have been addressed and some of those risks have been mitigated. There will be clear guidelines and oversight so that people have confidence in what's going on with artificial intelligence. And that's really going to allow more people to want to adopt and accept the use of AI in their daily lives.

Josh Robb:

Yeah. Yeah, no one's going to adopt it, and it's not going to grow to be more widely accepted, if there's no trust built into it.

Austin Wilson:

Absolutely. So those are my five pros for regulating the AI space.

 

[6:22] - Dad Joke of the Week 

Josh Robb:

Let's do a dad joke here-

Austin Wilson:

Bring it.

Josh Robb:

... and then we'll come back to the other side, the cons and maybe how regulations may hurt artificial intelligence. All right. So they're starting to incorporate this into more and more things. Do you know what you would call a luxury vehicle if it had AI built into it?

Austin Wilson:

Uh-oh, I do not.

Josh Robb:

You would call it Alexus. It's kind of like Alexa, but with Lexus.

Austin Wilson:

Oh, like Lexus?

Josh Robb:

Yes.

Austin Wilson:

Like the car, but Alexus.

Josh Robb:

But Alexus. You call it Alexus. You smoosh them together.

Austin Wilson:

I get it. I see it.

Josh Robb:

When you read it-

Austin Wilson:

Yeah.

Josh Robb:

... it pops right in your head.

Austin Wilson:

That makes more sense.

Josh Robb:

But that's where it is.

Austin Wilson:

I like Lexus cars. They look good.

Josh Robb:

They're nice. They're Toyotas underneath, which means they last forever.

Austin Wilson:

Yep.

Josh Robb:

Gotta love them.

Austin Wilson:

Good service too.

Josh Robb:

Oh yeah, absolutely. My grandma had one, and anytime it needed something, they'd come get it, leave her a loaner vehicle, and bring it back.

Austin Wilson:

And those are the ones to buy. Buy the grandma-owned Lexuses.

Josh Robb:

Oh yeah.

Austin Wilson:

They were not driven hard. All right, now we're going to turn the page over. We're going to look at five cons.

Josh Robb:

Ah, yes.

 

[7:19] - Five Cons to Regulating Artificial Intelligence 

Austin Wilson:

So the downsides to artificial intelligence regulation, and first and foremost...

Josh Robb:

Terminator.

Austin Wilson:

Terminate... Oh, yeah. They could take over the world.

Josh Robb:

Yeah.

Austin Wilson:

No, actually the pro would be-

Josh Robb:

Regulation.

Austin Wilson:

... prevent AI from taking over the world.

Josh Robb:

Oh, maybe that's-

Austin Wilson:

That didn't make the top five.

Josh Robb:

... Oh, there it is. Okay.

Austin Wilson:

I don't think we're quite there yet. The number one con, the downside I think of when I think of regulating artificial intelligence, is stifling innovation.

Josh Robb:

Yep.

Austin Wilson:

So innovation is very important, especially for up-and-coming industries like artificial intelligence, and excessive regulation could place burdensome requirements on AI development. It can slow things down, which we're going to get to in a little bit. But I think striking a balance between regulation and innovation is critical, because you need progress; you need to be able to keep moving things forward as AI continues to transform the way we do things. And I also think about keeping up with countries like Russia and China, who aren't going to have the same restrictive oversight we're talking about here, and potentially bad actors in general. This could put us at a disadvantage as a country.

If we're too tight on regulations and restrictions for AI, we could fall behind, and then these adversaries, potentially in war and things like that, or as economic threats, could use more advanced AI of their own to outdo us in certain areas competitively. So, I think that's a risk: stifling innovation.

Josh Robb:

Yep, I agree. When the people creating regulations aren't the experts in that field, you may, again, slow things down by making it too burdensome, because they don't fully understand what they're regulating to begin with.

Austin Wilson:

It's like the cryptocurrency thinking.

Josh Robb:

Yes.

Austin Wilson:

You have these 80-year-old congressmen asking questions about...

Josh Robb:

They don't know.

Austin Wilson:

They don't know how to log into the Facebook, much less what a blockchain is. So, I think education is key, and we're going to talk about that in a little bit.

Josh Robb:

Is AI part of my AOL email account? That's what their question is.

Austin Wilson:

Yeah, exactly. Or like the woh.rr.com emails everyone got.

Josh Robb:

Yes.

Austin Wilson:

Number two, another disadvantage to regulating AI is slower adoption in general. Overregulating may slow down adoption because complex regulatory frameworks can create unnecessary barriers to entry, which can discourage organizations from investing in AI altogether, given the costs of compliance and the uncertainty around the legal obligations that come with it.

Josh Robb:

Yep.

Austin Wilson:

So I think those are some risks. It could slow the adoption and the use of technology in general if it's overregulated.

Josh Robb:

Oh, for sure. And again, when we're talking about regulations, and you're going to see this in a lot of your cons, who's doing the regulating really matters-

Austin Wilson:

Absolutely.

Josh Robb:

... to kind of the downside of all of those, adoption being one of them.

Austin Wilson:

Yep, and third kind of ties into what we were just talking about: compliance challenges. Compliance is super important, and it can be extra challenging as it relates to AI for smaller companies with limited resources, and it can really add a lot of cost. A lot of smaller companies are going to need to hire specialized expertise, and that's going to slow them down and hinder their ability to compete in the market, because they'll be spending proportionally more money than the larger companies. That's kind of the way I see this happening.

Josh Robb:

Yes.

Austin Wilson:

The larger companies that are driving the AI ship right now are going to be lobbying for regulation to keep things in their favor, and they have the money to do it.

Josh Robb:

Yep. And being involved in compliance myself, whenever you get regulations, you get additional compliance issues. The government complicates things, and there are good regulations and good compliance that have to happen in many industries. But, again, when you're moving fast and growing, you may overburden people trying to comply with the rules you created, maybe without giving them the tools needed to actually comply.

Austin Wilson:

Right. Number four, difficulty staying current if you have a lot of regulation going on. Because of how rapidly artificial intelligence is evolving, regulations could actually struggle to keep up with how fast you...

Josh Robb:

Unless you have AI creating the regulations for AI.

Austin Wilson:

I think you could just, that's a billion-dollar idea right there.

Josh Robb:

Circular logic.

Austin Wilson:

But this could lead to outdated regulations that are unable to address the challenges going on now, or the ones that will be coming up, or to fully grasp the potential risks associated with some of these modern, more cutting-edge AI applications. So, I think it's important to realize that with this technology in particular, it's kind of a Moore's law situation: the amount of computing power and the speed of artificial intelligence being brought online today is exponentially more than at any other point in history. And you need to be able to regulate at the same pace, ideally ahead of it, so you're ready for what's coming.

Josh Robb:

Yes.

Austin Wilson:

But it's so hard to do that even right now, especially with our legislative framework and the way that we make laws and regulations here in the United States. It's not fast.

Josh Robb:

Oh no, by no means.

Austin Wilson:

Yep.

Josh Robb:

I like that. Last?

Austin Wilson:

Last, number five, international harmonization. This is about tying regulation in with other regulations in different countries around the world, making sure we're all on the same page, like global AI oversight. Very tricky. Everyone's going to look at this a little bit differently, but getting different economies and different countries on the same page in terms of regulation would be pretty key, because it would avoid conflicting or fragmented standards across different jurisdictions.

But one of the difficulties here is achieving consensus on regulations, because it's going to be really complex and some countries are going to want to do things differently than others. It's going to be very time-consuming. All of this could potentially delay regulations further and make it even harder to stay current, because by the time you get agreement on things-

Josh Robb:

Yes.

Austin Wilson:

... it's going to be old news. So, there are a couple of themes we've talked about here in terms of the cons. A lot of it relates to innovation and adoption, but a lot of it also relates to just keeping up.

Josh Robb:

Yes.

Austin Wilson:

It is amazing how fast AI is moving, and regulation has to attempt to keep up or else it's going to be obsolete immediately.

Josh Robb:

Yes.

Austin Wilson:

So, I think, in general, it's very important to strike a balance between regulation and innovation, which is why I talked about pros and cons, because there are a lot of benefits to AI. And we talked about that in our recent episode on AI.

Josh Robb:

Yes.

Austin Wilson:

Some of the potential new areas for innovation and things like that. We'll link that in the show notes; if you have not listened to it, please go do so. But we have to admit, there are definitely some risks here, and a lot of flexibility and adaptability is going to be key, as well as ongoing dialogue between policymakers, researchers, and industry professionals when it comes to regulation. That's something that's going to take time to play out.

Now, recently, a bunch of technology company leaders, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter in March calling for a six-month pause on advanced AI development. Now, they're saying a pause, and I don't know exactly what a pause means, but it has not happened, by the way. AI has continued to develop very, very quickly since then. But then in May, OpenAI, which is the company that runs ChatGPT-

Josh Robb:

Yep. The big one.

Austin Wilson:

Their CEO, Sam Altman, testified before a Senate Judiciary subcommittee about the potential dangers of AI and the need for more government intervention. Now, ironically, Elon Musk actually used to be involved with OpenAI previously.

Josh Robb:

Okay.

Austin Wilson:

And now they're having a bit of a difference of opinion, because OpenAI used to be a not-for-profit and now it's for-profit, and there's been a bunch of tension around all of that. So anyway, Altman was even arguing for more government intervention. But I go back to one thing we just talked about: the leader in the space right now, of course they're asking for intervention, so that no other people, no other companies, can encroach on their territory.

Josh Robb:

Yep.

Austin Wilson:

Also, recently, I got an email from Emerging Tech Brew, and there's a public link to the article, which I'll put in the show notes as well. They said that the path to regulating AI is a challenging one, in part because legislators themselves have a lot to learn about this type of technology. And according to Aram Gavoor, an Associate Dean at George Washington Law School who focuses on AI policy, while lawmakers and their staffs will be beefing up their knowledge of AI in the coming months, the May hearing was unusually bipartisan and collaborative.

Josh Robb:

Oh, there you go.

Austin Wilson:

So that's something we don't hear very often.

Josh Robb:

Not very often.

 

[16:29] - Josh & Austin's Opinions on AI Regulation 

Austin Wilson:

So senators on both sides of the aisle generally agreed that we need some sort of regulation, and the witnesses pointed that out as well. And as you said, that's not something you see a lot in Washington; we've just had a lot of drama and back-and-forth on the debt ceiling and all of that. Having some common ground here, I think, could be a good thing. So that's interesting to note as well.

But Josh, I'll turn it over to you first. What are your final thoughts on regulation of AI?

Josh Robb:

Yeah, I think with a lot of new technology, especially this type of technology that, in a sense, is self-learning and growing on its own, you do need to put safeguards, parameters, and regulation in place to understand what's happening.

Austin Wilson:

Yeah.

Josh Robb:

So, I just think it's inevitable that we need that. Now, what that regulation looks like, I'm not smart enough to really know everything that's needed. But there are enough experts available to help craft it, if the people crafting regulations are smart enough to acknowledge, "Maybe we're not the ones who should be putting all this input in." Get the experts' opinions and then draft it from there.

So, I think it has to happen, it needs to happen. I think it will happen too. I think there's enough people calling for it that you're going to see some come through. But again, like you mentioned in your pros and cons, it's changing so fast that by the time they get the regulation, it may be close to outdated based on what's happened since. So, yeah, it is going to be an ongoing battle, for sure. What do you think?

Austin Wilson:

Yeah, I think one approach I would take in this discussion is to keep regulation at a high enough level that it can adapt. Don't make it so strict that it's based only on what we know today and the technology available right now; we have to know going in that this is going to grow and advance rapidly. So making the regulation high-level enough to adapt over time is key, and that way there's less of an impact from the legislative process, from when the bills get drafted to when they actually go into effect. You could just have something that's kind of evergreen.

Josh Robb:

Yep.

Austin Wilson:

It's the artificial intelligence framework of the United States of America or whatever. I think that that's key. I also think that it is going to be very challenging, globally, to be doing something like this.

Josh Robb:

Yeah, that's going to be the hard part.

Austin Wilson:

However, I think we've got enough common ground with some of our bigger allies. Think of places like Europe; they're seeing the same thing we are right now.

Josh Robb:

Yeah.

Austin Wilson:

And I think we could be on the same page with a lot of this stuff, at least with some of the more developed nations and more of our friendly trading partners, I guess you could say. So, I have hope for it. Again, I'm with you. I think we need it and I think it's coming. I just don't want it to be so restrictive that we cause more issues than we fix.

Josh Robb:

Yeah, makes sense.

Austin Wilson:

So, that's something we're going to work toward over time. So, yeah, those are our thoughts on regulation and AI. Thank you for listening. As always, we'd love it if you'd share this episode. Maybe you had someone asking about AI regulation. Share this with them.

Josh Robb:

Maybe your ChatGPT bot was asking.

Austin Wilson:

Oh, plug this link in there. That's what I'm saying. As always, subscribe and leave us a review on Apple Podcasts or Spotify, and follow us on social media, Instagram, Twitter, or Facebook. And until next week, have a good one.

Josh Robb:

Talk to you later.

Austin Wilson:

Thanks. Bye.

Thank you for listening to The Invested Dads Podcast. This episode has ended, but your journey towards a better financial future doesn't have to. Head over to theinvesteddads.com to access all the links and resources mentioned in today's show. If you enjoyed this episode and we had a positive impact on your life, leave us a review. Click subscribe and don't miss the next episode.

Josh Robb and Austin Wilson work for Hixon Zuercher Capital Management. All opinions expressed by Josh, Austin, or any podcast guest are solely their own opinions and do not reflect the opinions of Hixon Zuercher Capital Management. This podcast is for informational purposes only and should not be relied upon for investment decisions. Clients of Hixon Zuercher Capital Management may maintain positions in the securities discussed in this podcast. There is no guarantee that the statements, opinions, or forecasts provided herein will prove to be correct.

Past performance may not be indicative of future results. Indices are not available for direct investment. Any investor who attempts to mimic the performance of an index would incur fees and expenses which would reduce returns. Securities investing involves risk, including the potential for loss of principal. There is no assurance that any investment plan or strategy will be successful.

 
