Tech 4 Thought

The Crypto Comeback & AI's $2 Trillion Future

• Sahib, Chris, Jacob, and Aidan


Welcome back to Tech 4 Thought, your weekly breakdown of AI, finance, and technology headlines.

In this episode, we explore:

🔹 The FDIC’s reversal on crypto restrictions—what it means for traditional banking and digital assets
🔹 TSMC’s bold $2 trillion prediction for the AI market by 2030
🔹 Claude for Enterprise: Anthropic’s challenge to OpenAI and Microsoft
🔹 Adobe’s new GenAI music tool and the creative disruption it’s sparking

🎯 Deep Dive: Is Institutional Finance All-In on Crypto?
We unpack the regulatory shifts, ETF approvals, and blockchain buzz shaping crypto’s next chapter.

🧠 Plus, our takes on the trends nobody’s talking about (but probably should be).


💬 Have a question or take on this week’s episode? Let us know in the comments or on socials!


Speaker 1 (00:29)
Welcome back to another episode of Tech 4 Thought, the podcast where we break down the latest headlines and trends shaping the future of AI, finance and technology. In our last episode, we had the opportunity to speak with Ben Vollmer, an expert in AI automation, the low-code/no-code landscape, et cetera. And we explored how AI is impacting the global energy infrastructure and the rising demands, especially the general rising cost of data centers, and it was super cool.

So today we're going to talk about a bunch of things that have been happening in the finance and AI world, and we'll get straight into it. Chris, you actually sent an article out, I think in the chat, about Blackwell, and we all talked about it. We thought it was super cool that NVIDIA was able to release this new chip,

the Blackwell Ultra AI chip, which offers a 50% performance improvement over its predecessor. And it's really just designed to support the more complex AI workloads that we're seeing now. My question was, how will the Blackwell Ultra chip influence the development and deployment of AI applications? Like, what do we think it can do?

Speaker 2 (01:52)
Um, well, like I said, it gives a 50% performance increase. First off, I think that's just a massive bump from what we're used to seeing. You think of technology in general, what is the average? I think it's like a 7% performance increase over, what, like two to five years? I might have that number off, but to have a 50%

increase in performance, I think, is just insane. I think Nvidia is at kind of the center of it. I mean, you think of the AI boom, the crypto boom, all of this stuff that really kicked off Nvidia. You saw its stock value go skyrocketing, what, like 2019, 2020.

Speaker 1 (02:45)
Oh yeah, that did take a big, I don't remember exactly, but I know what you're talking about.

Speaker 4 (02:49)
Yeah, I'm pretty

Speaker 2 (02:50)
Yeah.

So I think, I kind of heard recently, they were having a lull. The other companies were, I guess, catching up, and then they come out with this. And I mean, it's just provided such a heavy boost to not just their own processing capabilities, but kind of the foundation of what these companies can build on

for their infrastructure. I mean, we talked, like you said, last week about all these hyperscalers and data centers. I think we're kind of just hitting this crazy boom in AI: AI capabilities, use cases, everything, right? AI is such a broad term. Because we kind of get into this AI world, I think it's important to break it down into

kind of the use cases for AI. I mean, we know it can be used for basically anything, right? I was actually having a conversation recently. Didn't you go to like a hackathon recently?

Speaker 4 (04:04)
Yes, I did.

Speaker 2 (04:06)
So I was talking to someone recently about a company. I can't remember what company it was, but they held something similar to that. They basically hired a bunch of people to come in and try to get through anything they could, try to hack their way through things. And they recorded all of their actions live as data points of how people would approach getting through their system,

even if they weren't able to successfully get through. And they used all of that to train a model. In addition to your typical data, they used that to literally train the model to recognize these kinds of things immediately, very quickly. So they used it in kind of a different sense from typical hackathons, from my understanding. I mean, I've never been to one. I know the general

purpose. But you know, if somebody is successful in getting through your system, the whole point of that is so that they can identify it and build out that specific part, right? That's kind of the whole point of a hackathon, right?

Speaker 4 (05:22)
Well, it depends. I mean, the hackathon I went to was more of a hacking together of a development to basically showcase how AI can help these nonprofits for a specific use case that they had. And so we had like two months to put together this AI development.

Ours personally was a chatbot that could take the session data from user interactions, create an analysis on all of that, and do targeted campaigning for donors and volunteer opportunities and things like that. So there are like two different types of hackathons. One's purely, I guess, just for hacking, and then one is for actually creating a development.

Speaker 2 (06:12)
Well, this one was specifically for collecting data on people trying to hack their system. Typically you just look at how this person made it through. How did they make it through? Okay, we know to build out that part of the system. Here it's, we had 50 people try to hack through the system, they took X amount of different approaches, these got further than these. And then they're able to just take all of that data and train their system on recognizing it.

Right. And so I think you'll see, and we'll talk more about kind of security stuff today, but something like this Blackwell chip, I mean, you think of the hyperscalers and what they'll be able to do with this. I wish I had some of this data a little bit more ready to go, but I was researching hyperscalers recently, on how much money they make on crypto mining.

Speaker 1 (07:11)
Okay.

Speaker 2 (07:11)
And it's an insane amount of money, like an absolutely insane amount of money. So you think of how these hyperscalers can leverage a chip like this for crypto mining or for data centers. You can leverage more data through something like this, and in a more efficient manner, right?

So you think of ChatGPT, and this could be a good example or a bad example, I'm not sure here, but you think of ChatGPT and how much data it's able to access, how long it takes to respond, its ability for natural language. You know, when you get to something like this, it can recognize more data, it can identify more nuances, and it can adjust rapidly.

I think you'll see this used in a lot of ways. I think it's going to really make a lot of these other companies' technology, not completely obsolete, but you'll see such an explosion of this that it's just going to be hard to build on. So I'm curious to see where they go with it, you know, where companies take it. I think it's going to be incredibly complex, and it's going to create

more of a skill gap than there already is with AI. I think just the ability to enable more in-depth systems is going to create a larger skill gap. We'll see how it goes. I think anything AI-related right now is new, right? I mean, at the level it's at right now, it's new. You know, in 2015 you had to be really deep in the tech world to really understand what

AI is. I mean, I know it's been a technology since, I think, when we talked to Ben last week, he was even talking about like the '60s or something. But it's never really been at the scale and in the public face that it is now, and it's just exploded. We went from it kind of being regular, and now we've got things like copilots and chatbots and things like that for AI.

Um, and you know, you're starting to get into LLMs and stuff like that, but something like this Blackwell Ultra chip is just going to create, I mean, you think of a 50% performance boost. I mean, that's a 50% boost in capacity and everything all around, right? Any industry that touches it, any technology that touches it. So if you're not familiar with how to use,

Speaker 1 (09:47)
Insane.

Speaker 2 (10:02)
how, like, the value of AI. And even, I always tell people to start using ChatGPT so you understand how something like this works. I think it's very important for people to understand what that capacity is going to bring. And I know you guys are more familiar with the hardware side of it than I am. I'm more of, like, the purpose side. I don't understand exactly how, you know, when I was reading on all of it, I'm not a big chip guy.

You know, I'm not super wise in that. So I'd have to defer to, like, Sahib for that kind of thing.

Speaker 1 (10:33)
us.

Speaker 4 (10:38)
Yeah. I mean, the thing is, sure, Blackwell Ultra offers, what, 1.5 times the performance of their original Blackwell. Um, but increased performance is all good and well. The issue still remains that energy is a scarce resource right now, especially with all of these

data centers coming out, and the amount of energy it takes to run these models. So does this chip offer better performance while taking less power? Not that I see. So while performance gains are great, I think we need to start looking at actually being able to take these chips and

make them in a way where it doesn't take that much energy to run these giant models on them.

Speaker 1 (11:45)
Yeah, I remember you mentioned to me in conversation the way DeepSeek was able to, they used distillation. They were able to run these larger models by, like you kind of mentioned, distilling from a larger model or something. Is that what you're kind of talking about?

Speaker 4 (12:05)
Sort of. I mean, we keep making these models bigger and bigger, and they take a lot more resources to run. DeepSeek essentially did the same thing with fewer resources and got the same or even better performance in terms of the benchmarks that they had: math scores, reasoning scores, et cetera. So

does this chip mean that we can deploy fewer of them, so that we don't have to use as much power? I'm not sure. If we keep increasing the size of the models, does it really matter then? Because then we're just going to need more of these chips. I think the future we need to look at is utilizing fewer resources to either create the same amount of production from these models,

or create smaller models that are more tailored to specific use cases, so we're not using the amount of resources that these massive, massive models are using. Or if we are using the big models, figure out a way where they use fewer resources.
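The distillation idea being described here, a large "teacher" model training a smaller "student" to imitate its output distribution, can be sketched in a few lines. This is a generic illustration with made-up toy logits and function names, not DeepSeek's actual training recipe:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, optionally softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened distribution.

    Minimizing this pushes the small student to mimic the big teacher's whole
    output distribution, not just its top-1 answer. A temperature > 1 exposes
    the teacher's 'dark knowledge' about near-miss classes.
    """
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# The student that matches the teacher more closely gets the lower loss.
teacher = [4.0, 1.0, 0.2]
close_student = [3.5, 1.2, 0.3]
far_student = [0.2, 1.0, 4.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In a real setup this term is combined with the ordinary loss on ground-truth labels and backpropagated through the student only; the toy version just shows what "matching the teacher" means numerically.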

Speaker 1 (13:19)
Gotcha.

Speaker 2 (13:19)
Yeah, I think it's all about optimization. A great example to me would be cars, right? Automobiles over the years: you go for more and more power, up to like V8s and stuff like that. And where they're a lot more powerful, a lot of them are less efficient. And so then we've kind of pivoted to hybrid engines that are just as powerful, but

now they're more efficient, you know. And then you kind of move on to, I guess, electric vehicles, which is a whole different argument. But I think a hybrid powertrain versus a typical internal combustion engine is a great way to look at that, if you're familiar with the car and its evolution. It's just the same thing here: yeah, we have this crazy, more powerful chip, but is it scalable, you know?

Or is it gonna eat so much energy that it's just not worth it, you know? And that's why I like what England is doing, or the UK is doing, their "build, baby, build" thing with nuclear. You know, they're heavily leaning into nuclear right now, which is a lot more efficient and cleaner energy than typical oil stuff.

So I think, in America specifically, we need to start looking at energy that can scale with this new technology. Because I know, you know, we talk in America about wanting to be AI leaders, right, as a country. But we also are currently in a state of really being behind in energy infrastructure.

You know, we really need to not look at just now and the past, and instead start looking toward the future and efficiency. I think it's definitely going to be an optimization thing. I think these giant chips are going to have a lot of concerns around energy consumption, the economic implications, ethical and safety kind of concerns. So

that's my base-level take.

Speaker 4 (15:45)
I mean, we even got reports on the 5090s, where, sure, they were bigger than the 4090s, but they didn't offer that much improvement, and because they were using so much energy, Nvidia didn't take into account the PCIe connectors. And so the PCIe connectors are actually melting on the motherboards.

So yeah, we definitely need to figure out a way where, sure, we can increase the performance of these chips, but maybe we can do it in a more energy-efficient way, rather than just going for power, power, power. We need to go for efficiency, efficiency, efficiency, and then maybe some power.

Speaker 1 (16:38)
Yeah, yeah, I mean, that's pretty.

Speaker 2 (16:40)
There is, like, you think it's kind of a situation where they chase the power until they find the power cap. In the past we found that power cap a little bit earlier, and with technology accelerating as fast as it is right now, we're not finding that cap on power. So they're just still chasing that before they try to optimize it for energy usage.

Speaker 4 (17:06)
I mean, sure, that could be the case, but, um, everybody in the world knows that we don't even have the infrastructure to power the entire world, really. I mean, we still have, what, 700 million people in Africa alone that don't have access to power. And now we're spending all this money,

and our finite resources, on trying to power these massive data centers. But eventually we're going to run out of ways to power these massive data centers if we don't figure out a way to make them more efficient, or, on top of that, come up with better alternative power sources.

Speaker 1 (17:56)
Yeah. I think that's kind of what Ben was talking about when we talked to him last week, or a couple of weeks ago, I can't remember. He was like, Penn Yan, Penn Yan in New York is where, like, NYSEG provides a lot. Yeah. But then he was saying you can go to Oregon and you get hydroelectric for a fraction of what it would cost to run a data center, and you're able to power it super easily. My question is, like, what challenges, and we kind of already talked about it before,

Speaker 4 (18:08)
Right.

Speaker 1 (18:25)
but like, what other challenges might arise from the rapid advancement of AI hardware just in general, like this chip, or say another company comes out with something similar? What challenges might arise from something like that?

Speaker 4 (18:40)
I mean, with massive power, I'm assuming the price also skyrockets. With OpenAI trying to float the idea of having these PhD-level assistants for $20,000 a month, which is insane, we're going to see other companies trying to absorb the costs of these massive chips. Um, and that's going to

go to the consumer, because the consumer is going to want to use these. And we're just going to hit this point where they're going to need to either figure out a way to absorb these costs, or lower the costs, or push the cost onto the consumer. And a lot of times these companies like to go the easy way and push the cost onto the consumer.

Speaker 1 (19:34)
Yeah. I mean, at the end, I feel like that's always what happens. You gotta increase the price in order for the product to, you know, give you an ROI. And you have to take into account other things, like you mentioned: the growing amount of money it takes to run a data center, to even run these models, is probably a lot. So I don't know if it's sustainable, but there definitely needs to be a different way

when it comes to powering AI in general, because it seems like it's going to be a really big concern later down the line.

Speaker 4 (20:11)
Yep, for sure.

Speaker 1 (20:12)
Yeah.

But yeah, so we'll move on to the next one. I saw NatWest is collaborating with OpenAI to help with enhancing its digital banking tools and internal systems. Specifically, they're going to help by upgrading the chatbots that they have, like Cora and Ask Archie, to help with the speed and accuracy of client queries, whatever they have,

as well as fraud. It's going to help a lot with preventing fraud in real time. And NatWest is a really big bank over there in the UK. NatWest came about when National Provincial Bank and Westminster Bank merged together, and they formed NatWest. And so it looks like now they're really

diving deep into AI with a lot of companies. So my question is, what benefits and risks come along with integrating OpenAI tech into banking services? Like, how do you guys see that working out?

Speaker 4 (21:20)
Yeah, I mean, generative AI, and AI in general, is great, especially when it comes to recognizing patterns and things like that. But at the same time, you want to be wary, because AI can a lot of times hallucinate. So you definitely want to have that human-in-the-loop kind of thing. But in general, I think by integrating

AI into their financial fraud systems and things like that, it really makes it easier for them to detect fraud, because now you're reducing the amount of man-hours that need to go into looking at all of these transactions trying to figure it out. You're probably able to find more fraudulent transactions. And not only that, once you find those fraudulent transactions and

you realize that they're fraud, then you can use that to further train the model to keep getting better and better. So I think the benefits are that fewer man-hours are needed to focus on fraud, and more can go into serving their customers and making their banking systems better. But the risks, of course, come with maybe there being some hallucinations,

where customers might ask these chatbots questions and get wrong answers. So yeah, there's definitely both risk and reward. But overall I think it's great, though a human in the loop definitely still needs to be a thing.
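The pattern-recognition-plus-human-review setup described here can be sketched with a simple statistical screen: score each transaction against the account's normal behavior, and route outliers to a person instead of auto-declining. Everything below (the z-score cutoff, the two-queue split) is an illustrative toy, not NatWest's or OpenAI's actual system:

```python
import statistics

def score_anomalies(amounts, z_cutoff=2.5):
    """Split transaction amounts into routine ones and outliers for review.

    Returns (auto_ok, needs_review). Nothing is auto-declined: flagged items
    go to a human reviewer, the 'human in the loop' discussed above.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    auto_ok, needs_review = [], []
    for amt in amounts:
        z = (amt - mean) / stdev if stdev else 0.0
        (needs_review if abs(z) > z_cutoff else auto_ok).append(amt)
    return auto_ok, needs_review

# Mostly routine spending with one extreme outlier.
history = [42, 38, 55, 47, 61, 39, 44, 52, 9_800]
ok, review = score_anomalies(history)
assert review == [9_800]
```

The reviewer's decisions on the flagged queue then become labeled data for retraining, which is the feedback loop described above; a production system would use a learned model over many features, not a single z-score.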

Speaker 1 (23:02)
Yeah, kind of like a mediator, almost, looking at everything, just making sure everything is good. I know specifically in fraud, there's typically an officer, like a bank secrecy officer or something, that manages the fraud mitigation at the bank. And so there's typically another person involved in that. So I'm curious to see how this kind of develops, how OpenAI

develops this, how they won the partnership, and what technology it's going to use. But their stock price went up after the news. So, you know, that's good news.

Speaker 4 (23:42)
Yeah. I mean, I'm sure it'll be easier for them to handle these fraud claims. They'll probably implement some sort of base value, where if it's under this amount of money, the AI can just automatically accept these fraud requests or whatever.

Speaker 1 (24:06)
Yeah,

Yeah, they're called red flags. Yeah, red flag indicators. Typically, what a bank will do for fraud and money laundering is they'll have these red flag indicators that the anti-money-laundering model goes off of. It's usually like cash spikes in excess of $10,000. And so that account, or that transaction, will have a red flag on it. And then there'll be some due diligence on the account,

where an officer, or a reviewer of some kind, reviews the fraud and just makes sure that everything is okay. It's called due diligence, and then there's CDD and EDD. Typically at banks, they have the whole process laid out in their operating procedures. And then somebody looks at it and goes, okay, yeah, this was right, this person had an excess of $10,000 because they were laundering money for the cartel or something.
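The cash-spike rule just described, transactions over $10,000 triggering a CTR filing and review, is, at base, a one-line filter. A toy sketch, with the threshold and field names as stand-ins rather than any bank's actual procedure (real AML screens layer many indicators: structuring, velocity, geography):

```python
from dataclasses import dataclass

# Illustrative threshold from the discussion: cash transactions over $10,000
# trigger a CTR (currency transaction report) filing.
CTR_THRESHOLD = 10_000

@dataclass
class Transaction:
    account: str
    amount: float
    is_cash: bool

def red_flags(transactions):
    """Return transactions a rule-based AML screen would flag for due diligence.

    Only implements the single cash-spike indicator mentioned above;
    flagged items still go to a human reviewer (CDD/EDD), not auto-action.
    """
    return [t for t in transactions if t.is_cash and t.amount > CTR_THRESHOLD]

txns = [
    Transaction("A-1", 9_500, True),    # under threshold: not flagged
    Transaction("A-2", 12_000, True),   # cash spike: flagged, CTR filed
    Transaction("A-3", 50_000, False),  # wire, not cash: not caught by this rule
]
assert [t.account for t in red_flags(txns)] == ["A-2"]
```

Note the deliberate gap the example leaves open: repeated $9,500 deposits slide under a single-transaction rule, which is exactly why real systems also aggregate over time (structuring detection).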

Yeah, dude, it was cool. There was actually a manager that I worked with previously. They were working at EY, and they were on site at, I want to say, some country in Asia. She was reading text messages of the actual cartel, like, threatening

this person's life, and they were developing a paper trail for, yeah, financial crimes, and they were going through all that. And she was just telling me, describing it, like, this is the scariest thing, because once you hit there, it's like, damn, this stuff is real, you know? It could really get anybody. And then she's actually viewing the financial records and stuff and seeing it all happen. It's like, I need...

Speaker 4 (25:38)
Yeah.

Speaker 2 (26:00)
I almost actually took a job doing that when I lived at our old house, while I was in school. I'd done that internship as a financial advisor. And then when I was finishing up school, I was applying at banks, because that was kind of the direction I thought I wanted to go. And one of the job openings that I'd actually looked at, and went to the interview for, was fraud detection. And this is in

like central, south-central Texas, and a huge thing there is cartel trafficking. And when they told me that they actually call the people, like, they'll talk to cartel members, I was like, I don't want this job. I don't want to be the guy detecting the fraud and being like, hey, cartel leader...

Speaker 1 (26:38)
Yeah.

Yeah

They're coming for you.

Speaker 2 (26:58)
I know what you're up to.

No,

no, I'm not dead

Speaker 4 (27:03)
Maybe this AI will be better and that just detects the fraud automatically.

Speaker 1 (27:07)
Yeah,

that's right.

Speaker 2 (27:08)
I

was actually going to say, like, I wonder if that's kind of one of those things. Cause as someone that would be looking at that job, I'm like, I feel like I'm putting myself or my family in a position that, like, it's not guaranteed and it's probably not likely to happen, but I don't want to be anywhere near this.

Speaker 1 (27:26)
Yeah.

What you gotta do, you gotta get that fraud AI model, or the one open AI is helping that one develop, and then you gotta go to Boston Dynamics, and then you gotta create the next RoboCop.

You know?

Yeah.

Speaker 2 (27:55)
the

Speaker 1 (27:56)
Yeah, that'd be crazy. That'd be crazy.

Speaker 4 (28:00)
I can't even imagine, like, let's say the cartel randomly comes up to you and they're like, we're going to use your laundromat or something to launder our money. And then, you're just a random person, and all of a sudden you're being arrested for fraud, and you're like, it wasn't me! It wasn't...

Speaker 1 (28:21)
It

wasn't me, he said he wanted to do his laundry.

Speaker 2 (28:26)
some Breaking Bad type stuff, you know?

Speaker 1 (28:28)
Yeah, I mean, that's one of the best places to do it. You typically see a lot of laundromats because it's a very cash-heavy business. So when you report it, it seems legit, but you could be cleaning a little money at a time. But anything over $10,000 has to be filed, a CTR has to be filed with the bank. That's just to help build a paper trail no matter what. So you'll see a lot of people...

Like, one of the directors actually told me a story about him working in Miami, and he was reviewing CTRs and SARs and all that stuff. Basically what that is: a CTR is a currency transaction report, that's what it stands for, and a SAR is a suspicious activity report. That's what a bank fills out to report potential fraud activity on an account to FinCEN. But yeah, he was just reviewing

CTRs and SARs in Miami. And he was saying, yeah, people would send their family money to Cuba all the time, like just loads of money to Cuba. So that made sense, you know.

Speaker 4 (29:36)
Yeah.

Speaker 2 (29:37)
It's a big use case that they sell cryptocurrency on, third-party transfers. Like, what's that company that people send money through all the time? Western Union?

Speaker 1 (29:41)
yeah.

yeah, the clearing house, yeah.

Speaker 2 (29:53)
So yeah, they use the cryptocurrencies, like, you know, if you need to send money to your family in, say, Cuba, for instance, they can just take the cryptocurrency. It's kind of funny to hear the stories. It's like, imagine being in Colombia or Cuba, and you've got, I've got X Bitcoin, let me buy some tomatoes.

Speaker 1 (30:15)
Yeah.

Speaker 4 (30:16)
Speaking of that, I remember when Bitcoin first came out, I think somebody used like 20,000 Bitcoin to buy three pizzas or something. But that was when Bitcoin was really, really cheap. That guy must be hitting himself in the face now.

I think like around 100k or something?

Speaker 2 (30:43)
I remember a few years ago when it was at 20K, and I was like, that's insane, it'll totally drop, there's no reason for Bitcoin to be at $20,000. And a couple years ago at 60K, I'm like, oh my god. It's always so expensive that you're like, there's no way it'll keep going up, there's no way it'll keep going up. And it's like at $100,000 right now. Oh my god.

Speaker 4 (30:51)
It's the

It's crazy because that stuff's not even backed by anything. It's just, like, one day it could just drop, and you're just screwed because you put your entire life savings in.

Speaker 2 (31:11)
No, that's not what I'm saying.

Speaker 1 (31:16)
there.

Speaker 2 (31:17)
Yeah, like, look, I love blockchain technology and you know, I like to kind of watch cryptocurrency because it's interesting. Yeah.

Speaker 4 (31:28)
I mean, there's tons of use cases for blockchain technology. If you're in agriculture and you're using blockchain to track your perishable goods through the supply chain, sure, that's a great use case. But stuff like this cryptocurrency, generally if you're trying to use it in nefarious ways, like, a lot of people use it for black market stuff and...
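The supply-chain use case mentioned here rests on one property: each record carries the hash of the one before it, so past entries can't be quietly altered. A minimal sketch of that tamper-evidence idea (no consensus, no network, and not any real tracking platform):

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Append a supply-chain event, linking it to the previous block's hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain):
    """Re-derive every hash and confirm each block links to the one before it."""
    prev = "genesis"
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev": block["prev"]},
                             sort_keys=True)
        if block["prev"] != prev or block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

# Track a perishable shipment through three hand-offs.
chain = []
prev = "genesis"
for event in ["harvested", "cold-storage", "delivered"]:
    block = make_block(event, prev)
    chain.append(block)
    prev = block["hash"]

assert verify(chain)
chain[0]["record"] = "re-dated harvest"   # tamper with history
assert not verify(chain)                  # every later hash no longer matches
```

Changing any earlier record breaks the link for every block after it, which is what makes the ledger useful for auditing hand-offs of perishable goods.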

Speaker 1 (31:53)
yeah, so grow. I know I do.

Speaker 4 (31:56)
You know this is being recorded, right?

Speaker 1 (31:59)
You're on video, watch out. Dude, have you seen, speaking of the black market, have you seen that there's a documentary on the Silk Road? Have you guys heard of it? Anyways, supposedly, you mentioned the black market and I just thought of the Silk Road. It's like a place where you could basically do a bunch of illegal stuff, all through the internet. But it's a whole completely different topic. I thought it'd be cool.

Speaker 2 (32:01)
Look at that out.

I

definitely don't think of that. I think of, like, way back in the day. Yeah, the trading routes. Yeah.

Speaker 1 (32:37)
Grab it.

Yeah,

it's called the, so yeah, that's the OG Silk Road. This one, maybe I haven't read up on it in a while, because it was so crazy when it came out and I forgot all about it. But it was definitely, like, people could do anything on there. Like, you could literally do anything on there.

Speaker 4 (32:57)
Yeah, I remember watching this documentary on the black market, and it was like a website where you could hire hitmen. They caught this lady trying to put a hit on her husband. And the way they did it was, the website was like a plant website, and if you bought a certain plant for like $50,000, that was like, you're paying for a hit.

Speaker 1 (33:07)
yeah.

yeah, I think I've seen that one, yeah.

Speaker 4 (33:27)
Crazy.

Speaker 1 (33:29)
That's a plant?

Speaker 2 (33:31)
I

didn't even know about that thing. I had no idea either. Imagine you're in a cafe and you're like, God, I want my husband to be dead. And they're like, drop $50,000 on this little plant on the black market, and trust me, trust me, somebody will reach out to you. Sounds like just the easiest scam ever. I have a shop on the black market, you're going to drop $50,000 on a...

Speaker 4 (33:45)
because somebody will reach out to you and

Speaker 1 (33:51)
So you

I'm a dream

Speaker 2 (33:59)
Just trust me, trust me, trust me.

Speaker 4 (34:01)
It's the best succulent ever.

Speaker 1 (34:04)
Succulent

snitch.

Speaker 4 (34:08)
But anyways, going back to AI in finance, I do think that it will set a new standard for financial transactions and things like that, because now you don't need a lot of people, or a lot of human eyes, looking at that. Because like I said, AI is good at tracking patterns and stuff like that, and automating a lot of this stuff.

Yeah, I think this will definitely set a new standard. I mean, we're already seeing it. We know that AI isn't new, but the scale of AI now, and the amount of resources going into these operations, yeah, I think it definitely will set a new standard.

Speaker 2 (35:01)
Yeah, I definitely think the biggest point of AI in like the, FinTech, the finance industry is going to be around fraud detection and security. think you're going to see, like cybersecurity lean so heavily into AI. And I was actually having a conversation with my wife earlier about this of, you know, we think about, kind of going back to the hacking thing.

You know, AI, they can leverage AI to hack or, or get through something way easier than they have in the past. And to, to combat that, and it's just how, how cybersecurity is always work, you know, new technology, new way for hackers to gain data. you know, now we have to create our own way to combat that. I think AI being

used now by hackers. Um, it's like 10 fold, like it's 10 times easier for them to get through modern systems. So you kind of have to have, um, you have to implement AI into your cybersecurity in some way. I definitely think fraud detection, um, data privacy and security is going to be way more enhanced by AI. I think.

I think if you hear that your bank is investing in AI, you should feel a lot more comfortable.
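The "tracking patterns" idea behind AI fraud detection can be sketched very simply. This is a toy illustration with made-up numbers and thresholds (real systems use far richer models than a z-score), flagging transactions that sit far outside an account's normal range:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Flag transactions more than `threshold` standard deviations
    from the account's mean spend -- a toy stand-in for the
    pattern tracking discussed above."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [a for a in amounts if sigma and abs(a - mu) / sigma > threshold]

# Hypothetical account history with one wildly unusual charge
history = [42, 38, 55, 47, 51, 44, 39, 48, 5200]
suspicious = flag_anomalies(history, threshold=2.0)  # [5200]
```

The point of even a toy version: no human has to eyeball every transaction; the model surfaces the handful worth a human's attention.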

Speaker 4 (36:30)
Right, right. So Aidan, what's this I hear about Yahoo and AI?

Speaker 1 (36:39)
Yahoo and AI, so they have this CEO, Jim Lanzone. He was working with Ask Jeeves before, back when their share price was less than a dollar. I've never heard of that company, but okay. I don't know Ask Jeeves.

Speaker 4 (36:53)
It's an old company.

Speaker 2 (36:57)
ass

Speaker 4 (36:59)
Yeah,

Speaker 1 (37:02)
No, well, maybe. Ask... is that Ask.com?

Speaker 4 (37:03)
Just two tongue.

Speaker 2 (37:09)
Think so, right?

Speaker 4 (37:10)
I don't know if it was Ask.com, but Ask Jeeves was basically just another search tool you could use. I think on the front page it was kind of like Google, where it's just the search bar, but it had a butler on it. It was funny. But yeah, it was just another search tool people could use.

Speaker 1 (37:31)
But yeah.

Speaker 2 (37:32)
That's hilarious. I don't like hearing you say these things,

Speaker 4 (37:36)
Y'all

Speaker 1 (37:38)
Dude, I had no idea. You're younger than me, I think. I'm 25.

Speaker 3 (37:39)
Alright. What? No. I'm not ready.

Speaker 2 (37:40)
You're

right.

Speaker 4 (37:42)
Yeah, how old are you?

Speaker 2 (37:43)
Hell-

Okay, I'm 26. Yeah, we're like the same age.

Speaker 1 (37:50)
so you are older than me. No way.

Speaker 2 (37:53)
Yeah, but surely you know that.

Speaker 4 (37:54)
Yeah, I'm also 25. Yeah.

Speaker 2 (37:58)
Yeah, yeah, we all know Sahib hit 42. We got it.

Speaker 1 (38:02)
But yeah, so Jim Lanzone, this guy, hopefully I'm saying his last name right. When he took over the company at Ask Jeeves, it was less than a dollar, and then he built it back up until IAC bought it for $1.85 billion. So, you know, he had a pretty good resume. I think Yahoo saw that. And now they're pivoting heavily into AI, where

it's going to help power their content personalization across Yahoo News, Finance, and Mail. I think we're seeing a lot of that too. I go on Facebook and I'm able to look at a video, and Meta AI is able to give you a synopsis of whatever character is on screen. For example, I was watching an Assassin's Creed video of a character. And then

I see the Meta AI at the bottom, Llama 3, 3.4, I think, one of the Llama models they use, and it was able to synthesize all of the information, like the motives of the character, what the character wants, what its goals are, in seconds. And then you're able to just find out all this information. It's like, okay, now I can watch this video with context.

And for a person who doesn't know anything about the video game, it's like, wow, it's a game changer if you're interested in that. I thought that's definitely the future. I think Yahoo understands AI, and Yahoo acquiring different AI startups just to stay competitive is super important. So what do you think, can AI help older platforms like Yahoo remain relevant,

like with news, finance, and whatnot? What do you guys think?

Speaker 4 (39:58)
Yeah,

I mean, yeah, I think it's important for these platforms, because AI is so big and booming, and if they don't start implementing AI, they're going to fall further and further behind. We can see that if we look at BlackBerry, right? Of course, BlackBerry wasn't a software or AI company, but still. So BlackBerry,

they had the phones with the physical keyboards, et cetera. But once these other phones started coming out with virtual keyboards, BlackBerry was just like, we're just going to keep doing physical keyboards. And where's BlackBerry now? Nowhere to be seen, right? Yeah, a distant memory. A fond memory.

Speaker 2 (40:48)
Yeah, exactly. Yeah. yeah, the blackberry.

Speaker 1 (40:51)
I felt so cool when I had that keyboard. It was the best.

Speaker 4 (40:56)
Yeah, so I definitely think that AI will help these platforms remain relevant. It can help curate the content that they have on their website, it can help edit the website, stuff like that. So they spend fewer man-hours on figuring out how to put the

content on their website, and they can focus on actually creating the content, working on strategy, and on other things to add to their platform. AI can really take the underlying load of all of the repetitive tasks their employees were doing, and then they can focus on more important things.

Speaker 1 (41:48)
Yeah. I mean, even synthesizing like finance news, like that would be crazy. Yeah. If you can have

Speaker 4 (41:53)
Pulling

it all together. Yeah. Scraping.

Speaker 1 (41:55)
That would be super helpful for anybody that's trying to get a foothold, because I know a lot of investors, just normal people, guys like me and you, who just look at the news and would love something that's all aggregated into one place, even just stock prices and whatnot. But what do you guys think it takes to successfully integrate AI across

legacy platforms like this, seeing as you guys have experience integrating Power BI... is that Power Platform? Am I saying it right?

Speaker 4 (42:34)
Yeah, I mean, there's Power Platform, Azure, stuff like that. I think a big thing, I mean, at my company, RSM, one big thing they do before implementing a cloud system, like a Microsoft ERP system or a CRM system, is they first bring in our MIS team, which is our managed infrastructure team, to do an assessment to see if the

Speaker 2 (42:37)
services, those kinds of things.

Speaker 4 (43:03)
company can even handle these cloud systems, the amount of data that needs to go to the internet, and things like that. They do a kind of infrastructure check, and if the infrastructure needs to be updated, they help with that. So I think when it comes to successfully integrating AI into these systems, these platforms need to look at their whole value chain holistically and see,

can my actual infrastructure, my data centers, my servers, handle the increased load of this AI system that's constantly running all these transactions? Can I handle it? Then you also need to figure out, hey,

are these business processes really good, or am I just trying to integrate AI into them just to integrate it? Because AI can't fix bad business processes. If you put bad data into AI, you get bad AI out. So really, they need to look at their entire system, their entire,

let's say, data flow or content generation system, and really look at it. If the business processes that they have are bad, they need to first rebuild those from the ground up, and then they can start integrating AI into their system, because otherwise they're going to fall into the same trap.

Speaker 2 (44:46)
And in consulting, we see it a lot: companies really think that, you know, upgrading to a new system just fixes everything automatically. But you have to take into account their old flows and automations: what could have been automated, what isn't automated, how are their current automations running, how can they be optimized? When you move to a new system,

a lot of times old systems needed custom automations for things that are now out-of-the-box options. Whenever you're migrating to new flows, there's no reason to bring over some of those old workflows or automations, because they already exist in the new platform. I think stuff like that exists with AI too, right? Just saying, we're integrating AI. Again, we kind of go back to what we touched on earlier:

AI right now is used as such a vague term everywhere that these companies can just say, we've integrated AI, look how modern we are. But they don't explain: how are you leveraging AI? What is this AI doing for your company? What is it doing for me as a consumer? I think you see that a lot.

Speaker 4 (46:00)
check. Thanks.

Speaker 1 (46:01)
Yeah.

Speaker 2 (46:11)
It has to be implemented correctly. Like you said, and I like that you said it, holistically. It's a ground-up thing. You have to train everything on good data, and that takes time, right? You need to look at your stored procedures, all the data that you have on hand: how is it laid out, how are you collecting data, and what are you doing with the data once you've collected it?

All of these things are needed when you're looking at putting out anything AI. Like if you're using a chatbot, right? If we're doing an internal company chatbot, it leverages AI. What are you feeding it? You're going to feed it all the company data that's relevant. If I want to go in and say, hey, I want to make sure I'm following compliance standards around this, the internal chatbot

Speaker 1 (46:54)
Yeah.

Speaker 2 (47:08)
could provide me the information I need. But if it's trained poorly, if I ask for compliance standards and I get the wrong standards or nothing, whatever it may be, I may find myself out of compliance, and myself and the company are going to be in trouble, right? And so that's an internal example, but that goes for anything, if you've got bad data, if you've

not taken the proper time and precautions. You know, you need a way to update those compliance standards in the model. So everything has to be looked at, like you said, holistically. I like that. And once you build on it, it's proper. And be clear about what the heck you're doing with the AI. I get so tired right now. Like the iPhone, right? They're like, look at us, we added AI into our iPhone. And you're like, okay, well, what can I use it for?
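The "what are you feeding it" point is really a retrieval problem. Here's a toy sketch with hypothetical document names, using naive keyword overlap where a real internal chatbot would use embedding search; the lesson is the same either way, the bot can only surface what's actually in its data:

```python
def retrieve(query: str, documents: dict) -> str:
    """Return the name of the document sharing the most words with
    the query. Toy keyword overlap, standing in for embedding search."""
    q = set(query.lower().split())
    return max(documents, key=lambda name: len(q & set(documents[name].lower().split())))

# Hypothetical internal docs: if the compliance text were missing or
# stale here, the bot would confidently return the wrong thing.
docs = {
    "expense-policy": "compliance standards for expense reports and receipts",
    "holiday-schedule": "company holiday schedule for the year",
}
answer = retrieve("what are the compliance standards for expense reports", docs)
```

Garbage in, garbage out applies at this step: the retrieval can be perfect and the answer still wrong if the underlying documents are.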

Speaker 4 (47:51)
So yeah.

Speaker 2 (48:07)
It's like, well, you can generate emojis. Well, is there a practical use for the AI stuff?

Speaker 4 (48:17)
Yeah, we can summarize your messages, right? And it's the worst summarization.

Speaker 2 (48:19)
Yeah, or your notifications

Yeah, like we're in our chat or whatever, and I'll open up my phone, and it'll summarize the messages, and I'm like, I get nothing out of this. So these companies are just kind of throwing out there that they have AI. It's just so big right now.

Speaker 4 (48:41)
Apple's AI story is a little different. We should probably touch upon that next episode. Yeah, for sure. It's too much to go into right now; it probably needs a full episode to talk about. They need to do better.

Speaker 1 (48:48)
F**KER

Yeah,

mean, but you would think Apple would have the best of the best.

Speaker 4 (49:02)
Yeah.

the

Speaker 2 (49:10)
Yeah, they're good at like getting the idea out there and then making it good five years later.

Speaker 1 (49:19)
That's crazy.

Speaker 2 (49:22)
We're still waiting to see what this Apple Vision is going to do practically. I'm excited for it personally, but we'll see what they do with it. I think even with the iPhone itself, when they first released it, I was like, okay, I can touch the screen now. It's cool, but it doesn't technically give me any more value than this other phone over here.

Speaker 1 (49:28)
Yeah.

Touch the screen, that's cool though. When the first touch screen came out, though, that was the coolest thing.

Speaker 4 (49:52)
That was cool.

Speaker 2 (49:53)
It was cool as hell, man. But the problem was, like half the time I touched the screen, it didn't work. I don't know if you remember that, but yeah, whenever they first came out, it took a few years for it to be consistent.

Speaker 4 (49:59)
Yeah.

Speaker 1 (50:02)
You would have to tap it hard.

Yeah.

Speaker 4 (50:10)
Yeah. So what's this I hear about these deepfake scams?

Speaker 1 (50:16)
yeah. Did you guys see the video I sent?

Speaker 4 (50:19)
No, of course I did not.

Speaker 1 (50:21)
Oh man, dude, so people are actually using deepfake AI tech to impersonate people. In this particular case, there's this really big financial analyst called Michael Hewson. He's basically a really big financial analyst there in the UK.

And there are AI-generated videos of him being used to promote fake investment opportunities on WhatsApp, like, yo, join my group type of thing. And it's like, why would this person ever do something like that? But I mean, social media sites are under pressure now to stop the spread of deepfakes, because you can really impersonate anybody now. And with how far the technology has come, it can be

very realistic, especially to people who are not used to seeing it a lot. But yeah, how do you guys think we could safeguard the public from deepfake-driven scams?

Speaker 4 (51:30)
I mean, unfortunately, people just have to start being vigilant, and creators have to say things like, if I'm sponsoring something or saying something, it's going to be on this account only. Or say, I'm never going to sponsor anything, so if you see a video of me sponsoring something,

I'm not the one doing it. Or these platforms are just going to have to figure it out, like have AI look at every single video and try to find these deepfakes. I mean, I don't know a lot about deepfake technology, because I've never tried to make a deepfake, so I don't know off the top of my head what kind of safeguards these social media platforms are putting into place.

But I think it's going to have to fall on the actual creators on these platforms to be like, hey, this is the only account I have. If you're seeing videos on a different account, cross-check with this account, and if that video is not here, then it's not me. Yeah.

Speaker 1 (52:47)
Yeah.

Speaker 2 (52:48)
Yeah.

A lot of short-form content creates issues, right? I mean, you look at TikTok, Shorts, any type of short-form content: people repurpose other people's content all the time. And I think there become ethical concerns too. If you think about the huge Twitter thing, that was a big deal around content moderation on social media and platforms in general.

People saying, you know, you over-moderated, you're controlling freedom of speech, those kinds of things. It started out simple, right? Don't be a Nazi, pretty clear. But then you get into these kinds of situations like deepfakes, and, kind of like you said earlier, there are hallucinations or false positives that the AI can make.

Speaker 1 (53:28)
Give it a breeze.

Speaker 2 (53:44)
So how do we make sure that it's removing content that's truly deepfake, and we don't allow it to get mishandled to the point where we're removing things that people are saying, or that maybe a company doesn't want said, something along those lines? They could automatically set these things up to be removed as well. So I think,

to me personally, deepfake technology is kind of the scariest side of AI. Yeah, I'm not a big fan of deepfakes. I don't really see a lot of benefits of deepfake technology, to be honest with you.

Speaker 1 (54:26)
Other

than memes, probably.

Speaker 4 (54:28)
Well, I mean

Speaker 2 (54:28)
Yeah, and I mean, even then, it's just kind of a slippery slope.

Speaker 1 (54:31)
Yeah, that's true. Yeah, because you're using somebody's likeness if you're doing

Speaker 4 (54:35)
Yeah.

I mean, I can kind of see deepfake technology helping where, let's say, unfortunately, your loved one passed away and they sent you a letter, and you want to have a video of them reading the letter to you or something like that. I can see something like that helping. But other than that, yeah, I don't really know what good can come from deepfake technology.

Speaker 1 (54:47)
yeah.

Speaker 2 (55:04)
Yeah, I mean, I get, okay, we have motion capture, and maybe they want to use it for films, I guess. But I'm just not a fan, personally, of deepfake technology. I definitely think there needs to be some way to flag either accounts or content. If you're using deepfake technology, I think there just needs to be a tag,

or something like what they use on Twitter, or X, whatever you want to call it: community notes. I think there needs to be some type of community flagging for things like this, so people do know. Like, I'll watch YouTube Shorts, and every once in a while there'll be something that pops up and blocks the video, and it's like, oh, this has been flagged for XYZ, do you still want to watch it? I feel like for a deepfake they should have that, where it's like, hey, this has been flagged as a deepfake.

Speaker 4 (56:00)
Right. this account is known to use deepfakes or something.
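The community-flagging idea the hosts are describing could work something like this sketch. The thresholds and the count-plus-rate rule are made-up illustrations, not any platform's actual policy:

```python
def label_video(flags: int, views: int, min_flags: int = 10, min_rate: float = 0.001) -> str:
    """Label a video once enough viewers report it as a deepfake:
    an absolute floor of reports plus a minimum share of views,
    so a handful of reports on a viral video can't trigger it alone."""
    if flags >= min_flags and views > 0 and flags / views >= min_rate:
        return "flagged: possible deepfake"
    return "unlabeled"

label_video(flags=250, views=40_000)  # enough reports to show a warning
label_video(flags=3, views=40_000)    # too few reports to act on
```

Requiring both conditions is one simple way to resist brigading: a small mob can't flag a popular video off the platform, and a single report on an obscure one isn't enough either.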

Speaker 1 (56:05)
Yeah, something like that. I

mean, when I think of deepfakes and scams, I feel kind of nervous for the older generation, where they're not able to pick up on those kinds of things. And man, if they fall for something like that, that'd be horrible. I mean, you get a lot of people

Speaker 4 (56:24)
I

mean, I'm 29, and even on LinkedIn people post, oh, which one's real? And I don't know, it's so well done. You really need to focus and ask, is there some weird tic, is there some weird artifacting on the video? And half the time there isn't, and you really can't tell.

Speaker 1 (56:34)
So

Speaker 4 (56:52)
Unless you're being super, super nitpicky or something like that. They're getting so well done, it's crazy.

Speaker 1 (57:00)
except for

Speaker 2 (57:01)
The scariest part to me is, even YouTube had this advertisement running for a while that was literally a deepfake ad, and the only way most people could tell is because it pronounced one or two words incorrectly. But you would have to have watched past the 15-second skip point, right? Like, I'm doing something, washing my hands or something, and I've got it sitting there, and

17 seconds in it says something off, and you're like, huh. I think it's crazy that a company like YouTube, which is owned by Google,

is not able to even take the time to review something that is being advertised regularly on their platform. And that was a financial advising thing as well, like, sign up for courses type thing.

Speaker 4 (57:59)
I mean, as long as they get money, maybe they just don't care. Yeah.

Speaker 2 (58:02)
Yeah,

I think there's no argument there. I mean, you look at Google search, where it's come versus where it was, and I won't get into that right now.

Speaker 4 (58:11)
We can talk about it next time.

Speaker 1 (58:13)
Yeah, we'll talk about it next episode. We kind of talked about it already, but what's the responsibility that these platforms have when fighting this issue? Do you think they just get their money and then turn a blind eye to it? Or do you think there needs to be something in place to actually crack down on this kind of thing? Because I think it's going to be a big issue down the line

as AI grows and gets better, as well as deepfake technology. What do you think the responsibility is going to be for these companies?

Speaker 2 (58:54)
I think it's going to be, you know, we all went to the business side of school, right, and they teach you social, ethical, kind of governance-type things. And what is it, where you can use resources over and over... I can't think of what it's called right now. Sustainability. Sorry. So

I think it's going to be one of those things that you learn in school. As people come through school, it's going to be an environmental, social, governance kind of thing. I do think that just as companies are responsible for that, they are responsible for this as well. Anything ethical, your company is obligated to abide by ethical standards. And I think,

in general, consumers at a large scale know what you're doing, right? They're going to use a company that aligns with their beliefs. And AI is such a hot topic right now that if you're found to be misusing it, you're done pretty quickly, I think.

I think this is a bigger deal to people than any of those other things we've mentioned, for the most part. Outside of things like not hiring people based on race, or child labor, I think this is one of those things where people will really quickly feel like their privacy is no longer valued, or anything along that line. So if you're leveraging AI in the wrong way, I think people will see it. A lot of these companies get special tags and awards for their sustainability practices; there are different licenses they can receive for their practices. So I think there'll be something like that at some point around AI regulation. It's probably going to come down to

a non-government organization, because I think all of this is just moving too fast for bureaucracy, to be frank. I mean, you think of technology over the last 20 years: I don't think the government has really even caught up to regulating the internet properly, or companies like Google that are a big focus of theirs. I don't think they understand it enough to regulate it quickly. And the level of bureaucracy, you know,

Speaker 1 (1:01:27)
Yeah.

Speaker 2 (1:01:48)
I think a great way to visualize how slow bureaucracy is and how it impacts society is honestly to look at how rapid-fire Trump's been issuing things, and people are saying, well, that won't be allowed, that won't be allowed, we don't know whether it's allowed or not. He's passing these things and getting them done, whether or not he knows they're going to be

blocked at a later date. He understands the bureaucracy is so slow that it could be two years before something is actually stopped. I think a lot of companies understand the same thing. They're saying, okay, the bureaucracy is going to take too long, so I'm going to do as much as I can in as quick a time as I can. And I think it's going to really come down to non-government organizations,

consumers as a whole. I think we're really at a big point in society where we understand that, and where there's a lot of distrust in the government in general. And with the internet, you know, everybody has access to everything and

Speaker 4 (1:03:05)
How does that entity enforce that though?

Speaker 2 (1:03:10)
Well, I think it comes down less to enforcing and more to labeling companies, right? I don't know all the labels that there are, but there are non-governmental organizations that give these tags to companies, like, this company abides by XYZ, so they get this tag, and anybody that buys from them knows. Kind of stuff like that. Yeah, there are organizations that are responsible for granting

these third-party, optional things; you don't need them to legally operate, but when people are looking at companies to buy stuff from, maybe they have a sustainability certification associated with them. I can't think of what they're called off the top of my head; it's been a while since I did this stuff. But I think it'll come down to, these companies abide by XYZ AI regulations. And I think there's just,

you know, look at USB-C for the iPhone. Even there, they used their own connector for a while, society came to a point where they hated that everything else was using USB-C, they faced a lot of pressure, and then eventually it was regulated. And I think that's just what you'll see in any industry right now: the mass populace is going to pressure these companies. Good or bad, it happens,

and the companies can choose to abide by these types of things, or they can choose not to, and I think society will kind of decide for itself. There are always pendulum swings, as we've seen, especially in the last five or six years. I think it'll be more public.

Speaker 4 (1:04:52)
So

Speaker 1 (1:04:53)
Nope.

So, what was the most surprising AI news story of the week for you guys?

Speaker 4 (1:05:02)
That's a good question. I mean, probably Yahoo getting into the AI space, because I used Yahoo for a long time. Unfortunately, now I don't really use it that much, but I guess that means they know that people aren't really using their platform as much anymore, and they need to start getting into the AI space to compete with all these

other players. So I really liked that story when it came out. It really resonated with me because of that Yahoo nostalgia. So for me personally, I like the Yahoo story a lot.

Speaker 2 (1:05:48)
Yeah, I think for me, it was the Blackwell release, to be honest. I think the Blackwell Ultra release was the most surprising. I really wasn't expecting Nvidia to come out with anything really impressive, to be honest. I mean, every year it's like, oh, we've got the 4080, we've got the 3080, we've got the 5080, and it's like, oh yeah, it's got 5% more in this or 10% more in that. So them coming out with something that seems very

targeted toward AI, which is what they know people have started using their GPUs and chips for, right, AI-centric stuff, crypto mining, that kind of thing. It looks like they're leaning into that, which I'm impressed to see. With companies releasing stuff saying, yeah, we're using AI, I feel like we're hearing that regularly. I think with Yahoo, I will agree that I'm

kind of surprised that they made the move, but I do think it's not incredibly surprising with these companies. I want to see more of what they're going to do with it and how they're going to implement it. Because Yahoo, they're not like AOL or anything, where they've just really fallen away, but they've really fallen behind Bing and Google.

Speaker 1 (1:07:18)
Yeah, they got some work to do kind of thing.

Speaker 2 (1:07:20)
Yeah, but I do think that somehow they've stayed at the forefront of finance news over the years. I mean, I think all of us probably use them for finance news.

Speaker 1 (1:07:32)
First, Cramer. The opposite of what he does. The opposite of what Jim Cramer does, that's what I use Yahoo for.

Speaker 4 (1:07:46)
I I think like Finviz.

Speaker 1 (1:07:49)
yeah, Finviz is good too, yeah.

Speaker 2 (1:07:51)
But I haven't used it. I think we're kind of in an era of small digital companies, a lot of startups and things like that coming up, that people are using. And a lot of companies like Yahoo, they're fighting a losing battle right now. So I'll be curious to see. I think, as the younger groups come up and people get interested in things like finance and investment, it's a lot more

accessible now than it has been in the past, which means it's easier for smaller companies to develop applications and leverage these bigger companies' APIs, things like that. I think it'll be interesting to see. I think everything in the next five years with AI is going to be sit back and watch the fireworks.

Speaker 4 (1:08:42)
Yeah. I mean, everything is rapidly expanding and all this technology is advancing. So who knows, in the next five years the AI landscape might look completely different than what we think. I mean, people used to think that by the 2010s we'd have flying cars or something, but that's not the case. So we'll see what the future holds for these companies and AI in general.

Speaker 1 (1:09:12)
Yeah. Speaking of the future, what do you guys think is an AI trend that no one is talking about that might be huge down the road? What's one thing you could put your finger on?

Speaker 4 (1:09:24)
I mean, because of the amount of resources and processing power these big models take, I think in the future people are going to be focusing more on small language models, or just smaller models in general, and tailoring them to specific use cases. And I think that allowing users to

run these smaller models locally, in terms of data governance and things like that, will be huge. I know data privacy and stuff will also be talked about more and more. So I think the ability to run models locally, as well as to use fewer resources with these small models for specific use cases, will be pretty big. I mean, some people are talking about it, but people are more focused

on these big models and the big proprietary things that come out. But I think we need to start looking more closely at the small language model players.
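The resource argument for small models is easy to make concrete with back-of-the-envelope math. A rough sketch, counting weights only (real memory use also includes activations and the KV cache):

```python
def model_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory needed just to hold a model's weights."""
    return n_params * bits_per_param / 8 / 1024**3

# A 7B-parameter model: roughly 13 GB at 16-bit precision, but only
# about 3.3 GB quantized to 4 bits -- the difference between needing
# a server GPU and fitting on a laptop.
fp16_gb = model_memory_gb(7e9, 16)
int4_gb = model_memory_gb(7e9, 4)
```

This is why the small-model-plus-quantization combination matters for running things locally: the weights of a quantized 7B model fit comfortably in consumer RAM, where a large frontier model never will.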

Speaker 1 (1:10:34)
Yeah.

Speaker 2 (1:10:34)
Yeah, I think, I think an underrated area is the agricultural industry. know somebody that works in the weather industry. And, and they were talking about like how they're starting to implement AI and it's, you know, so much easier to use. It's way more accurate, like a way, way, way, way. He was talking about like all the different systems they've had to use in the past versus, know, what they're using now.

And he was saying the system that they're using is similar to, you know, chat GPT or these like internal chatbots of, know, you use it as an assistant, say, Oh, I see this trend, but I don't understand. What's the percent chance that this happens here. And what's this? What are these two paths mean? Blah, blah. I don't know the exact stuff he was saying, but, um, the fact that, you know, you don't have to take

three hours to try to cross-check all these different things, you can take ten minutes, because it's already been trained on everything. The predictive analytics is just so much more advanced. Um, but when it comes to the agricultural industry, that part alone, um, just the predictive analytics, is going to be phenomenal for them. You know, I come from a farm town, and I've seen many times where

farmers will plant things thinking the soil is going to reach a certain state and the weather is going to be a certain way. Then, you know, halfway through the year, there are things they never would have thought about that are impacting them. Even, like, one of my neighbors was talking about how they have to take into account the migration of certain insects, which is all weather-based. But they're not like birds, where every year

they migrate the same way; they're a little different in that they migrate with weather patterns. So that's something that we don't really think about, right? Like, it's easier to think about weather. Like, yeah, it's going to rain X amount this year. But we don't think about, well, you know, these insects that eat through plants or mess up soil or change the levels of nitrogen in the soil and stuff like that.

We don't really think about that part. So I think that's going to be an industry to, you know, think about in terms of how it's affected. And there are going to be a lot of companies that go in and target that as well. So I think that'll be an interesting industry to watch over the years.

Speaker 1 (1:13:12)
And if you had to invest in one AI company right now, which company are you betting on, and why?

Speaker 4 (1:13:22)
I mean, I wouldn't necessarily invest, or tell anybody to invest, in this company, but a company to definitely keep an eye on is Perplexity. Um, because they don't have their own model, but they essentially utilize other models and democratize them by allowing people to use these functionalities for, what, $20 a month. Um, OpenAI has

deep research; Perplexity essentially had their own. And they're kind of democratizing AI for a lot of users to be able to use it, because not everybody has $200 to spend. And sure, they're using other people's AI models, but they're also solving issues that these models had, and they're

basically enhancing the capabilities by doing so. So I think Perplexity is definitely a company to keep an eye on.

Speaker 1 (1:14:28)
If you enjoyed today's episode and want to stay in the loop on everything happening in the world of AI, tech, and finance, make sure to follow us on your favorite podcast platform, subscribe to our YouTube, and share this episode with a friend or colleague who you think would find it valuable. Until then, stay curious.

Speaker 4 (1:14:47)
Thanks, Ed.
