Cream City Calculation
Three friends talking about data and how it impacts our lives and the lives of others.
The Evolution of AI: NLP to LLM
Join Colleen Hayes, Frankie Chalupsky, and Sal Fadel as they explore the history of AI, its current state, and how we use it daily—highlighting its benefits and challenges.
Relevant articles:
In a first, Google has released data on how much energy an AI prompt uses, MIT Technology Review
https://www.technologyreview.com/2025/08/21/1122288/google-gemini-ai-energy/
Welcome to the Cream City Calculations podcast. We're three colleagues and friends who love data and love to talk about how data is impacting our lives. I'm Colleen. I'm Frankie. And I'm Sal.
Sal: Welcome back to Cream City Calculations. This episode is NLP to LLM. We're gonna talk about AI and the evolution of AI: really understanding where the language came from, how it developed into where it is now, and then how it became mainstream. First of all, I'll give everybody a little background, maybe a little background on who I am as well, even though I've done an episode. I am a data scientist, and I've been working with natural language processing, NLP, for a very long time. It's been really exciting to watch it evolve into AI and LLMs. First I will say that AI is not truly AI right now. I do believe that it's still a machine learning process, so a lot of this stuff is based in machine learning. We're gonna go through that from the beginning a little bit, and then I'll actually have Colleen and Frankie say how they have evolved using it. I think it'd be fun.
Colleen:Sounds good.
Sal: Perfect. Overall, where it started was really data science back in, I think, the beginning of computers. Honestly, it's actually well before that: with statistics and probability they started building the fundamentals. And I think it was around 1970 when they first started to talk about artificial intelligence in general. They didn't really truly know exactly what it was, where it was going to be, or how people were gonna use it. I think they thought of it a little bit less from a take-over-the-world perspective and more of, hey, this is gonna help with some of our heavy computing, heavy mathematics. That evolved, but really, simultaneously, computers evolved at an exponential rate that allowed us to start processing more and more, which led to, I think, one of the biggest parts of how AI came to be, and especially natural language processing and data science: the big data craze. Colleen, you're old enough; Frankie, you might have been in it at that point. But I just remember the big data craze going nonstop when we were starting our careers, or in the midst of it. It was all big data, and no one was talking about AI.
Colleen: No, that's completely true. People were concerned with having enough data to run against these different algorithms, so as to get some value out of it. And I remember companies that I worked for kind of questioning, are we going to have enough data to create a model? Yeah, it was definitely a thing.
Sal: Prior to synthetic data and fake data, people were just like, oh, I'll build out data on this small subset and then do what they call a Monte Carlo and build out a bunch of simulations that would then feed into a model.
Colleen:Mm-hmm.
Sal: That was the advanced machine learning at that point. And then a couple of things developed out of it; researchers ended up developing them. One is the GPT, a type of transformer. What a transformer really does is shape how data is vectorized, and sorry if I'm getting way too technical, but really it's all about how a word positions itself in, let's say, a 3D environment, right? What transformers, and a lot of these natural language and vectorization techniques, did was group words together and say, hey, these are similar words. From there it matured even more: okay, these words are similar, but we want to make sure we're taking out unnecessary words, putting in tone, and starting to tag all the words, is this a noun? Is it a noun when it's referred to in a certain pattern? As that started to evolve, that's when we started to move closer and closer to this AI idea: hey, this might actually be a thing where people ask a computer a question in natural language, in plain English or any language now. It evolved into, oh wow, now this can actually predict what the response should be. And predict, I think, is the biggest word there. That's why I say AI is not true artificial intelligence, and I think a lot of people out there will say that. It's truly machine learning, so it's all probability. What an LLM system, an AI system, does now is predict the next word based on the probability given whatever words came prior to it. When you ask a question, that allows it to filter down those words, and then it's just building the probabilities and chaining those words together. That's why these models now say reasoning.
It's actually still less of a reasoning model. They're not fully there yet; a lot of these reasoning models are only about 55% accurate on these things. So it's still a math problem right now, a machine learning problem of prediction. It hasn't built in thought, in that aspect.
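The next-word prediction Sal describes can be sketched in a few lines. This is a toy bigram model, purely illustrative: real LLMs use transformer networks over learned tokens, but the core idea of chaining next-word probabilities is the same.

```python
from collections import Counter, defaultdict

# Count word pairs (bigrams) in a tiny corpus, then pick the most
# likely follower of a given word.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(prev: str) -> dict:
    """Return {word: probability} for words observed after `prev`."""
    counts = bigrams[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def predict(prev: str) -> str:
    """Most likely next word after `prev`."""
    probs = next_word_probs(prev)
    return max(probs, key=probs.get)
```

Here `predict("the")` returns "cat", because "cat" follows "the" twice in the corpus while "mat" and "fish" each follow it once. Chaining such predictions, at vastly larger scale, is the probability machinery Sal is pointing at.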
FRANKIE:Probably a good thing.
Colleen: I guess some of us can see that in action sometimes, when our talk-to-text or our predictive text is not so great, right? You ever have autocorrect suggest a word and it's like, that was not what I meant?
FRANKIE: Yeah, and when you think about it, the possible combinations of words that go together, it's enormous when you're thinking about every single word in the English language. It's gotta be extremely challenging to predict that next word, given that there are so many possibilities.
Sal: So how they ended up predicting it was with these things called vectors, and neural nets. They would vectorize the words that way. I know I'm totally nerding out, but it got to the point of, all right, how much data can we actually feed into these neural nets to vectorize? That's where a large chunk of this processing power has gone. When AI systems talk about a billion or 10 billion or 900 billion parameters, that's what they're talking about: how much can they dump into a vector and train a model on, so that it predicts the next word accurately.
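The "words positioning themselves in a 3D environment" idea can be illustrated with cosine similarity over toy vectors. The numbers below are invented for illustration; real embeddings have hundreds or thousands of dimensions, learned from data rather than written by hand.

```python
import math

# Toy 3-dimensional word vectors. Words with related meanings get
# vectors that point in similar directions.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

With these made-up vectors, "king" sits much closer to "queen" than to "apple", which is exactly the "similar words together" behavior Sal describes.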
FRANKIE: And I would imagine, too, when you're speaking versus typing, I bet those vectors would be completely different. 'Cause I know, for me, when I'm typing I try to sound a lot more professional, and if I'm speaking I'm not necessarily thinking of that. So that's interesting.
Colleen: I was thinking of differences like inflection, right? The English language itself is a difficult language, because you've got words like record that mean two completely different things. That emphasis is lost when you're typing it out, whether it's a text message or an email, and your computer's trying to predict what word you're gonna type next. It's sort of lost the value of whether you're talking about a verb or a noun, because the word record, R-E-C-O-R-D, means two different things depending on what part of the word you emphasize.
Sal: That's exactly it. As we went through this evolution in the last five years, watching LLMs and AI and OpenAI and all these chatbots like ChatGPT come along, one of the things we saw was, oh no, it can't spell Mississippi correctly, or something like that. These different words where you're like, oh, it should be so simple. It's because it's learning it, learning that probability. These LLMs and chatbots have a hard time distinguishing different words from each other. And as we went through this evolution, which has accelerated really quickly, one of the big things we've done is prompt engineering, correct prompting, so that it can start to learn the context in which you're trying to refer to this information. What it's really doing in the backend is filtering out words or chains of words, and this is where prompting is super, super important. It actually builds out the filters that need to be in place to know whether it's REcord or reCORD.
Colleen:Yeah.
FRANKIE: So could we dive in a little bit deeper? We've talked about a high-level overview of how AI has evolved. Can we dive deeper into 2022 through today? I know 2022 is when ChatGPT was released, and I feel like there's been a lot of change in AI just in the last couple of years. Once it got big, nobody could stop talking about it, and everybody was hearing the buzzwords and all of that. But I do think it actually has progressed a lot in the last couple of years, and I'd love to dive deeper into how it's progressed there.
Sal: Yeah, let's do that. In the last five years, obviously the biggest thing is the access to it: the UI became friendly enough to start chatting with this thing. Before, we were passing in data, vectorizing it, finding the nearest neighbor, or matching it, or doing a TensorFlow model predicting the next word. It wasn't really easy to work with. I think the biggest thing that happened, around 2022, was ChatGPT came out with a UI that can interact really well with these models. That's where the adoption came: everybody's like, whoa, what is this? I get to talk to my computer and it responds to me in a relatively normal way. I think that opened the floodgates. As it started to evolve, they saw, oh my God, we gotta start building out and training larger and larger models. That's where you see this AI race going on, because there's a ton of consumers that want to take in this information, and there's a ton of businesses that want to use AI to inform their day-to-day work, and honestly for some job replacement. How much can we dump into these models so that they have more context on information that would be important to customers? So now you have this AI race to build the biggest and best model: you're seeing Grok take off, you're seeing ChatGPT, you're seeing Microsoft and AWS all coming out with their own models, Gemini coming out with Google. They're all focused so hard on building the biggest and best model that you start to see this evolution of, oh my God, these things can now do so much. And everybody's trying to grab their market share, because before this there was zero market share. So everybody's like, let me get this piece.
And so that's where I think it's evolved to. Now we're getting to a point where these models are so far advanced, they've gone 10x, 20x, honestly a hundred x from the beginning. They've gone so far that now they can actually do work. So people are starting to think, oh, how do I connect an AI model to an MCP server, or something that can do a function for me and build out reports for me? And companies like OpenAI and Grok are building that into their systems. So it's, oh, I can build a PDF for you in Grok or in OpenAI, and it's quite impressive.
Colleen: I think that's a really good segue into some thoughts I've been having while you're talking here, Sal. All this is great in theory, and I think most of us are familiar with the idea that you can go to a web browser, go to chatgpt.com, write in a question, and have it answered. How are we starting to see this in our professional lives? How has this been integrated into your day-to-day life in a way that has actually made it useful?
Sal: Yeah, maybe we'll take all of our perspectives here. For me, it's literally a large chunk of what I'm doing right now, because at the company I work for, I am helping drive data and AI strategy: building out what information we have to collect so that we can provide it to an AI or LLM system, and it can start understanding the information we have locally. It's not what it was trained on, the publicly sourced internet; it's that movement into private information. And when I say private, I don't mean PII or personal information, but that private company information that gives companies competitive advantage. Some of the things we're starting to work through are the secure integrations between these LLMs and the private data. A lot of the things you can connect are called MCP servers, Model Context Protocol servers: how they communicate and how they function together. The other things are connectors. These AI companies are coming out with native connectors into Snowflake, into Dropbox, into your Outlook emails and Microsoft products. So now it can start to do functions for you. For example: book me a flight to XYZ. It can go out to the internet, book that flight, put a reminder on your Outlook calendar, and reflect any changes, say, oh, the flight has been delayed, here's an update to your calendar. It can send emails notifying people of things. So now it's becoming much more of that actual assistant.
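The flight-booking example Sal gives follows the general tool-calling pattern: the model emits a structured request, and the host routes it to real code. Here is a rough sketch of just the dispatch shape, with hypothetical function names; a real MCP server speaks a JSON-RPC protocol over stdio or HTTP, and these stubs stand in for real airline and calendar APIs.

```python
def book_flight(destination: str, date: str) -> dict:
    # Stand-in for a real airline API call.
    return {"status": "booked", "destination": destination, "date": date}

def add_calendar_reminder(title: str, date: str) -> dict:
    # Stand-in for a real calendar API call (e.g. Outlook).
    return {"status": "added", "title": title, "date": date}

# Registry of tools the model is allowed to invoke.
TOOLS = {
    "book_flight": book_flight,
    "add_calendar_reminder": add_calendar_reminder,
}

def handle_tool_call(call: dict) -> dict:
    """Dispatch a model-issued request like
    {"tool": "book_flight", "args": {...}} to the matching function."""
    return TOOLS[call["tool"]](**call["args"])
```

The point of the registry is that the model can only reach functions the host has explicitly exposed, which is also where the security conversation around these integrations lives.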
Colleen: I guess when I think about my personal usage of AI, it's a lot more nitty-gritty than that. I will say, too, I've heard a lot of people grumbling online that companies are trying to embed AI into everything, and folks are saying, I wish I had a way to just turn that off. I don't wanna see the AI responses, I wanna see the actual sources, because we all know that sometimes what a chatbot or model returns to you is not true, right? It will hallucinate and can return things to you that are completely made up. So I think that's an interesting trend: it's getting to the point now that some companies are trying to integrate so much AI into so many things that people are getting a little tired of it, which is interesting. But for myself, I really appreciate when it can automate tasks that I don't wanna have to do. Case in point: there's a feature now on some of our remote meetings where you can turn on an AI assistant, and it will come up with a summary after the meeting of what was said and who took on what action items. I've seen pretty accurate results. It's really nice to have, and it's nice to be able to share that with the people who were in the meeting so you can recall what was worked on last time, and in your next meeting determine, oh, did everybody complete their takeaway items from last week, or whatever the case may be. Or in creating tickets, right? We use a ticketing system to keep track of work that needs to be done. So if a user comes to us and says, hey, there's an issue with this data source, could you take a look at it, please? You can take either an email or a message in an instant messaging system and create a ticket from that. From what they've said in that chat or in that message, I've had pretty accurate results with AI companions that are integrated into these other products.
You know, they've created some pretty accurate tickets of the work that needs to be done, which I think is a really useful case for that.
FRANKIE: For me, I would say I use Gemini a lot for work, 'cause that's our approved tool, so I pretty much always have it open. Actually, I'd rather use either Glean or Gemini before I even use Google these days. And if you're not familiar with Glean, it's an AI tool you can use to search across your enterprise. I actually think that'll eventually go away, because I think we'll be able to do that kind of stuff without it, but right now it's super helpful. When I have questions, I can just utilize that to find the answer instead of digging through all of my folders. I've got a Google Drive, I have storage on my computer, and there's thousands of employees that have information; it's really hard to find that information without some sort of search that goes across your enterprise. So that's one huge way I've been using it.
Colleen: Yeah, that's amazing. I remember, several years ago, taking part in huge enterprise-wide projects that took months, to implement something like an enterprise search tool so that you could do that: search across all your Word documents, all your emails, your intranet sites, for any content related to whatever you were searching for. So that's really awesome.
FRANKIE: Yeah. Yeah, I love it. Part of the problem, I guess, would be that it gives you thousands of results, and you have to narrow it down to help it understand what you're actually looking for. Or maybe I need to prompt it better, but usually I don't know exactly what I'm looking for either; I'm just looking for some information, and you never know what people are gonna name things. So I try to keep it open-ended and search through the material myself, but it's still way faster than trying to figure out who the right person is, reaching out to them to see what information they have, and then probably getting sent to somebody else. So that's one cool way. We also have a tool that we use called Cursor that's been pretty substantial. It will actually build out demos for us. That's a piece of the job that's very time consuming; I like building things out, but sometimes I have to build things in 15 minutes, so it's a great tool to be able to go in and have it throw something together for me. It even connects to my accounts, so it will populate in my accounts. That's a really substantial tool. And the last way I've been really using it at work: we have an agent built out for sales information, so we can go and use natural language to have a conversation about any customer or anything like that.
Colleen:Very nice. Very cool.
Sal: So, Colleen, to touch on your point about people falling away from it a little bit, because it's not returning the type of information they want, so they just go to Google and search what they need, or go directly to the source. And to touch on Frankie's point about all these tools that are now getting integrated: I think one big thing people have to understand is that with an LLM by itself, the correct prompting should be pretty specific, specific tasks based on the prompt you send it. And this is where agents actually come in really well. What an agent is, is an orchestrator of multiple different LLMs or prompts. It can go out to these LLM functions and say, oh, do this first and send back information, all my news articles, right? Then it can run another prompt saying, based on these news articles, summarize this. Then it can go, based on those news articles and a summarization, give me an analysis, and now connect it into my data. So it gives that next layer. I think what most people are doing now, I mean the standard consumer going to OpenAI or to Google and using Gemini, is just using an LLM. They haven't even touched agents.
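The orchestration Sal walks through, fetch articles, then summarize, then analyze, can be sketched as a chain of narrow prompts. The `call_llm` function here is a canned stub standing in for a real model API; only the chaining structure is the point.

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real model call; returns canned text
    keyed on the task word at the start of the prompt."""
    if prompt.startswith("FETCH"):
        return "article1; article2"
    if prompt.startswith("SUMMARIZE"):
        return "summary of " + prompt.split(": ", 1)[1]
    if prompt.startswith("ANALYZE"):
        return "analysis of " + prompt.split(": ", 1)[1]
    return ""

def news_agent(topic: str) -> str:
    """Chain three narrow prompts, feeding each output into the next:
    fetch -> summarize -> analyze."""
    articles = call_llm(f"FETCH news about {topic}")
    summary = call_llm(f"SUMMARIZE: {articles}")
    return call_llm(f"ANALYZE: {summary}")
```

Each step is one specific prompt, which is exactly the "correct prompting should be pretty specific" principle; the agent is just the code that sequences those specific prompts. Note that one user question here triggers three model calls, which is also why agents multiply energy and cost per query.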
FRANKIE:Yeah. And one thing that's cool about agents too is that you can limit the information that they consume. So if you have, a company that has multiple divisions, you can limit to like a couple different databases or even down to a couple different tables so that when you're asking questions of that data. It's only utilizing the information that's provided. So that's really nice too.'cause if you're looking for very specific answers that are relevant for you and high accuracy, you can set that up so that you can get that.
Colleen: I think my comment was more along the lines of: so many companies are embedding AI into things. This isn't people going out hoping to use some sort of chatbot to get an answer back; it's not a matter of prompt engineering. It's a matter of logging into Facebook and having it ask you, do you wanna enhance this photo with AI? No, man, no, I don't. Please just let me post my photos from my vacation, right? That's just one of the things off the top of my head. I've just seen people talking more and more about this, like, is there an option in this app to turn it off? I'm not going there to have help writing my resume, or to have it reword what I wanna put on my kids' birthday party invitations. I don't need AI, I don't want AI part of this. I'm just hearing more and more people getting frustrated with the fact that it's constantly there. It's part of these other applications where you would not think to go to do something with AI, and it's constantly recommending things. It thinks it's being helpful, but in reality it's like a three-year-old helping you wash the dishes: it's not really that helpful, it's not really necessary.
FRANKIE: I totally agree. I recently had a chatbot experience that was just terrible, and maybe I'm a little bit more of a critic because I know what a good chatbot looks like. I had an order I placed with some company for dog beds, and it said when I ordered that it would be delivered in three days. I still have not gotten my dog beds, and it's been like 18 days now. So I was trying to reach out to their customer service. I asked the chatbot about the delivery and where my dog beds were, and it provided me an ad for a different dog bed. I was so annoyed. I'm just like, this is the worst chatbot. And I actually filed a complaint. I said their chatbot was terrible and the delivery is terrible. But that's just an example of where AI was built out poorly and it's not helpful.
Colleen: Yeah, but that was an instance where you went there knowing you were interacting with a chatbot, right? Annoying in and of itself.
FRANKIE:I did wanna reach out to customer service though, and they didn't have a way of doing that without using the chat bot.
Colleen:Yeah. Yeah.
Sal: They let go of all their customer service.
FRANKIE:Yeah, exactly.
Colleen: I think that type of thing can be done very well, and having a chat interface is a really great way to weed out the majority of questions. But I've also interacted, we probably all have, with chatbots that were just not helpful. A lot of times you'll see where they offer you prompts: are you interested in these different topics? And if your question, the thing you need an answer to, is not one of those suggested prompts, it's like, well, how do you go about finding a person, or finding the right way to word that question? So I'm just a big believer that you should still have a way for people to talk to a person, because a really easy way to frustrate your customers is if they have an experience like Frankie's and can't get assistance. Why do I have to go through this chatbot thing? Why can't I just get to a person at a certain point? If I've tried that, there should be a good path that says, I'm sorry we couldn't help you with your request, so-and-so will be on the line momentarily. I've had that happen as well, which I appreciate.
Sal: That brings up a great point about where AI and these systems are going. They keep talking about this AI bubble, right? They think it's gonna burst at some point. Can this keep going? Can every company try to put AI in every one of their systems, build out everything, have AI customer service, and then replace their whole staff with AI? No, the answer is no. This is a bubble; I think it's gonna pop, but it won't fully pop. There are fantastic use cases for AI, but I do think that not every company in every aspect will need AI integrated. I think the question companies, and individuals actually, need to ask themselves is: is it efficient? Is it more effective? Do I gain revenue, or whatever your metrics or your KPIs are, by putting AI in there, or am I just putting AI in because it's something big and hot?
Colleen:I think that's exactly it, right? Like you have to ask yourself the question, are we doing something with AI just because it's ai, because it's the next cool thing, or is it more appropriate to have an actual customer service person there?
FRANKIE: Yeah. And I feel like we need to remind ourselves, too, that it's still a tool. It can't just replace
Colleen:Yeah.
FRANKIE: a workforce. But one cool use case that I learned about, or helped out with recently: there was a company that provides a lot of different parts and machines, and their customer service group has all the manuals. They're literally searching through the manuals when somebody calls in for support, and for them that takes a long time, because there are a lot of different manuals and a lot of different parts and machines. They're literally paging through, trying to figure out what the problem could be by reading those manuals. So we used AI: we read in the documents and put a chatbot on top of them, so you could actually just ask questions. It only looks at those documents, so it's very specific to what the customer needed. They were able to just ask it questions in natural language and take away a lot of the time they spent digging through those documents. That was a fun one that was very successful.
Colleen: And that's a very specific purpose, right? If you think about the type of person you'd have to have behind the scenes to answer all those questions, that's a person with 20 years of experience that would know all those parts. So I think that's a real obvious win there, right? It's literally taking the place of this unicorn of a person that you probably would struggle to find.
FRANKIE: For sure. And I've seen that be successful: when you have any sort of role that's reading through documents, that's been one that's really successful and actually has a very high return on investment.
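The manuals chatbot Frankie describes is a retrieval-augmented setup: pull the most relevant passages, then answer only from those. A minimal sketch of the retrieval half, with made-up manual text and simple keyword-overlap scoring; real systems typically use vector embeddings for retrieval and an LLM to phrase the final answer.

```python
# Hypothetical manual passages standing in for the real documents.
MANUALS = {
    "pump-100": "To reset the PUMP-100, hold the red button for five seconds.",
    "valve-20": "The VALVE-20 requires lubrication every 500 hours of use.",
}

def retrieve(question: str, k: int = 1) -> list:
    """Rank passages by how many of the question's words they contain,
    returning the top k. Crude, but shows the retrieval step."""
    words = set(question.lower().split())
    scored = sorted(
        MANUALS.values(),
        key=lambda text: len(words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]
```

Because the answering step only ever sees what `retrieve` returns, the system stays "very specific to what the customer needed", which is also what keeps hallucination in check for this kind of use case.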
Colleen: Mm-hmm. Yeah. Can I ask you guys a question? I've seen a lot of concern online as far as how much energy it takes to run some of these models, or even when you're making a ChatGPT-type query. Do you guys have thoughts about that, I guess in general?
Sal:Do I have thoughts?
Colleen: Yeah. I mean, I can quote this article here. I found something in the MIT Technology Review, that Google released information on how much energy a single AI prompt uses, and this makes it seem much more manageable, I guess. You hear about these data centers that are proposed for rural areas, and then you immediately hear how much water that data center would consume, or how much electricity, and the impacts that would have ecologically on the area around it, as well as on the communities, because it ups the price of energy for basically its neighbors. But this article puts things in much more manageable terms. Basically, they're saying that one median prompt, one that falls in the middle of the range of energy demand, consumes 0.24 watt-hours of electricity, which is the equivalent of running a standard microwave for about one second. When you think of things in those terms, that's not terrible, right? But then think in scale: how many prompts do you run per day, and you're one person.
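The scale point Colleen raises can be made concrete with back-of-envelope arithmetic on the article's 0.24 Wh figure (the homes comparison below uses a rough ~30 kWh/day figure for an average US household).

```python
# Energy per median prompt, per the Google figure Colleen cites.
WH_PER_PROMPT = 0.24

def daily_kwh(prompts_per_day: int) -> float:
    """Kilowatt-hours consumed by a given number of prompts per day."""
    return prompts_per_day * WH_PER_PROMPT / 1000

# One person at 50 prompts/day: 0.012 kWh, about 50 seconds of
# microwave time using the article's equivalence.
# A billion prompts/day: 240,000 kWh, on the order of the daily
# electricity use of ~8,000 US homes at a rough ~30 kWh per home.
```

Per user it is negligible; it is the aggregate, and the multi-call agent workloads discussed below, that drives the data-center buildout.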
FRANKIE: It makes me wonder about the comparison of how much energy it costs to put something in a Google search versus putting it into ChatGPT.
Colleen: Yes, exactly. I wish they would've had that for comparison. Like, what scale are we talking about? It would be nice to have that information as well.
FRANKIE: Yeah, 'cause I feel like it's really become more of a topic lately, and it came with the buzz of AI. But I feel like we've been doing that all along, because when you look at a Google search, it's not that far off from something like ChatGPT.
Colleen:In a lot of cases that's completely true.
Sal: What I think is where we might fall short: yes, if you're comparing a ChatGPT prompt or a Gemini prompt to a Google search, which now Google is doing both at the same time, so that's now double. But it's when you start to get to agents that it starts to build out. You ask one question and it does 50 prompts based on
Colleen:Right,
Sal: that's where
Colleen:right.
Sal: you're gonna see the energy spike. And what I think is, this is gonna be the catalyst to an infrastructure boom, building out new infrastructure for power utilities. I think that's where a big win is gonna be for the United States more than anywhere, right? We're gonna become, hopefully, more efficient with that, and hopefully more green as well, but we'll see. The other thing I see us moving towards, and this is a far-fetched way of thinking through it: both nuclear power and, honestly, satellites. With the amount of data that satellites like Starlink are able to send back down, I wouldn't be surprised if in 20 or 30 years they put a data center in space, because it's substantially colder. The cost to do that would obviously be astronomical initially, but over time it might not be as crazy.
FRANKIE:That's really interesting. I never thought about that.
Colleen: Yeah. I think, though, if you put something like that in space, there are all sorts of other issues that come into play, right? There's physical equipment out there that's not gonna be easy to service. So it may be colder, it may take up less space on the surface of the planet, but now you've got half a dozen other things that were depicted in movies in the nineties, like how do you send astronauts up there to save the data center, in the case of something going wrong?
Sal:And again, I'm talking 30 to 50 years out, if this continues that way, right? Or fusion is also a possibility now that they've actually done it. It's going to force companies to invest in new ways of thinking through energy. I think that's gonna be the biggest thing. Another thing is deep-ocean data centers. I think that's gonna be a thing at some point; they'll just put those down there and manage them as is. I think they're gonna come out with a bunch of different types of solutions for this, and it's gonna drive our overall energy capability. And again, I say "they," but it's really all of us. If they don't come out with these possibilities, we're going to hit a threshold where we can't keep up with the amount of energy AI is using, and we're actually gonna downscale or plateau.
FRANKIE:because they're already seeing like the rolling blackouts and stuff like that in areas where there are large data centers.
Colleen:Right. Like, we have issues with the power grid in sections of this country, period, as it is, without AI. So I think it's great to think of these things in lofty terms, these goals for what might happen in the future, but I think we need to be very realistic about what exists currently in this country and how feasible it is to be doing things like building giant data centers. While we may think as data people, hey, this is really great, there are really cool applications for this, there are more and more people saying, no, we don't want that in my backyard or in my state, because of the ecological ramifications of having a giant data center in the middle of the woods, or in some of these areas that are more rural and less inhabited. So I think, to your point, Sal, you're gonna have to sort of answer those questions or find solutions for those things before you'll be able to have these sorts of data centers providing the type of energy that will be needed for the next stage of these things. We're kind of at that crux right now, is I guess what I'm saying.
Sal:I don't disagree with you. I think this is where the front runners of these data centers, or more the counties that are allowing these data centers to happen, the ones giving them full access to all their water and any energy that they want, I think they're going to pay the price, 'cause they didn't push back on these companies that have millions and billions of dollars to invest in this, to build a better, more efficient infrastructure, one that maybe doesn't need that cooling, or doesn't use as much water to cool it. They can do their research and build out more efficient ways. Again, it's for their bottom line, so they're going to do it if counties unilaterally start pushing back on that.
Colleen:Yeah, I just think, I mean, they just voted against adding a data center in Port Washington, Wisconsin, as a great example. There were studies and numbers that came out as far as how much water that data center would be using from the Great Lakes, and it became immensely controversial. So I don't know how you look at a company, and I don't remember what company it was that was gonna try and move in there, but how do you look at that company that's a multi-billion-dollar company and say, hey, you gotta come up with a better way to do this? I think they're gonna keep kicking the can down the road and keep looking for a community that's gonna say yes, before they invest the money and the time to find a way to make their data centers more energy efficient, I guess.
Sal:I don't disagree, and that's why I say unilaterally: everyone has to come to a consensus. And really, it's probably gonna be government regulation of, hey, you can't use more than X amount of energy, right?
Colleen:Yeah.
Sal:For data centers.
Colleen:I mean, I agree with you that that type of thing is needed. I unfortunately don't think we're gonna see that for the next several years in this country.
Sal:I don't disagree with that either.
Colleen:Just period, plain, full stop.
Sal:Yeah. But thinking through it, we are in a revolution of new change, right? And with that, it becomes uneasy for a while. And again, long term, probably past when I'm alive, I think, hopefully, we come to a better state at that point.
Colleen:Mm-hmm.
FRANKIE:So circling back, Sal, let's talk a little bit about what the difference is between Google and something like ChatGPT. Why would you use one or the other?
Sal:So I think the biggest part is that Google does natural language processing and Elasticsearch-style keyword matching, which is a way to say these keywords are very similar to all these other keywords. I know that sounds really similar to vectorization, but what it's actually doing is filtering down the websites or URLs that you're searching against. Let's say people have certain keywords in their website; it will go search those words and filter out the rest. It's a little bit different. It's a direct filtering process. Versus an LLM like ChatGPT will actually go back through its memory, I guess you could say, or its training, take in that information, and say, hey, based on what you're asking, these are the things that would be very similar to that. Those two things are different but the same in some ways. And I don't know how much energy each of those takes differently, but one is like an Elasticsearch: the more keywords you put in your website, the more likely you are to be at the top of Google, right? With an LLM, you don't really have that ability to add more keywords into your website and then show up in more searches. It's gonna go do a web search, or look through the information that it's trained on. Hopefully that answers that.
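Sal's distinction between keyword filtering and LLM-style retrieval can be sketched in a few lines of Python. This is a toy illustration, not how Google or any LLM actually works: the corpus, the vector values, and both functions are made up for this example. `keyword_search` mimics direct term filtering, while `embedding_search` mimics ranking by vector similarity in a learned embedding space.

```python
import math

# Toy corpus: three "web pages" and their text.
docs = {
    "brew-guide": "coffee brewing guide with grinder tips",
    "tea-times": "tea steeping times and temperatures",
    "bean-shop": "buy coffee beans and a burr grinder",
}

def keyword_search(query, corpus):
    """Search-engine style: rank pages by how many query terms they contain.
    Crude substring matching stands in for a real inverted index."""
    terms = query.lower().split()
    scores = {name: sum(t in text for t in terms) for name, text in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Made-up 3-dimensional "embeddings" standing in for a learned vector space.
vectors = {
    "brew-guide": [0.9, 0.1, 0.4],
    "tea-times":  [0.1, 0.9, 0.1],
    "bean-shop":  [0.8, 0.0, 0.6],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def embedding_search(query_vec, corpus_vecs):
    """LLM-retrieval style: rank pages by vector similarity, not exact words."""
    scores = {name: cosine(query_vec, v) for name, v in corpus_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(keyword_search("coffee brewing", docs))       # ranks pages by term overlap
print(embedding_search([0.8, 0.0, 0.6], vectors))   # ranks pages by nearest vectors
```

The point of the contrast: stuffing more matching keywords into a page directly raises its `keyword_search` score, which is roughly the SEO game Sal describes, whereas `embedding_search` ranks by position in a vector space the page owner doesn't directly control.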
FRANKIE:Yeah, that makes sense. I just think, in general, my go-to these days is either Gemini or ChatGPT. But a lot of people just don't understand exactly how those two differentiate. When ChatGPT came out, everyone was like, oh, it's just a Google search, we've had that forever. And yeah, it's a little bit different. I just find that interesting.
Colleen:Yeah, I do think there are people who use them almost interchangeably. But hopefully, over time, as you're using them, you get a sense for how, and I'm just using ChatGPT as a generic term here, but I think the more that you use it, the more you realize what you're getting back is more robust, and it's providing some context to what you've asked, instead of just spitting back a list of the websites that essentially have those terms in their SEO.
Sal:So on an average Google web search, how often do either of you go past page one?
FRANKIE:No.
Sal:Ever?
Colleen:Very rarely.
Sal:I rarely do that. If I had to go down that far, it's not even coming close to answering what I'm asking. And that's where the difference is, I think: it's going to filter down the information that you're trying to get at more easily. So with
Colleen:Yeah.
Sal:ChatGPT or Gemini, it's just going to filter it down. You don't have to have a thousand pages of links.
Colleen:And I think you get amalgamations, too. ChatGPT, again as a generic term, would combine information from various sites to give you a cohesive, singular answer, instead of a list of all the different webpages that are sort of all saying the same thing.
Sal:I hope that you enjoyed our conversation and got a lot out of it, understanding a little bit more about where AI came from, from natural language processing, or NLP, all the way to the large language models that are out today. We didn't talk a lot about the future, but we'll probably end up talking about that at some point. So keep listening and keep calculating.