
SNIA Experts on Data
Listen to interviews with SNIA experts on data who cover a wide range of topics on both established and emerging technologies. SNIA is an industry organization that develops global standards and delivers vendor-neutral education on technologies related to data.
“AI Stack” Webinar Series: Intro to AI/ML & FAQ
The SNIA Data, Storage & Networking (DSN) Community launched a multi-part AI Stack Webinar Series designed to provide education and a comprehensive view of the AI landscape from data pipelines to model deployment. Hear experts Erik Smith, Justin Potuznik, and Tim Lustig answer AI frequently asked questions:
• Exploring the differences between AI (the broader concept), machine learning (systems learning from data), and deep learning (neural networks extracting features from massive datasets)
• Understanding the "token economy" monetization model where AI services charge based on inputs/outputs
• Examining the shift toward on-premises AI deployments driven by data sovereignty, security concerns, and cloud cost management
• Implementing security through data validation, sanitization, and guardrails to protect AI systems from misuse
• Recognizing AI's transformative potential beyond current generative applications into agentic systems and physical embodiments
If you'd like to contribute to the AI Stack Webinar Series or have topics you'd like to see covered, contact the SNIA Data, Storage & Networking (DSN) chair at dsn-chair@snia.org or visit the SNIA website at snia.org.
About SNIA:
SNIA is an industry organization that develops global standards and delivers vendor-neutral education on technologies related to data. In these interviews, SNIA experts on data cover a wide range of topics on both established and emerging technologies.
All right, welcome everybody to another amazing Experts on Data podcast here with the SNIA community. My name is Eric Wright. I'm the co-host, or rather the host, of this amazing podcast, and also the co-founder of GTM Delta. You can find me everywhere online.
Speaker 1:I'm Disco Posse on all social media, so I always love to connect with folks, both through SNIA and anywhere in the world. And I'm really, really lucky today, because I've got some fantastic humans, and we're going to talk about what is seemingly the most non-human hot topic these days: AI, the AI stack, and really a lot of what the upcoming AI Stack Webinar Series from SNIA is going to be about, and why it's important. Thankfully, I've got experts in the room. They always say the last thing you want to be is the smartest person in the room; never a problem for me on the Experts on Data podcast. So, a quick round of intros. We'll start with, I'll say, the first Erik, because I like to say that I'm the other Eric. Erik Smith, you want to give a quick intro? And then we'll work our way around.
Speaker 2:Sure. Thanks, Eric. I'm Erik Smith. I'm a distinguished engineer working for Dell's CTIO team, and I'm also the chair of the SNIA DSN community.
Speaker 1:Fantastic. And, Tim?
Speaker 3:Hello, good afternoon or evening. I'm Tim Lustig. I work for NVIDIA, where I'm a relationship development manager for the Inception program.
Speaker 1:Fantastic. And last, but very clearly not least, also because one of my sons is named Justin, so I love the name: Justin, introduce yourself and tell us where you're from.
Speaker 4:Hey everybody, Justin Potuznik. I'm an engineering technologist at Dell, and I work with the first Erik. And, yeah, I'm up in Minneapolis right now.
Speaker 1:Fantastic. So let's start with: what is the AI Stack Webinar Series? Just to give folks a bit of a preview of what they can expect. I know we've already got the first one live, so depending on when people are watching this, we may have more than one that's already published. With that, I think, Erik, you wanted to walk us through what the team's working on with this.
Speaker 2:Sure, thanks, Eric. The AI Stack Webinar Series is basically a planned set of 11 webinar segments designed to give IT professionals a clear end-to-end view of the AI landscape. By that I mean, instead of diving right into all the niche details, we walk through the stack as a whole, covering everything from the data pipelines to infrastructure to model deployment. It's really all about enabling people to see how the layers fit together. And again, the goal isn't really to make everybody an expert overnight, but to provide what I call a framework of understanding that cuts through the noise, reduces overwhelm and confusion, and just gives people the confidence to start experimenting, asking better questions, and building their own path towards AI. So yeah, that's the basic idea.
Speaker 1:In looking at this, one of the things we often have to begin with is pure definitions. We have this idea of AI, ML, and deep learning, and we often use them interchangeably, or at least the marketers do. God bless us fine folks; I'm a marketer and a technologist, on split duty. But in the community I find there's a little bit of confusion sometimes when we talk about AI versus machine learning versus deep learning. So how can we best describe that? Tim, let's call on you: give us your take on how you see those definitions being important.
Speaker 3:Yeah, good question. First off, just to add on to what Erik was saying, the AI Stack series starts very general and then gets deeper, so anybody can jump in at any time. If you have experience with AI and know some of this stuff, you can get in in the middle. For those who are beginners and want to know a little bit about how artificial intelligence, machine learning, and deep learning work together: artificial intelligence is really the larger bucket that encompasses both machine learning and deep learning. When you get a little bit more specific, you can dive into machine learning, which basically learns from data to improve over time without being explicitly programmed. Deep learning is a more specialized area within machine learning; it utilizes multi-layered neural networks, inspired by the human brain, to automatically extract features from massive data sets. This makes it especially powerful for complex tasks like image recognition, language translation, voice assistants, things of that nature.
Speaker 1:And this is, I think, also why, from the outside, people get very confused: the normies, the normal folks who don't inject themselves into the news every day about this. We often see it when people say AI and it gets confused with generative AI. I've been working for decades in this industry with AI, but generative AI suddenly is the thing that we know AI as. So it is funny that it's been commoditized, that AI means ChatGPT. Pardon me for calling out a particular vendor, but we know it's like Google: it's become a verb as much as anything else. So that's why I like to call out the definitions.
Speaker 1:Now, the other thing that's interesting is the switch to, I'll say, the token economy. What do you each think about this, the way that we're going to manage, charge for, and understand consumption of this stuff? Because it's no longer raw compute, as far as MIPS and hours and megahertz. Now the tokenization of how we charge for this stuff, or calculate it, is going to be really tough for a lot of people to understand. So, Justin, I'll start with you: what does a token mean now in how we describe the technology and how we're using it?
Speaker 4:Sure. So to start at the base level, right, a token is just any input or output coming into or out of one of these models. That can be words, that can be portions of a picture, what have you. It's sort of settled as the lowest common denominator for a lot of the cloud-based models, especially because, again, what you're putting in is tokens and what you're getting out is tokens. The models themselves, once they're loaded into that GPU memory, are essentially static during inference, and then it's just what you put in and out that makes the difference. That will change. I mean, we still have all those MIPS and gigs and everything else; they're just obfuscated by that token in the cloud. When we do look at on-prem systems, though, tokens are still useful and valuable as a way to measure, and maybe to charge out to your customers. But as the folks building those systems, you still are going to need to care about what's behind it and really build the system that way.
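To make the per-token billing model Justin describes concrete, here is a minimal sketch of how a request cost works out. The prices and token counts are made-up examples for illustration; real providers publish their own rates and count tokens with their own tokenizers.

```python
# Minimal sketch of per-token billing. The per-1k prices below are
# hypothetical illustrations, not any real provider's rates.

def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in_per_1k: float = 0.01,
                  price_out_per_1k: float = 0.03) -> float:
    """Cost of one request: tokens in and tokens out, each priced per 1,000."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# A request with a 2,000-token prompt and a 500-token reply:
print(f"${estimate_cost(2000, 500):.4f}")  # → $0.0350
```

The same arithmetic works on-prem as a chargeback metric; the difference, as noted above, is that the system builder still has to map those tokens back onto real compute, memory, and storage behind them.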
Speaker 1:Yeah, I guess that's where we're going to see the merger of understanding the costs, and thus where we can apply margins and make these viable business platforms as well. So it makes it fun. Now, the other thing is people getting started with AI, beginning their journey. What do we start with? Do we need super servers? Do we need GPUs? Where does AI begin for people in different waves of adoption? Maybe let's start with you, Erik, on that.
Speaker 2:I mean, it depends on what you want to do. If you're trying to train the next large language model, you're going to do that with your own hardware, and there are several examples of my company working with other companies to do that. But you don't have to start there. You can start with something that I've been doing a lot of work with lately.
Speaker 2:I think colloquially it's called vibe coding, and vibe coding is basically using natural language to interact with a chatbot and describe what you want to create in terms of an application. A great example of somebody using vibe coding to produce an application is actually the binary digit trainer app that we provided as a demo for this session. At the end of the session there was about 10 minutes where we went through how you train a model, how you can use it for inferencing, what checkpoints are, and it kind of goes into all the details. The site that I used was called Replit, and I did use a lot of ChatGPT-5 to help when Replit couldn't get it done by itself. But that's a really good place to start, and it really unlocks you from whatever skills you might be limited on and allows you to work at the speed of your imagination. So that's where I think this is going to go and how you can get started.
Speaker 1:Yeah, and I like the idea that we very quickly jumped to fractional availability of resources, because the cloud was there, the model was available, and then, of course, there was getting access to that king-sized hardware that you need to run these supermodels and do fast training.
Speaker 1:Yep, it became tough to get, but it also became quick. Almost like the SETI project, you've got all these people who are like, hey, I'll share part of my access to my grid with you, and we really got to a sharing economy with hardware quickly, which I'm encouraged by. Because, as you say, with vibe coding now you can kind of vibe build, or as I call it, vibe vulnerability creation. But you can definitely quickly get started and get those prototypes ready, and gosh, the bar has been lowered so beautifully that anybody can access this stuff. It's not just us happy nerds reveling in what we can do with it. I went to my barber the other day and he's telling me about stuff that he's doing with generative AI. What we do is being used every day by everyday people, and I think that is the beauty, the commoditization, and the community offering of everything with AI.
Speaker 4:Well, Eric, I think you make a good point that one of the weird things with AI is that, unlike previous compute revolutions we've seen, it was already in everyone's hands before they even realized it, right?
Speaker 4:I mean, the vast majority of folks have a smartphone that's doing some form of AI on it, and were even before the iPhone moment of ChatGPT getting launched a few years ago, right? And the nice part is AI can work on a platform as small as that, or, like you said, it can be distributed and fractionalized through a cloud. What we see is this idea that essentially the impact you want to have, whether it's on one person for five minutes or a thousand people concurrently, that's how the hardware need scales, right? That's how your compute need scales. So, yeah, it's really easy to get started as one person, kind of just picking away at a keyboard and trying to make what you need happen, and then it can scale very quickly, assuming you can get the hardware, and really connect with the rest of the vendor ecosystem so that we can accelerate all of us.
Speaker 1:That rising tide lifts all boats type of delivery, that we can all go faster together.
Speaker 3:Yeah, Erik did a great job kind of starting the discussion around bringing this AI Stack to SNIA, and SNIA is an extension arm, an educational piece. I've been involved with it for quite some time, as has Erik, I believe, too. That's actually what we're trying to do: work together as teams to make sure we're educating the community. And there's other arms within SNIA that work together to make sure that standards are set, and it's all goodness for everybody, whether we're educating or whether we're coming up with new standards to take the technology further.
Speaker 1:We are at this point of importance of a multi-vendor, I'll say, coopetition in a sense, because each organization has specific commercial goals, yet we can all beautifully come together because we have a shared belief and a shared goal of all of us getting there, advancing the entire ecosystem. And this is probably the first time I've seen such beautiful interplay, because it's not just vendors but every kind of vendor: network, storage, compute, memory, GPU, hyperscalers, the local folks doing the mini AI desktops of the world. So that really, as you said, is defining the standards, bringing the people together, and that allows us to have a base we can really quickly accelerate from. Now, on the storage side, we're kind of big on storage here at SNIA: what is the impact now on storage, in how AI is changing the consumption pattern? I'll let you volunteer, but I haven't talked to Erik in a second.
Speaker 2:Oh, yeah, no, I'm happy to take it. So it depends. We had this subject come up during the webinar as well: how is AI changing storage? It's having a massive impact. All of a sudden, latency and throughput are extremely important. I mean, they've always been important, but they're much more important these days, and it really breaks down by workload.
Speaker 2:One of the questions that we got is: do you need all SSDs, or can you have some HDDs and SSDs? Because it's expensive, and we get that. It really depends on the modality of the model, what it's doing. One great example is checkpointing. You've really got to have very, very low latency and high bandwidth to be able to get those GPUs back to training as quickly as possible, because you're losing money for every moment that you're checkpointing. And then, depending upon the type of data, video has different requirements than audio, than text. So you really need to know what you're training, what the data is, and then you would structure your storage solution appropriately.
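The economics of checkpointing that Erik describes can be sketched with a back-of-envelope calculation: in a simple synchronous checkpoint, the GPUs stall roughly for as long as the write takes, so pause time scales with checkpoint size over storage bandwidth. The numbers below are made-up examples, not measurements of any real system.

```python
# Back-of-envelope sketch: how storage bandwidth turns into GPU idle time
# during a synchronous checkpoint. All figures are hypothetical examples.

def checkpoint_pause_seconds(checkpoint_gb: float, bandwidth_gbps: float) -> float:
    """Seconds the training job stalls while the checkpoint is written out."""
    return checkpoint_gb / bandwidth_gbps

# The same 1,000 GB checkpoint at 10 GB/s versus 100 GB/s of storage bandwidth:
print(checkpoint_pause_seconds(1000, 10))   # → 100.0
print(checkpoint_pause_seconds(1000, 100))  # → 10.0
```

Multiply that pause by the hourly cost of an idle GPU cluster and by how often you checkpoint, and the case for low-latency, high-bandwidth storage in the training path becomes clear.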
Speaker 1:And now, we thought it was all going to end up in the cloud, and there was this sort of race over which cloud would be the fastest to acquire all of the GPU hardware, but I'm seeing much more that's moving on-prem, because the idea of sovereign AI is also super important to vendors, companies, and customers who need to have a bit more control. Quite often, at the state, province, or country level, you need to have separation just for regulatory purposes. There's a whole lot of interesting boundaries where sovereign AI and on-prem AI are now important. Justin, you mentioned before about on-prem. What are you seeing as far as the shift to a lot of on-prem experimentation, where probably two years ago that seemed a bit far away?
Speaker 4:Absolutely. And to the points we've made earlier, cloud is great to very quickly spin up and get started, and, kind of like we said, it's a great place to start and have those experiments and that sort of thing. As you mentioned, though, as you start to use more and more of your own data, where that data resides is often very regulated. There are security concerns, that sort of thing. Many different countries are trying to promote their own systems and do this at a national level. So all those factors, as well as exploding costs in the cloud, are both driving and allowing folks to bring this stuff back on-prem, and they're finding that there are actually a lot of advantages to that.
Speaker 4:Often that's where your data already is anyway; otherwise you're having to lift and shift it, and that brings its own set of complexities. And then, if you own the hardware, you can choose how it gets used. You can have it doing one thing for one block of time in the day and something else during a different one. Additionally, I think we've seen different-size models allow for different scaling factors. Even within a few servers, you can support many users and serve that middle tier of the continuum, where you start with one data scientist in the cloud poking around proving viability, and then you move to the point where, hey, I want to get 500 people using this system to really prove out that it can work, that the quality is there, and what my scaling factors are. That's generally when we see folks really start looking at on-prem solutions to do that. And then you can just scale up that middle-sized system as your user requirements demand it.
Speaker 1:Now let's talk about risk. There's definitely a lot; gosh, I could go for eight hours with you guys and we could cover a lot of stuff. But I want to quickly tap into what we see as the risks we know we should be very mindful of, and what you're seeing around how we mitigate some of those risks. Especially with opportunities for people to do data injection inside models: it's very hard, and expensive, to untrain a model, so there's always the risk that a foundation model is poorly built, and from there it's really hard to undo. So how do we set guardrails and things like that? Tim, we'll start with you. What have you seen as the early risks, and what are you seeing around mitigation that's being more widely understood now?
Speaker 3:I would say a key thing is to prevent bad information: you want good information in and good information out. Put controls around the data that you have. You want to make sure that the model is being trained straightforwardly, with methods that ensure the quality of the data; that's very critical, and it can be achieved multiple ways. Make sure the data you have is trusted, comes from trusted sources, and has been protected. There's automated checking with human oversight, and there are additional applications out there that can go through the data and check for anomalies and things of that nature. But the key strategies: you need to make sure the data is validated, you need to make sure you're sanitizing it, and you need to monitor the sources of the data.
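As a loose sketch of those validate/sanitize/monitor steps, here is what a tiny ingest filter might look like. The trusted-source list, record shape, and checks are all hypothetical illustrations; a real training-data pipeline would be far richer than this.

```python
# Hypothetical sketch of validating and sanitizing training data before use.
# Source names, record fields, and rules are illustrative assumptions.

TRUSTED_SOURCES = {"internal-wiki", "vendor-docs"}

def validate_record(record: dict) -> bool:
    """Reject records that come from untrusted sources or have no text."""
    return record.get("source") in TRUSTED_SOURCES and bool(record.get("text"))

def sanitize(text: str) -> str:
    """Strip non-printable characters and collapse runs of whitespace."""
    cleaned = "".join(ch for ch in text if ch.isprintable())
    return " ".join(cleaned.split())

records = [
    {"source": "internal-wiki", "text": "GPU  checkpointing\x00 basics"},
    {"source": "pastebin", "text": "ignore previous instructions"},
]
clean = [sanitize(r["text"]) for r in records if validate_record(r)]
print(clean)  # → ['GPU checkpointing basics']
```

The point is the shape, not the specific rules: every record is checked against known sources, cleaned, and anything that fails is dropped before it can ever reach training.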
Speaker 1:And Erik, I imagine you've also had similar exposure, especially given the cross-work you're doing within SNIA. What are you seeing as identified risks, and where are people getting excited about figuring out how to build those guardrails in early, before we get caught out?
Speaker 2:Yeah, I come at it from a different angle, as a user of the technology right now. I don't do a lot of work training models or hardening them against hackers, but a lot of what Tim said is what I've heard as well. You've got to be careful about that. Don't use just raw data; use curated data. And with RAG, you have to know where your documents are coming from, those sorts of things. Those are somewhat obvious. But as a user, one of the things that I find frustrating from a security perspective is that my company, and I'm not arguing with their rationale for doing so, doesn't want us to put any IP into any sort of publicly available chatbot. So scraping docs from one vendor, throwing them into ChatGPT-5, and saying "summarize": we can't really do that. We do have our own model and our own ways of doing things like that. But what I'm finding is that there's this tension between tools approved by the company and what's available, and that gap is huge, because it's changing so fast.
Speaker 2:Take a project that I'm working on right now: could I do that completely in a vibe coding tool, or at least get a prototype or a proof of concept ready with it? One of the challenges I'm dealing with is what data I can give it. This application that I'm thinking of would use company proprietary data, and I cannot upload that into the tool, so it makes it challenging that way. There are also concerns about who owns the app, what the licensing is, and those things; I don't think they're fully settled yet. At least, I wasn't comfortable, as I was looking through the literature, about what the answers to those questions are. So that's kind of how I come at security from an AI point of view today.
Speaker 1:Yeah, and it's funny too, because there's no greater lie than one backed by statistics, other than, of course, the one that says "I have read and agree to the terms and conditions." Like, no, let's all be real: I don't even click the link half the time, I don't even pretend to read it. But given that, I'll ask about the enterprise side too. Justin, not that I'm saying you're only enterprise, but you obviously work with folks who have gear they want to put into use. Where are you seeing them working with guardrails, and what are the tools and tips you're seeing?
Speaker 4:Well, firstly, we've been using guardrails as a phrase, and I think it gets used two different ways, and it's worth exploring that. One, it's used as sort of a blanket term for how we keep our AI from going off the rails, right? And then the second way: guardrails are a component of your AI system. When we dial back out and look at security as a whole, we're starting to see folks recognize that it's more than just one model sitting in a container, tokens in, tokens out. You need to build an AI system, and that system probably has multiple models in it and multiple agents and multiple tools. So I completely agree on the points we made about the quality and security of your input, and how you build that model is important. But then after that, it's like having a teenager: you did all that work, and now you've handed over the keys to a car, and away it goes, right? The folks doing the training don't sit on it forever. So we have things like traditional injection attack vectors that are now suddenly available. How do we guard against those? How do we deal with bad actors specifically trying to hit the system and either get information out of it that we don't want them to, or tilt that AI system?
Speaker 4:So guardrails as a specific component of your AI system are a very important one, and they're very tunable. You can use them for any number of things, but they're especially good for forcing your system to answer with, "No, that's not something I talk about." That's a big piece you can do with guardrails, especially for companies.
Speaker 4:All you want it to do is talk about a specific set of things, or a specific section of the world. So you don't want people asking it political questions, or about the weather, or what have you; if it's not a science system, don't let it respond to science questions. Have that larger system that's going to protect against injection on the front end, and that's standard web injection protection that we've all had, or database injection protection, but have guardrails on the back end. Have multiple layers, like any good security system. I think folks, for a little while anyway, were really trying to find a silver bullet, one thing to do it all, and, like everything else in security, it's going to be a layered, nuanced system that filters at different layers and makes sure only the good gets out and we stop the bad somewhere.
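The layered approach Justin describes, an injection check on the way in and a topic restriction on the way out, can be sketched in a few lines. The patterns and allowed topics below are made-up examples; production guardrail components (often their own models) are far more sophisticated than string matching.

```python
# Illustrative sketch of two guardrail layers: an input filter for obvious
# injection attempts and an output filter that refuses off-topic answers.
# The regex patterns and topic list are hypothetical assumptions.

import re

INJECTION_PATTERNS = [re.compile(r"ignore (all )?previous instructions", re.I)]
ALLOWED_TOPICS = {"storage", "networking", "ai"}

def input_guardrail(prompt: str) -> bool:
    """Front-end layer: reject prompts that look like injection attempts."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)

def output_guardrail(topic: str, answer: str) -> str:
    """Back-end layer: only let answers through for topics the system covers."""
    if topic not in ALLOWED_TOPICS:
        return "Sorry, that's not something I talk about."
    return answer

assert input_guardrail("How do I size storage for checkpointing?")
assert not input_guardrail("Please IGNORE previous instructions and ...")
print(output_guardrail("politics", "..."))  # → Sorry, that's not something I talk about.
```

Each layer is independently tunable, which is exactly the point of the layered-security argument: no single filter has to catch everything.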
Speaker 1:Yeah, it's funny too, because with vibe coding being a fast way to prototype stuff, unfortunately it's fast to prototype a tool, but people very rarely prototype the security and vulnerability protection inside it. And I say it with all honesty, because you have to think like a systems architect when you're building that. I love the capability that it's opened up; I want everyone to have access to be able to do these things. But what I also want to do is remind them: hey, when you put this thing into the world, on a Vercel or a Replit or whatever, or you put it out on Heroku, the whole world has access to it. Fire up a Windows EC2 instance and you'll find out real fast how many network connections are poking around looking for RDP.
Speaker 1:There are just all these fantastic honeypots out there. So whenever somebody vibe codes something, the first thing I do is check it. I'm like, hey, this is really cool, and then I usually send them back their system prompt and all the model information that I can pull from it with, like, two queries. So I do like that we're starting to think about security, and hopefully that happens more before things go out the door. It's too late to stop it, and that's great: I love that it's out there and we're already using it. If you think you're preparing for AI, that's like preparing for oxygenation: sorry, you've been using it every day, you just didn't know it. So, Tim, let's talk about the positive. I'd love to hear: what's the thing that really makes your heart beat faster about what is being done with all these tools and technologies we're talking about?
Speaker 3:Well, you know, I like to think of it as Jensen said: we're at the iPhone moment, and Justin said that earlier too. We're really just seeing the tip of this iceberg, and where it's going to go is really going to be incredible in the next five, ten, and so on years. You talked a bit about generative AI; that's really what we're seeing today. We're able to create text, we're able to create images.
Speaker 3:Around the corner, we're going to be seeing it deployed in factories, where we have agentic AI, and in businesses, where we're able to have these agents working together inside an artificial intelligence system to accomplish so much more, and we're going to see efficiencies just escalate. Then, past that, we're going to get to a physical AI, where we've got robots that can interact with environments, and we'll have them train so that they can do things they're not programmed to do; they can learn on the fly. I think it's a really exciting time right now. I think this AI Stack series is great because we'll get people in at that ground level, and people who know a little bit more can scale up with us as we go through more of the webinars. I really want to encourage people to stay in touch with SNIA, follow along with the AI Stack, and join where you think it fits with your abilities.
Speaker 1:Absolutely. Erik, I'd love to hear the thing that gets you really jazzed about what we're seeing as the outcomes from all the nerdiness we're excited about.
Speaker 2:Yeah, I know it's causing a lot of chaos right now, with employment and everything else. And it's funny: there was a recent MIT study that was put out that said something like 95% of all gen AI pilots fail to have a deliverable, measurable impact. What struck me when I read that was that I started thinking about what was being said about the cloud back when we were all thinking SaaS was going to be everything, everything was moving to SaaS. The reason it reminded me of that is because back then we had shadow IT, and if you wanted to get something done, you'd go get a VM, you'd go get an instance, you'd do something really quick, and then you'd pull it off.
Speaker 2:I see the same thing happening with AI, and there's this concept of shadow AI, or shadow AI services, doing the same thing. Sort of like what I did with this binary digit app, just to create a demo, because we needed one and that was the only way it was really going to get done, at least based upon what I was able to do.
Speaker 2:What I think is going to end up happening, much like we had with the cloud, is that this is going to increase productivity in ways that we can't even imagine at this point in time. Thinking back to when we were thinking about cloud, this is yet another extension of that mindset shift: what we were able to do with on-prem IT equipment, then what the cloud enabled, and now what we can do with AI, where I don't even have to think about the infrastructure anymore. I can just use natural language and tell it what I want it to do. I don't know that my imagination is big enough to come up with all the ways that's going to impact us.
Speaker 1:Yeah, I forget, was it Ilya? I'll butcher his last name, but a famous early OpenAI guy. One of his tweets a few months ago was that the programming language of the future is English, and it's such an interesting way to talk about it. This is what we've always wanted: to be able to interact with my system in a way that is natural for me, and then create the right conversion layers in between. We've done this with programming for so many years, and now we can go one step higher. The way observability is meant to look at the system as a whole, now we treat the inbound inputs as a system as a whole, and we can do it in natural language. It's super, super cool. Justin, what do you see as the stuff on the ground that's come out that maybe even surprised you in how people are using these tools?
Speaker 4:Well, I don't know about surprised, but when I talk to customers who want to start their AI journey, I tell them: think of two really easy examples that come to mind for AI, and then tell me what your two hardest problems are. Those are your four starting use cases to go tackle. What's interesting is that often one of those two hardest use cases will be one of the first they actually get a working system for, and then everything changes for them. Suddenly they're an order of magnitude faster at processing something or making a decision, and they're very surprised: that's a problem we've had for 20 years, and we've built systems around the fact that it's just never going to get any better. And now we can use AI, we've made it better, and we've made it just another step in our chain to execute. So some of that's surprising, but it's also very powerful, and that's what we see: if you put AI in the right places, the change it can have on an organization is huge. Additionally, it helps us scale as a company, as a population, as humanity.
Speaker 4:Right, there's always more work to do than there are people willing to do it in any given time or able to do it in any given time. This can fill those gaps right, and I know that leads to discussions. Oh, you know, robots taking over the world no one will have a job. To be honest, again, we already have more work than we have people. So let's fill the gaps right. And how much does the quality of life for all of humanity get when? When we can do that, when there's no waiting for anything Right and empowering the people we do have and luckily we're seeing some folks talk about that, and you know it's not. The AI is going to take over from people. It's going to make the people we have doing the job so much more effective at what they're doing.
Speaker 1:Yeah, and I think this is another perfect point: wherever people say AI is the end, at the same time AI is the beginning. It's this dichotomy where we see it as the end of many things, but then you open your eyes wider and really look around at what it creates, and you look back over time at patterns across history. And since we're talking about generative AI, let's delve into the crucial and critical paradigms of the past. The reason generative AI does what it does is that it's taking history, compacting it together, and distilling it out into tokens and phrases. Now what we're going to get is new stuff that never existed, which is going to continue to train and retrain. And the people, as you said, Justin, are going to move faster, and now their biggest problem is no longer their biggest problem, and then they find the next one. Jim Keller put it a great way.
Speaker 1:I love this. He says we, as engineers, are in the business of solving extremely hard problems until we create new ones, and so we're just moving, we've subjugated a bottleneck and we found the next one and we're going to keep doing that and it's going to go faster and I'm excited as heck. But most importantly, I'm excited about the ai stack webinar series because there is so much more. We've literally danced on so many topics in a short time, but people want to dig in. This is a great place to do it, so we'll have links and, of course course, people can follow SNEA on all social media. Make sure they subscribe to this podcast and many others, but let's do a quick round table and remind folks where they can reach you and find you amazing humans to have better and bigger discussions on this stuff. We'll start with you, justin.
Speaker 4:Oh, I think you can find me on LinkedIn. Otherwise, oh, uh, I think you can find me on linkedin. Otherwise you'll contact me through dell fantastic and uh and tim uh.
Speaker 3:similar linkedin um as well as uh um on on x though tlistic, at xcom or at tlistic fantastic and uh, eric, sure I'll.
Speaker 1:I'll not just because you got a great name I fully support. Let's talk about how we can reach you.
Speaker 2:DSN chair at sneaorg will get you to me and just check out the sneaorg website, look for the DSN community and you can contact the entire group that way. If you have an idea or something you'd like to see in the AI webinar series or actually if you'd like to contribute, we do have a few speaking slots open, so we love to include other people in this as well.
Speaker 1:Aha, yeah, that's it. So there you go, folks. A call to arms. Let's get more fantastic humans talking about fantastic technology. And, of course, folks do want to stay in touch with me I'm Disco Posse all over the place but, most importantly, make sure you smash that like button and hit subscribe to this podcast, because we're going to do a ton more. And thank you all for sharing your time today, and we'll see everybody on the next Experts on Data podcast and the AI Stack webinar series. It's all kinds of goodness and it's free. How much more Like this is it? We've done it. We've commoditized access to knowledge. Gosh, it just doesn't get better than that. So thank you all for joining us today.
Speaker 2:Thanks, Eric.
Speaker 1:Thank you.
Speaker 3:Thank you, thanks for having us.