SNIA Experts on Data

Optimize Your Infrastructure for Today's Data Era

SNIA Episode 18

This episode focuses on the evolving landscape of data optimization, emphasizing the intersections of standards, sustainability, and community collaboration. From tackling large-scale implementations to enhancing environmental efficiency, we investigate how SNIA's approach to managing data across various infrastructures has broadened beyond traditional storage paradigms. Our experts cover the key trends, challenges, and solutions organizations need in order to manage data sustainably while optimizing their infrastructure for future growth, discussing:

• Exploration of the concept of optimization in data storage 
• The blurred lines between data characteristics and storage infrastructure 
• Importance of sustainability in storage practices and life cycle assessments 
• The role of SNIA Swordfish and Redfish standards in enhancing interoperability 
• Community engagement and collaboration as key drivers for innovation in data management

About SNIA:

SNIA is an industry organization that develops global standards and delivers vendor-neutral education on technologies related to data. In these interviews, SNIA experts on data cover a wide range of topics on both established and emerging technologies.

Speaker 1:

All right, everybody. Welcome to the SNIA Experts on Data podcast. My name is Eric Wright. I'm the host here of the SNIA EOD, and I'm also the co-founder of GTM Delta. Super excited to have an amazing crew here today with us. We're going to talk about optimization. This is a place that I've been living in a long time, both in the work that I've done and, heck, on a personal day-to-day basis. So if I could tell you how to optimize things, I'd love to. But even better, I've got three of the most amazing people who are going to tell us about what the whole SNIA community is doing around our data focus areas. Today we're going to talk about Optimize. With that, I'm going to do a quick roundtable, do a quick introduction. So we'll start off with Chris. Do you want to lead us out? And then just a quick little bio on you, and then we'll jump to Rochelle and then JM.

Speaker 2:

Sure, my name is Chris Leonetti. I've been a reference architect for various companies over the years, for probably what, the past, oh my gosh, 30 years now. So I've implemented very large solutions all the way down to very small solutions, and by large I'm talking $3 billion kind of solutions. But then I've also architected a lot of solutions that, you know, are in the $50,000 range to solve business needs.

Speaker 3:

So Chris has been doing that since he was a toddler, clearly, but more importantly, he is the secretary of these things.

Speaker 3:

Correct, secretary of these things, yeah, and wears many hats around our organization. So I'm Rochelle Alvers. I am the vice chair of SNIA as well as the chair of the Swordfish TWG. You guys are hearing it here first: we just renamed the SSM TWG, which nobody ever knew what that acronym stood for, to the Swordfish TWG, so that you know what we actually do. And I'm also the chair of the storage management community, which helps the education and promotion of everything related to storage management within SNIA, and that's actually one of the things we'll be talking about here in a little bit. Oh, sorry, I forgot my day job. Day job, I lead the technology initiatives and ecosystem enabling for Intel, and I've also been working on things related to everything for storage management and technology initiatives since I was a toddler.

Speaker 4:

All right, I'm John Michael Hands. I'm Senior Director of Product Planning at FADU, so right now we're working on enterprise SSD controllers for hyperscale. That's the day job. I also co-chair the SNIA SSD SIG, so we wrote stuff like, you know, the form factors page on EDSFF and the SNIA TCO model, which is used by lots of hyperscalers, and other fun stuff. I also am the secretary of a 501(c) nonprofit called the Circular Drive Initiative, where we are focused on sustainability of storage, and I'm very actively engaged in the Open Compute Project Sustainability Initiative, so I co-chair one of the work groups there.

Speaker 1:

Everybody's side jobs are like full-time jobs. It's amazing that you can keep all this on track, but I've seen you all in action, so I know how you pull it off. You're a bunch of fantastic folks. So, Chris, let's just sort of do an initial intro to people about what it is that we talk about when we mean the Optimize data focus area and what this is under SNIA's sort of new description as we talk about these data focus areas.

Speaker 2:

Yep. So when people say the word storage, they're kind of like asking how long is a piece of string. There are so many answers to what storage really means. A good example is storage management, which is something that's desperately needed out there and highly valuable, but it goes down different paths than storage environmentals, like, for instance, the Green TWG. So we have different focus areas for storage that deal with the different aspects of what storage does and what storage is. For instance, we have one group that focuses on form factor. We have another group that focuses on manageability. We have a third group that focuses on efficiency when it comes to power, that kind of thing. The hives of SNIA are basically set up to explore all those different avenues that storage can manifest itself in. In fact, Rochelle can probably go into more detail about how these hives are laid out.

Speaker 3:

Yeah, we have six different data focus areas that we've defined as we've been expanding SNIA over the years. SNIA started as the Storage Networking Industry Association, very focused on SANs. Over time, we expanded to all things storage, and the reason we came up with these data focus areas over the last few years is really a recognition that we're beyond just storage. We're really looking at data from the perspective of storage.

Speaker 3:

So when we're talking about this particular data focus area, which is optimizing infrastructure for data, it really is: how are we looking at all aspects of data and optimizing the infrastructure for data? It is from the perspective of storage, but it's really broader than just the storage elements in the system. Everything we're doing within SNIA is really beyond just the storage elements. It's the entire ecosystem managing data, including everything from managing data accelerators, managing the storage fabrics, managing the storage elements. It's definitely a broader picture than it was, you know, five or 10 years ago.

Speaker 2:

And an example of that would be someplace that the line blurs between storage and data. In fact, you can't really optimize the storage infrastructure without knowing the data that's going to be on it. What I mean by that is, let's say you've got a dataset that's highly compressible or highly dedupable. You might optimize different storage for that dataset than you would for a dataset that has no compressibility and no dedupability. You also have to be able to choose between object, file, and block.

Speaker 2:

So there's a lot of optimizations that actually matter more about what the data is than simply a place to store it. You can't simply look at storage as a box you put on a wall, that you put things in. You have to really look at storage as the optimal way of putting these boxes together in such a way to solve the needs or the requirements of the data engineer. The data engineer is going to say I need this kind of data stored in this way, with this IOPS, with this envelope, with this reliability. There's a lot of features you have to build into that design, into that solution that really lets you optimize. So the lines between pure data and pure infrastructure are so blurry that they kind of don't exist anymore.

Speaker 1:

Well, and even optimize-for-purpose, you know. We can talk about optimize for performance, optimize for capacity, and, JM, let's maybe talk for a second about optimize for sustainability, because I know that's one area that you're bringing a lot of focus to and doing a lot of work on. How does this play into the sustainability side of, you know, the world and obviously SNIA?

Speaker 4:

Yeah, we had a really good presentation at SNIA SDC, I guess just a couple months ago, on what sustainability means to storage and really the impact for the SNIA projects, which focused around circularity and media sanitization, two really key aspects. But when people hear sustainability in the storage world, they usually think about energy efficiency, and that is one really important aspect of sustainability, right? It's like, you know, this is how much energy you're using during the use phase, when the device is powered on. But I think people forget the other important part, which is what we call embodied carbon: how much did it take to manufacture that device and get it from point A to point B before it even gets into service? How much does it take to decommission it at end of life? This full life cycle of the device.

Speaker 4:

So Flash specifically, like SSDs and memory, you know, can take a tremendous amount of electricity and resources to manufacture. So what our presentation at SDC was about was the idea of circularity: you could basically sanitize the device, remove all the user data, and then have it get a second use in another use case, and that might be an emerging use case somewhere that has different performance and TCO requirements. You know SSDs. You can look at eBay right now. You can see drives that I worked on at Intel in 2011.

Speaker 4:

They're still up there, they're still working. So everybody kind of intuitively knows this, that the devices can last a lot longer than a five-year warranty, and so the main barrier has been actually being able to trust that, when you remove the data from the drive, you can reuse it. So that was the main topic of our presentation at SDC.

Speaker 2:

And that's a good example of where the hives cross over too, because you've got a group like the energy efficiency side of the house that wants to be able to reuse these drives. You've also got the security side of the house that wants to make sure that no data leaks from your infrastructure. So you've got to play those against each other, and at-rest encryption is one way to solve that problem, because you can do a digital shred, but that may not be good enough for some businesses. It may be for others. So I mean, although we've got these separate hives that handle all the technology, SNIA as a whole is all about collaboration between the hives as well.

Speaker 3:

And the place we bring that all together is in the standardization of the manageability and the infrastructure, where you can expose in a standardized way all of those use cases, to say I have the instrumentation where I can report and control and really prove that all of those use cases are true. So I can ensure that the security is in place and comply with any government regulations, and ensure that, yes, I have the security in place and I am enforcing it. But I'm also reporting back and providing the metrics to show that the reporting and the instrumentation are in place, to say that, yes, all of those sustainability metrics are in place, that I am actually getting the green aspects and the power instrumentation, and that all of those things are actually trending to the exact reporting that we are anticipating.

Speaker 1:

We have this interesting thing too of what's really important about SNIA as a community and as a standards body, and why these things are important: because the artifacts will continuously change, but the way in which we access and manage those artifacts is where we standardize. This is the reason why HTTP has been a standard for so long, and CRUD and simple things in regular development. But that goes all the way down to physical hardware, and people often lose sight of that. And I remember watching the early work with Swordfish, you know, in its earlier days, and I thought, well, this is magnificent. The trick was always getting everybody to come on board.

Speaker 1:

But once that happens, the pace of innovation was so much more rapid, because we knew every software abstraction layer could use the same common, you know, semantic architecture in order to access these hardware layers. And that meant the innovation that happens down there becomes, same way as we look at Teslas with, you know, FSD and stuff, where it's now a software update, a firmware update, to access the hardware. The hardware hasn't changed, but we can better use it, and this is happening in storage. It seems wondrous and magnificent in the future, but it's actually been the future for a long time. We've been living the future for quite a while.

Speaker 3:

Yes, exactly. And so I want to go back to, you know, talking about accelerating how this works, to something that John Michael said when he was introducing himself about all the different hats he's wearing, about working with the Circular Drive Initiative, about working in OCP. This work is not exclusively happening in the Swordfish TWG. Swordfish is built on top of Redfish. A lot of this instrumentation is actually common to Redfish. Redfish works heavily with OCP on the base instrumentation. So a lot of these metrics that we're developing are actually common metrics to Redfish that are actually common instrumentation to profiles that are being developed in OCP for servers. So we have a lot of leverage between servers, between OCP, between SNIA, so the profile and the instrumentation are actually common between about four or five different organizations.

Speaker 3:

So, as the Circular Drive Initiative says, hey, this is how we want the profiles and the metrics to be developed, there's actually very little lift we need to do to say this is what we want for this specific storage profile. It's really, okay, I have a couple of these same metrics that we have on that server instrumentation that we just need to apply over here, and we need a new profile, and it's incredibly common instrumentation. It's not a big lift. It's a very small lift that then needs to be developed for this representation, for that specific use case that we can then apply. And so that's how we can actually do rapid instrumentation, rapid development on that specific use case, and then focus on developing the alliances between the organizations to really get that rapid time, you know, and not have the silos between the organizations.

Speaker 1:

Yeah, the ability to get consensus within a community and across communities. And that's the one thing too: this is no longer exclusive to one organization and one standards org. We've got international standards that are aligning and we've got new patterns of workloads, right? It's not just the artifacts but actually the use cases, and everything old is new again. Like sands through the hourglass, we store the days of our lives. So one of the things that I loved is that I've been talking about sustainability for about as long as I've remembered how to spell sustainability.

Speaker 1:

I think about six years ago it became a real hot topic. It was dripping off the tongues of every CIO, and, you know, every week somebody was publishing all these new things about seeing what the potential was for sustainability. The problem was we didn't have an understanding of what it really meant, and we didn't necessarily have the workloads that really required it. The initiative was there, but we hadn't figured out how to truly implement it and see the benefits of it.

Speaker 1:

But it's a long game, and that long game also is surviving a lot of workload changes, so even the traditional SAN-style workloads. You throw new workloads at a SAN and it fundamentally changes the life cycle of it, the wear patterns of it. And so, you know, John Michael, if you want to talk about, I know you've done a lot around what companies are trying to achieve around, you know, net-zero goals and looking at sustainability as a practice. But, you know, I think we're actually seeing it happen. We're actually seeing real results. This is no longer a pie-in-the-sky "it would be nice if we could get there." We are actively seeing things, and we have to, because that little thing called AI is probably ripping up drives as we speak on a regular basis.

Speaker 4:

Yeah, I mean, the one thing is that a lot of these companies put their 2030 net-zero goals in place before the AI CapEx spend really ramped up. Now these hyperscalers are spending over $30 billion a quarter and basically telling the world that, hey, if these models keep scaling, we're going to 10x that, and obviously this directly impacts sustainability. But thankfully, a lot of the technologies for sustainable data centers actually help get better power efficiency and power density in these AI data centers. Actually, two of Rochelle's colleagues who are part of the OCP sustainability project, Sammy, I guess Mohan was there, and Eric Dahlin, they wrote this paper from OCP on power usage effectiveness and some of the new trends in IT around liquid cooling, cold plate and immersion cooling.

Speaker 4:

At first these technologies were, oh great, they're so good for sustainability because they reduce the energy. But now it's like, oh, we have to do it because you can't cool the AI servers without them. Oh, but by the way, now you reduce the fan power and the heat and all the IT load and make the actual data center rack more efficient. So it kind of comes full circle, and this whole thing, they're all connected.

Speaker 2:

I mean, is it not ironic that all of the top supercomputers in the world have been liquid-cooled forever? They kind of knew what they were doing from the start, and they optimized for it.

Speaker 1:

Yeah, and there's an interesting thing. I guess it's almost even a bit of a Jevons paradox, you know, this idea that as we make the thing more efficient, we just end up using more of it. It's the way that, like you said, we just set these goals for net zero, for, you know, 2030 goals, and then, somewhere between the time of setting the goals and 2030, the entire face of computing changed.

Speaker 1:

I mean, the fact that Sora was launched very recently on OpenAI, as an example, and for three days, of course, all sorts of exciting errors were happening across the whole world of OpenAI's infrastructure, completely changing their consumption and usage patterns. So even a company that knows what's ahead in the very short term probably can't even prepare for what's about to happen as we see it in practice. So, you know, what is that impact on the cost of operations and the cost of a life cycle of a piece of hardware? What is the real cost, and what are the actual metrics that drive that TCO over time?

Speaker 1:

Maybe, JM, this is probably your backyard, so I'm going to pin you down to try and throw some numbers around.

Speaker 4:

Yeah, so this is a shameless plug. We have a SNIA TCO model for storage that's very good. By the way, when I originally created this at Intel, we were thinking about, okay, how do we replace all the hard drives in the world with SSDs in some of these workloads where you need a certain amount of IOPS and you can change your replication and compression factor? These are actually really big TCO drivers. So, on the first part, if you're looking at just storing data, this TCO metric of, like, TCO dollars per terabyte effective, which is after all the replication, compression and all this technology, is very important, per rack, per month. This is kind of how the hyperscalers think about this, but there's all sorts of other TCO models. You have to look at your different KPIs and look at the different types of workloads, what you're optimizing for. But for actually storing data, that's fairly well understood, and now it's just about how you optimize the infrastructure to get there, you know, with all the certain constraints.

Speaker 4:

You know, whenever I share that TCO model, somebody says, oh, this thing's not right, how did you come up with this price? I said, it's a model. You have to put your own inputs in. It doesn't just do everything automatically for you. You actually have to have some assumptions, and there is no one-size-fits-all TCO model. Every single customer needs to go through what matters for them. But it's a really good starting point. But there's all kinds of other things now, obviously, like AI. They're trying to move from this dollar-per-gigabyte being one of the only metrics to, okay, maybe in some of these AI workloads, IOPS per terabyte and IOPS per dollar is now a really important thing, you know, for how you create a new storage technology that might fill the gap between Flash and DRAM or something like that.
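As a rough illustration of the arithmetic behind a "dollars per terabyte effective" metric, here's a minimal sketch. This is not the actual SNIA TCO model, which has many more inputs; the function name, parameters, and sample values are assumptions chosen for illustration:

```python
def effective_cost_per_tb(raw_cost_per_tb, compression_ratio,
                          replication_factor, utilization=1.0):
    """Effective $/TB after data reduction and redundancy.

    raw_cost_per_tb:    acquisition cost per raw terabyte
    compression_ratio:  e.g. 2.0 for 2:1 compression/dedupe
    replication_factor: copies kept for durability (e.g. 3)
    utilization:        fraction of raw capacity actually usable
    """
    usable_tb_per_raw_tb = compression_ratio * utilization / replication_factor
    return raw_cost_per_tb / usable_tb_per_raw_tb

# e.g. $100/TB raw, 2:1 compression, 3x replication, 80% utilization
print(round(effective_cost_per_tb(100.0, 2.0, 3, 0.8), 2))  # 187.5
```

A real model layers on power, rack space, endurance, service life, and the other per-rack, per-month costs the speakers mention, which is exactly why "you have to put your own inputs in."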

Speaker 2:

And that kind of goes back to the original concept that you really have to build the infrastructure around deciding what the important metrics are that you're going to be measuring. There's dollars per gigabyte, there's IOPS per watt, I mean, that's a weird one. There's a lot of different metrics that you want to take a look at, and the original thought with the SSD guys was, let's destroy the hard drive business, let's get everything onto SSDs.

Speaker 2:

But I say that with a grain of salt, because the tape industry is still hanging on strong. In fact, a lot of cloud providers are buying more and more tape every year, because it makes a good tier level, and the sustainability for tape is insane, because once you write to a tape, it can sit on a shelf for 10 years unpowered, which I really wouldn't trust NVMe or SSD to do, or even hard drives, for that matter. But the point is that all three of them, tape, hard drives, and SSDs, have their own niches. Now the SSD niche is very large right now. It's eating up a lot of the data center. But you really have to go for what you're measuring.

Speaker 3:

And, you know, not to go completely off on a tangent, but I'm going to do it anyway: we have DNA data storage, which is a potential future disruptor that would take this whole model in a complete other direction. Right, you mentioned tape and archive. That would take the model into yet another complete dimension as that technology moves forward. So there's a lot of different dimensions you could take in terms of what's valuable and, as John Michael mentioned, this is a model.

Speaker 1:

What attributes are important to you. Rochelle, you know, since I've got you on mic already, let's talk about what that means as far as the sort of accessibility, programmability, and where Swordfish comes into play when we look at new form factors, new types of storage, new patterns of consumption. Where do you see the Swordfish wins happening today because of, you know, the early work that was done?

Speaker 3:

So, yeah, there's actually quite a few Swordfish implementations out there. We don't actually have a lot that are working with our conformance test program at this point, but we do know there are a lot of actual Swordfish implementations out there. We've had multiple implementations over the time. We're developing a new adopters page for SNIA in general, and you can look for that coming in the next few months, where people will start to register their overall implementations. We are actually starting some work, and we expect to see in 2025 some initial implementations with DNA data storage. We have some early implementers there where we actually expect to see some initial modeling of manageability for DNA data storage. We have some expansion coming with file storage with some file system implementers. We are talking with folks in the object store space as well. So there's a lot of additional expanding on where we have our base of block storage and file.

Speaker 3:

We are also expanding where, as I mentioned earlier, the Redfish folks work extensively with OCP. We have some initial base profiles, which is what folks like OCP basically call what I generically call recipes. The work that OCP does is largely defining how to use other people's standards, so think of that in terms of saying, here's the recipe for how we want to use the standard, and we define those in terms of profiles. We expect to expand on that in the storage space, working with OCP, working with other standards orgs as well, or other consortiums, to say, here are the specific recipes. We've been doing that quite a bit with the NVM Express consortium, and moving forward we expect to define more and more profiles in conjunction with other groups, to kind of clarify the expected and targeted usage for Redfish and Swordfish in the storage space.

Speaker 1:

And, you know, this is the very important piece where we talked about standards as a common abstraction, because, at the lowest level, we start to measure the actual cost of a life cycle of a piece of hardware, from manufacturing through to, you know, destruction, and it's not just operation. You mentioned this before too, JM, the idea of, like, what's the cost of creating these? I mean, the penny is the classic that we always look back to. It costs more than a penny to make a penny, and that's why many countries have decided to abandon the practice, just because it didn't financially or environmentally make sense to produce it anymore. And so, when you come to measuring around sustainability and life cycle and cost of energy to drive something, how are you seeing these measurements made? How do we find them? Who chooses the right measurement to say, yes, this is the actual cost from, you know, birth to completion for this particular device?

Speaker 4:

Yeah, so there is a lot of standardization in something called the life cycle assessment, and it's not perfect, but, you know, it's supposed to be done under specific conditions, and this is one of the key places where OCP has been working with the iMasons Climate Accord on trying to standardize some of these categories of, okay, if you're an SSD or a hard drive, this is kind of how you should report it. But at the end of the day, you know, the memory manufacturers know how much energy is going into the fabs. They have to run these multi-billion-dollar fabs.

Speaker 4:

They can do a life cycle assessment of the materials and the F-gases and stuff going into the air. And one of the metrics we're trying to get to is this carbon per gigabyte, or, you know, embodied carbon per terabyte. Well, the numbers that we've come up with right now for flash are astronomically high. When I talk to traditional sustainability folks, they're like, wow, we really have to fix this. And that's where, as I mentioned, the whole focus of this circularity comes in. It makes sense from a logical standpoint that if you're going to use a thing for 10 years instead of five years, the per-year embodied carbon of that device goes down if you amortize it over a longer period of time. The equivalent would be like, I drove my car for five years and then we just threw it in the shredder because it was done. No, nobody does that, right? They sell it, because there's economic value. You know, it still has use left in it. So we're trying to do the same with SSDs. The big unlock this year has finally been that the hyperscalers are really engaged. Rochelle is probably well aware of all the things that are going on in OCP with security.
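That amortization argument is easy to make concrete. A minimal sketch, where the function name and the numbers are invented for illustration, not real LCA figures:

```python
def annual_footprint_kgco2e(embodied_kgco2e, use_kgco2e_per_year, service_years):
    """Total lifecycle emissions per year of service: embodied carbon
    amortized over the service life, plus yearly use-phase emissions."""
    return embodied_kgco2e / service_years + use_kgco2e_per_year

# Hypothetical drive: 300 kgCO2e embodied, 20 kgCO2e/year while powered on.
print(annual_footprint_kgco2e(300, 20, 5))   # 80.0 per year over a 5-year life
print(annual_footprint_kgco2e(300, 20, 10))  # 50.0 per year over a 10-year life
```

Doubling the service life doesn't halve the total footprint, but it substantially cuts the per-year number when embodied carbon dominates, which for flash, as the speaker notes, it currently does.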

Speaker 4:

The big two things this year are Caliptra, this hardware root of trust that can enable things like attestation.

Speaker 4:

This is, like, open source IP that can go into new devices like SSD and hard drive controllers to basically establish this cryptographic authentication and attestation.

Speaker 4:

But the really exciting new capability that they put in there this year is this thing called LOCK, this Layered Open Source Cryptographic Key Management Block. This is an open source IP block for key management, backed by Google, Microsoft and three of the other SSD vendors, to basically ensure that this IP is really good around key management, so that they can trust it when they do a cryptographic erase. Or if somebody, in the case of Opal, where we have a self-encrypting drive, walks into a data center, puts the drive in their pocket and walks out, they want to know that that data cannot be decrypted, even by a nation-state actor with hundreds of millions of dollars. And the only way they can know for a fact that it can't be decrypted is to know how the keys are managed on that device, with an open source key management block. So these are just two examples of how the hyperscalers are taking this very seriously and, you know, devoting new technology and IP into this space.

Speaker 1:

And I think this takes the abstraction up one step further, and maybe, Chris, I'll tap you: there's local optimization, there's sort of server level, rack level, data center level, and then energy-sustainable data center level. So there are all these different places this is happening. What are we seeing around actual data-center-level optimizations happening?

Speaker 2:

Well, see, that's the neat news: if you know how to talk Redfish, you know how to talk Swordfish. It's the same protocol. It's an extension that gives you all the neat things that enterprise storage gives you. However, what that means is I can gather metrics from all of my servers, all my storage, soon to be switches. I should be able to gather that all from the same API calls, the same kind of mentality. But, more importantly, I can do it cross-vendor. So I can talk to an HP, I can talk to a Dell, I can talk to a Cisco, I can talk to a Lenovo, a Supermicro, it doesn't matter. I can make the same call and pull the same metrics. So, just like we were talking about embodied costs with carbon, that kind of thing, we can also look at things like wattage and fan speed and incoming air and outgoing air. We can look at all that stuff cross-vendor with the same code base. And that's really good for customers, because customers don't want to rewrite the code to an SDK for eight different vendors if they have eight different vendors. The average customer out there uses two different storage vendors or two different server vendors, so they want a common code base. And in fact, we also talked about metrics: how do we know what metrics to record? That's a neat thing about Redfish and Swordfish as well. All of the different partners that help write Swordfish, which make up the industry, we all agree to it. No one company comes in and says, I want this metric. There's an OEM section for that. Redfish has got an OEM section, Swordfish has got an OEM section. They can put a custom counter in there. But we encourage, if you've got a counter that's going to be good for the industry, that you bring it forth and let your competitors see the same counter and decide, okay, we want to subscribe to the same thing. Once you have multiple people doing the same thing, it becomes a standard counter.
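As a sketch of what that common code base looks like, here's a minimal example of pulling a power reading out of a Redfish-style Power resource. The payload below is a hand-made stand-in for what a client would GET from a BMC (for example at a path like /redfish/v1/Chassis/1/Power); the field names follow the Redfish Power schema, but the values are invented:

```python
# Stand-in for the JSON a Redfish service would return; in real use
# you'd fetch it over HTTPS using the service's authentication.
sample_power = {
    "PowerControl": [
        {"Name": "Server Power Control", "PowerConsumedWatts": 344}
    ]
}

def consumed_watts(power_resource):
    """Sum PowerConsumedWatts across all PowerControl entries.

    Because the schema is standardized, the same function works
    unchanged against any vendor's Redfish service."""
    return sum(entry.get("PowerConsumedWatts", 0)
               for entry in power_resource.get("PowerControl", []))

print(consumed_watts(sample_power))  # 344
```

The cross-vendor point in the paragraph above is exactly this: one parser, one metric name, every vendor.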

Speaker 2:

A good example of this: I work for Hewlett Packard Enterprise, and we purchased a company called Cray, Cray Supercomputers. You've probably heard of them. We have a mandate across our company: every server we ship is Redfish compatible, period, end of story. We brought Cray in, and Cray was not. So we said, well, we've got to put Redfish on you guys, and they said, okay, that's great, but we looked through the protocol and it doesn't support water cooling. So we said, no problem, let's get water cooling into the standard. We brought it forward and had Dell, Supermicro, IBM, and Lenovo check the work to make sure it was acceptable. Now water cooling is in the standard, so it doesn't take much to expand the standard to cover everything in the industry. It is a cooperative effort, and it's definitely customer-focused and customer-friendly. One code base to rule them all, as it were.

Speaker 1:

Yeah, it sure beats the old way. Deprecating and moving forward in an efficient, optimal way is so much better for people operations than what we had before. I probably still have a drawer filled with nine-pin-to-Ethernet Cisco cables, just because you never know when you're going to need to grab that old switch. And imagine, I had the same thing for my storage gear and my fiber switches: there was always some proprietary thing on every single device. Let's get away from that. It's so fantastic.

Speaker 2:

How many times has one of us old-timers gone into a lab with a Fluke analyzer to try to figure out how much power a server is pulling, and then we still don't really know what it's going to do in production? Or we buy a specialized PDU with all the power-draw information so we can monitor that one thing. Well, every server sold in the market today, by every vendor, has a sea of sensors that give us a wealth of valuable information. If you want to know where a hotspot is in the data center, that's easy to find. If a certain model of a certain vendor's hard drive got recalled because there was a bad batch, you can write one command that queries everything in your data center, across vendors, and finds everywhere that drive shows up, so you know what you've got to replace. There are a lot of cases where this standardized approach really pays off for the customer.
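The recalled-drive case can be sketched the same way: because every vendor reports drives through the same vendor-neutral `Drive` resource fields, one filter finds the bad batch everywhere. The inventory below is hand-built sample data, and the server names, model numbers, and serials are all illustrative assumptions.

```python
# Sketch: find every instance of a recalled drive model across a
# mixed-vendor fleet. The inventory is illustrative sample data
# flattened from per-server drive listings.

def find_recalled(drives, recalled_model):
    """Return (server, bay, serial) for each drive matching the
    recalled model, regardless of which vendor's box holds it."""
    return [(d["Server"], d["Bay"], d["SerialNumber"])
            for d in drives if d["Model"] == recalled_model]

inventory = [
    {"Server": "hpe-01",  "Bay": 3, "Model": "XYZ-900", "SerialNumber": "A1"},
    {"Server": "dell-07", "Bay": 1, "Model": "ABC-400", "SerialNumber": "B2"},
    {"Server": "smc-12",  "Bay": 5, "Model": "XYZ-900", "SerialNumber": "C3"},
]

hits = find_recalled(inventory, "XYZ-900")
```

The per-drive records would come from walking each system's storage collection over the same standard API; the filtering logic never changes per vendor.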

Speaker 1:

And it goes up to the software, even to the point of placement. A couple of years ago I saw some of what OCP was talking about, and Facebook does this with their data centers: they actually move workloads according to heat maps. When they see a hot pocket building up, they can literally redeploy workloads and redistribute them, because physical placement can affect the efficiency of your entire data center if a workload lands in a spot that's already carrying more than it should. It's incredible. And the same instrumentation that tells you the temperature also tells you where to place things, because you can choose. That's the beauty of standardization. Yep, exactly.

Speaker 2:

And when you're talking about cooling, cooling is a sledgehammer: you turn it up or down for 50 racks at a time. If you can make sure there are no hot spots across an entire set of racks, say 150 or 200 racks, you can run the whole set hotter than you normally would. Generally people run an entire set of racks colder because they have to make sure the hottest server in the hottest rack doesn't go over the mandated number. But if you can prevent that server from existing, or prevent it from ever getting that hot, you can run everything hotter, run less air conditioning, run everything closer to the edge, and that's a massive power savings, because every watt of CPU or compute I/O you put in also puts a lot of heat in. That's half the equation right there.

Speaker 4:

I'll certainly echo the excitement for Redfish being standardized. One of the things we're looking at in the sustainability space, as I mentioned, is this classification of IT and non-IT power. The fans are not contributing to the useful work, so being able to easily enumerate that in Redfish, to say this is the fan power and this is the rest of the system power, is extremely helpful.

Speaker 4:

We work on SSDs, so having the ability, within that server, to say, okay, this is how much power is going to all these different areas, and then optimize, really matters. One of the things we talked about earlier this year at FMS was the really basic step of using NVMe power states to take a drive from a 20 or 25 watt TDP down to a 14 or 16 watt TDP: you simply declare that the drive will not consume more than that power. That tiny change can have a massive ripple effect on fan speed and overall TCO, and without that type of Redfish instrumentation it's very hard to show what the power change in the drive actually did. I can do some basic math: 20 watts down to 16 watts should save four watts per drive. It turns out to be a lot more than that, because you get the fan power savings too. Knowing that kind of thing is what instrumenting with Redfish gives you.
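To make the "more than four watts" point concrete, here is a back-of-the-envelope sketch. The 4 W direct delta comes from the 20 W to 16 W cap discussed above; the drive count and the fan-side savings factor are assumptions for illustration only, exactly the kind of numbers Redfish telemetry would let you replace with measured ones.

```python
# Back-of-the-envelope sketch of the ripple effect described above.
# The direct delta comes from the transcript (20 W -> 16 W cap).
# DRIVES_PER_SERVER and FAN_SAVINGS_FACTOR are assumed illustrative
# figures, not measurements; real values come from Redfish telemetry.

DRIVES_PER_SERVER = 24
DIRECT_DELTA_W = 20 - 16        # per-drive power-cap reduction, in watts
FAN_SAVINGS_FACTOR = 0.5        # assumed: extra watts of fan power saved
                                # per watt of drive heat removed

direct_savings_w = DRIVES_PER_SERVER * DIRECT_DELTA_W
total_savings_w = direct_savings_w * (1 + FAN_SAVINGS_FACTOR)
```

The point of the sketch is only the shape of the math: the fan term rides on top of the drive term, so the measured savings exceed the naive per-drive arithmetic.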

Speaker 2:

And if you can tie the effect in with the value you're saving, maybe it makes sense. But if lowering the power has a bad effect and you can't show the value of the savings you're getting, you may never want to do it.

Speaker 3:

Yeah, and this is a great example of an area where we've got a lot of the base instrumentation in from the power side, but we have not yet put in the controls. This is an area where we can go back and say: when and where should we put in the controls for those NVMe power states? We did the basics in the Swordfish model, and this is where we differentiate between Redfish and Swordfish. Redfish is where we put the base power controls for the chassis.

Speaker 3:

Swordfish is where we'll put that additional instrumentation for the NVMe SSDs, and where we'll look at both the controls for those NVMe power states and the security around who gets to control them, because you don't want to expose those too broadly. So that's an area where we'll look at partnering again: putting those profiles, those recipes, in place for where we expose them and who should have control over them, and working with folks like Jonmichael on circularity and with NVMe, getting those alliance partners in to say who has the expertise and where we should put partnerships in place so we can accelerate. Maybe we look at that in 2025; we're two weeks away from 2025 as we record this. As we go into 2025, can we make this a focus, get that functionality and those controls into the Swordfish specification, and get that base enablement in so we can see implementations in the next year or two?

Speaker 1:

And I think you hit one of the top things. I wish we could do this all day long; I only wish we had another hour to commit people to, because this is so fantastic. But it goes far beyond just the specifics of optimization. Even when we talk about sustainability, it runs from manufacturing to placement to life cycle to everything, and security often gets mentioned too.

Speaker 1:

People always ask, why did we have to create this thing called DevSecOps? Security is implied, right? Well, no, the presence of it was assumed, not implied. But in this community, what you've got now is the same practitioners crossing the working groups, so you can talk about this and then you've got security.

Speaker 1:

Security people can come in, see what's happening, participate in the conversation, and understand the impact as they look at protecting the APIs and all the endpoints. This is the spot where people have a common goal in their own individual sections, but really it's about how we optimize the community, and the community of people we meet. So maybe I'll just ask each of you really quickly: what's the one accidental thing you found yourself working on? It may be crossing working groups, or just meeting somebody in the community where you realized, oh wow, this just saved me a ton of time, or this was a discovery that helped you in your primary area, because you were beside somebody you wouldn't otherwise have met had they not been participating in this community.

Speaker 1:

Maybe, Chris, I'll start with you.

Speaker 2:

It's funny, because at every SDC I go to, I end up meeting somebody who does something drastically different and very interesting. At the most recent SDC I got to spend some time with the DNA storage guys and got the real deep skinny on that stuff. The time before that it was another topic, and the time before that, another one. I find that, unfortunately, there are too many technologies to get into, and I'm interested in all of them. But I wouldn't have access to any of that unless I had gone to an event where this cross-collaboration happens, and that's really the magic of it: meeting people you get to cross-collaborate with. I'm looking forward to the automotive task force I know they're working on right now, figuring out whether they can start working with the people who do the networks for cars. I'm starting to research that; it looks like really fun stuff. But there's so much to choose from. There really is.

Speaker 3:

Yeah, he's right. There's something that pops up all the time. I'll come back from a show and go, yeah, this one was a random one; we dug into this.

Speaker 1:

It's like a new journey every time you go there. I love it.

Speaker 3:

A couple of years ago it was all, hey, we're doing storage in space.

Speaker 1:

Yeah, yeah.

Speaker 3:

Then there was the global mapping story: they were putting storage in planes flying at high altitudes, doing GPS mapping of the earth in really, really granular detail. The storage aspects of it were demanding; I'm not describing it very well, but it was just incredibly fascinating. It was all sub-zero temperatures, and everything had to be immediately purged on landing, all kinds of different requirements.

Speaker 3:

Yeah, really, really high availability, and it was incredibly fascinating. And I forget almost all of the details, because by the time we get through it's, oh my gosh, this is fascinating, and then I completely forget about it and go on to the next topic.

Speaker 1:

So we just solved the New Jersey drone problem right now. Is that what this is?

Speaker 2:

And sometimes you get inspiration from the weirdest places, like DreamWorks when they came in.

Speaker 3:

They're usually fascinating while we're working on them, and then I completely forget all about it and go on to the next one.

Speaker 2:

Yeah, which is why you should definitely take trip notes when you go to these events. Pull out that paper and write notes through the event, because you will have forgotten day one by day three. Eventually the videos are posted online, but you want to have your notes alongside those videos as well. And if you share them with people at your own company, not only will they be more likely to send you next year, but you may find that other people in your company find the information very useful for their jobs.

Speaker 3:

I can't remember the details of all of them, because there have just been so many.

Speaker 1:

Yeah.

Speaker 3:

Yeah.

Speaker 1:

It is truly a wealth of humans and a wealth of knowledge, and such a sharing culture. There is no corporate wall when we're there. We make a point of kind of setting aside our affiliations, but we're also proud to say where we're from, and no one questions it. We are the human artifacts that can move from place to place and achieve these amazing things. And now we're seeing it actually play out; we're seeing real results with some of the sustainability work. It may have been a marketing buzzword six years ago, but when the buzz went away, the work didn't, and we kept going and kept achieving innovation. So maybe just as a last close, JM, what's got you excited, and what in your SNIA connections has been a wonder you probably didn't expect?

Speaker 4:

Well, it's funny, I presented at SNIA SDC this year on sustainability. That was my idea: let's try to get people who aren't usually exposed to the sustainability world to start caring about it more. Slowly, people are; we're getting them to learn about circularity and whatnot. And of course, I get to go attend these presentations on storage and AI, which is what I'm spending most of my day job learning about right now, and on manageability and other things that help me with side projects for GPUs and AI workloads and all kinds of other stuff I'm into. So, yeah, it's just what we said. I brought one of our junior sales guys to SDC this year, just to stop by and say hi, and he was like, wow, are there usually this many senior people from all these companies? Every single person I meet is a director, a senior director, some kind of fellow. I was like, yes, welcome to SNIA.

Speaker 1:

It's a beautiful thing. And I can say I've been lucky; my greatest gift is meeting all of you and being able to share time and learn on a constant basis. This is the very reason why I love it: we talked about storage in space, we talked about sustainability, and I literally get to sit in on all these conversations. I'm in the space myself as well; not outer space, but the space in the industry, I guess. It really is enriching for me, because as I go on to advise companies and startups and everybody, I tell them: if you want to meet people who are driving change and who are your future collaborators, this is the group to be in.

Speaker 1:

So there are lots of virtual events and lots of in-person events. We've had a really great change with the membership structure for 2025, and there's a podcast episode where we share some of the updates; I talked with J Metz about that. So I definitely recommend people go check out the podcast and the SNIA YouTube channel. There is no shortage of fantastic content. But most importantly, get signed up and come to an event. There are virtual events, there are in-person events, and thank goodness we're back in person. It's so great to be in the room, because the hallway track is one of my favorite parts of every conference. There are a lot of hallway conversations.

Speaker 2:

That is so true.

Speaker 1:

And I can tell you that I've literally met future clients, friends, and employers through those hallways. So thank you all for letting me into the hallway, and thank you to the folks who are listening and watching. Keep your eyes out: 2025 is going to bring a lot more great content like this. So with that, Chris, Rochelle, JM, thank you so much. And again, folks, don't forget to sign up for the SNIA Experts on Data podcast. Hit subscribe, smash that like button, do all those things the Zoomers tell us we're supposed to do. But most importantly, get to a SNIA event, find out how this community can help you and what you can contribute back. Thank you all for watching and listening.

Speaker 4:

Thanks all, Bye-bye.