SNIA Experts on Data

Computing in Space: What’s Happening on the International Space Station

Dr. Mark Fernandez (HPE), Cameron Brett (SNIA) Episode 9


Can commercial off-the-shelf technology survive the rigors of space? Join us as Cameron Brett from the SNIA SCSI Trade Association Forum and Dr. Mark Fernandez of HPE's Spaceborne Computer Project tackle this provocative question. Learn how they've successfully adapted enterprise SAS Solid State Drives (SSDs) and NVMe technologies to work on the International Space Station.  

You’ll learn about the real-world impacts of computing in space, including accelerating DNA analysis from months to minutes and enhancing astronaut health monitoring and safety protocols, ensuring mission success and crew well-being.

Looking to the future, hear how storage advancements are setting the stage for even greater achievements in space exploration. 

About SNIA:

SNIA is an industry organization that develops global standards and delivers vendor-neutral education on technologies related to data. In these interviews, SNIA experts on data cover a wide range of topics on both established and emerging technologies.

Speaker 1

Welcome to the SNIA Experts on Data podcast.

Speaker 2

Each episode highlights key technologies related to handling and optimizing data. I'm with GTM Delta, and I'm extremely excited to be the host for today's podcast, and to welcome incredible folks whose content I've been lucky enough to consume over the years while watching the project we're going to talk about. Because, as a fan of things down here on old terra firma, but even more of a fan of the stuff that's able to get off and get out, we're going to talk about SAS in space. And with that, I'd like to welcome Dr. Mark Fernandez and Cam Brett. Now, Cam and Mark, if you don't mind, let's do a quick introduction for folks who are brand new to you, and then we'll talk about the whole project that's led us to this discussion and what's coming, and also how SNIA played an interesting part in how a lot of this work developed in collaboration with it.

Speaker 3

Okay, I'll kick things off. My name is Cameron Brett. I'm with the SCSI Trade Association Forum, which is a part of SNIA. I'm also a marketing director with Kioxia, and we make SSDs and flash memory. Just a quick touch on SAS storage technology: it's been around for more than 30 years, continually improving performance, functionality and reliability, with the latest generation being 24-gig SAS today and our family of SAS products. We've been working with HPE as an SSD provider for the Hewlett Packard Spaceborne 2 program.

Speaker 4

And with that I'll hand it over to Mark. Well, hello, this is Mark Fernandez. I'm the principal investigator on HPE's Spaceborne Computer Project. This is our third iteration. It is on the International Space Station right now, traveling at about 17,500 miles an hour. So I joke that we have the fastest computer in existence, although there are many much larger computers here on Earth, et cetera. And our partners at Kioxia have been with us through the entire journey. They have helped us with the transition to SAS and NVMe, helped us with the failure analysis, and kept us moving forward. So I'm glad to be here. Thank you for having me.

Speaker 2

This is fantastic. The project itself is really interesting, and I think we've got a lot of stuff we want to cover. But even in this half hour, I'd love to hear how the whole plan began, what the goals were, and what led to what has now reached its third iteration, as you mentioned, Mark. Let's talk about the history of the Spaceborne Computer and what got us here today.

Speaker 4

Sure. So I kind of mentioned big computers here on Earth. My company supplies NASA with some of their largest computers, and the visionaries at NASA came to us one day and said: when we get to the Moon, and further on, when we get to Mars, we're not going to be able to do our mission here on Earth, because of the distance, the communication lag, the low bandwidth, et cetera. So would you mind taking one of the supercomputer nodes we have here on Earth today, modern at the time, and doing three things? One, see if you can find some rocket company that will allow you to put it on their rocket and, if so, whether it will survive the shake, rattle and roll of launch. Number two, if you got this CPU-with-memory-and-storage computer on board, could some non-IT-trained folks, which we call astronauts, install it and get it up and functioning? And number three, most importantly, could this unmodified commercial off-the-shelf CPU, memory and storage bundle, called a standard server, function in space and, if so, for how long? So that was our mission with Spaceborne 1.

Speaker 2

Now, the one thing that really stood out to me is this idea of going back to the roots of commercial off-the-shelf, whereas often we tend to go down proprietary roads because of very special technology and the very special environment in which it's going to work. And what was neat, in all the years I've been involved with SNIA and watching the work that goes on, is that more and more stuff has actually gone completely the other way. It sort of turned people's ideas on their heads: wait a minute, collaborating in public and using shared ideas, shared standards and ultimately commercially available technology allowed us to actually innovate a lot faster. And because other technologies are catching up as well, some of the limitations we may have seen on the software side are now being relieved by hardware innovation. And the reverse as well: we're seeing advancements in software, databases and data management itself, where we can do things at speed and at scale much differently, especially at the edge. And there really is not much more of an edge than 254 miles off of terra firma here down on Earth. So based on that, Mark, what was the decision to choose an open method by which we could get this technology to work and get it out there into the community?

Speaker 4

Yeah, absolutely. As you mentioned, technology, both hardware and software, storage, et cetera, is accelerating in its development, and the so-called radiation-hardened hardware and components are now decades old and very, very expensive. So the idea of taking modern artificial intelligence software, our modern GPUs, our modern solid-state disks, is just foreign to that concept of 20-year-old hardened technology. Hence those visionaries at NASA said: let's just see if you can do it, Mark, and come up with what you can. And we came up with this concept called hardening with software. We put in as much commercially supported hardware redundancy as is available, and then we monitor the systems very, very carefully from the software side to make sure that they continue to give us a correct answer. At this point I want to bring up that I'm not doing anything having to do with the rocket or with the astronauts. I'm not doing life support or propulsion or navigation or any of those critical subsystems, which are already rock solid and known to work fine. I'm there for the scientists and engineers, to advance space exploration, and to do that you need modern CPUs, modern GPUs, modern solid-state disks. That's what we're going up there with, plus a layer of the modern software that mostly everybody leaving college now is able to use, and they're totally unfamiliar with the 20-year-old software technology they would have to use in a hardened environment.
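The "hardening with software" approach Mark describes, commodity redundancy plus careful software monitoring for a correct answer, can be sketched roughly as follows. This is a minimal illustration, not HPE's actual implementation; the replica count, the `vote` helper and the majority scheme are all assumptions.

```python
# Illustrative sketch of "hardening with software": run the same computation
# redundantly and accept the majority answer, so a single radiation-induced
# upset in one run is detected and out-voted rather than silently accepted.
from collections import Counter
from typing import Any, Callable

def vote(results: list) -> Any:
    """Majority vote across redundant runs; a tie or no majority raises."""
    counts = Counter(results)
    answer, n = counts.most_common(1)[0]
    if n <= len(results) // 2:
        raise RuntimeError(f"no majority among redundant results: {results}")
    return answer

def hardened_run(compute: Callable[[], Any], replicas: int = 3) -> Any:
    """Run the computation `replicas` times and return the majority answer."""
    results = [compute() for _ in range(replicas)]
    return vote(results)
```

On real flight hardware the "redundancy" is commercially supported hardware (mirrored drives, ECC, spare nodes) rather than re-running in a loop, but the monitoring-for-a-correct-answer idea is the same.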

Speaker 2

It certainly makes quite a difference, in both of the form factors. We'll come back in a second to talk about the two form factors in which this is getting delivered and operated, but let's go down to the drive side and talk about SAS. What were some of the choices, Cam, that you and the team made to ensure this would meet the needs of the payload and the workload, and also just physically be manageable in the environment, which is much different from your typical data center? Though I can say I've been in a few cold data centers in my time, that parka is a lot different from the one you need to wear in orbit.

Speaker 3

Yeah, the drives that we selected. We worked very closely with Dr. Fernandez and the team over at HPE, and we came to the conclusion that in one of the platforms SAS SSDs would work. We used a combination of value SAS SSDs, which are single-port SAS drives that are more cost-effective and lower power with OK performance, as well as your traditional enterprise SAS SSDs, which are higher performance and a little bit higher power. So it's a combination of those two drives, and I won't steal Mark's thunder about how much we ended up sending up into space. But SAS technology, a couple of gigabytes.

Speaker 2

We could say that, yeah.

Speaker 3

Yeah, but we chose SAS for that system because it's very commonly used, it's known for being very reliable and well-performing and it fit the power profile. And in the other system we needed to use NVMe SSDs and we used the M.2 form factor for those.

Speaker 2

Now on that, that's the interesting thing. Resilience obviously stood out and, Mark, I definitely saw your eyes light up on that one. This is particularly an environment where resilience is so important, and even in Earth-bound projects here we've got a big drive towards sustainability, not just in the sense of reducing power and emissions, but in whether we can extend the life of existing hardware with software and other innovations. So this one, as a project (sorry, I'm Canadian, so I pronounce "project" a little differently; it'll really twist people up), really looks to me like a truly first-principles approach from both sides. Right, you were given a base set of requirements, and you had every opportunity to say, well, let's go back to the core and rethink, based on the workload and the environment, what the real requirements are, where obviously resilience is absolutely fundamental. And from there, performance, and then managing data transfer and managing workloads on this hardware. So every layer of the physical and logical software stack is being touched in a new way.

Speaker 4

Yeah, absolutely. I was super excited to work with the folks at Kioxia, and when Cam came to me and said, you know, we've got this breadth of SSDs to take a look at, and we've got these that are pretty reliable and low power, I said yes, that's what I really need: low power and reliability. And he said, OK, but we also might need X, Mark. And I said, OK, we'll adopt that as well. We'll take that on as part of our experiment. So, as Cam explained, it's not uniform across the entire Spaceborne Computer 3. We've got three types up there, and we can compare and contrast speed, capacity, power, et cetera, and evaluate what works best in different environments. So I'm super excited about that partnership and how we got that diversity of SSDs up there. Now, this may be a surprise to some of your listeners, but again, I'm ultra-conservative, and so in general I have four drives: a primary, a mirror and two cold spares, which is sort of unheard of here on Earth. But when I need reliability, and I'm not going to believe all the marketing that may be coming at me, I go with that one-plus-one and two spares, and I've been pretty happy with the performance.
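Mark's ultra-conservative drive layout, a primary, a mirror and two cold spares, can be sketched as a tiny failover model. This is purely illustrative; the class and promotion logic are assumptions, and on a real system this is handled by the RAID or HBA layer.

```python
# Minimal sketch (not flight code) of a 1+1 mirror with two cold spares:
# when an active member fails, a cold spare is promoted into the mirror.
class MirroredArray:
    def __init__(self, primary: str, mirror: str, spares: list[str]):
        self.active = [primary, mirror]   # the 1 + 1 mirror pair
        self.spares = list(spares)        # cold spares, unused until needed

    def fail(self, drive: str) -> str:
        """Drop a failed active drive and promote a cold spare in its place."""
        if drive not in self.active:
            raise ValueError(f"{drive} is not an active member")
        self.active.remove(drive)
        if not self.spares:
            raise RuntimeError("no cold spares left; array degraded")
        replacement = self.spares.pop(0)
        self.active.append(replacement)
        return replacement
```

With two cold spares, the array can survive two drive failures over the mission before it would run in a degraded state, which is the point of the one-plus-one-plus-two layout.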

Speaker 2

It's such an interesting thing that we often get caught up with this idea of how many nines you can deliver, when in the end it's really about the application and the data that are on there, and what we're really looking for is availability. As a result, every dependency underneath, to deliver that, has to have that level of resiliency capability or higher. And this is a great thing that I'm sure, Cam, you've seen plenty of in your time in the industry as well, and I've seen on the data center side. You know, on the consumer side and the business side, we really tried to aim for high availability and high resiliency, but in the end it's always a tradeoff with cost, and particularly with a lot of enterprise technologies it's tricky, because we're not necessarily innovating as fast in some areas, so adoption is slower. So when it came to this, you have a very distinct project and a very unique type of workload, and you used value SAS as a method. You talk about trying to go ultimately for the lowest cost, but with performance and resilience. So what are some of the engineering decisions, Cam, that came in when you had to make that selection, other than, of course, power and some of the minimal things?

Speaker 3

Well, I mean, it was pretty straightforward. We have a pretty broad range of products across endurance, capacity and security, so it really just came down to some of the core requirements, power being one of the primary ones, and making sure the performance was good enough in the right form factor. I think for the value SAS SSDs the capacity wasn't as critical as it was for the enterprise SAS, but we definitely wanted to keep in mind that balance and tradeoff you had talked about between all the different aspects. So it wasn't too difficult a decision. We've worked with HPE for many years, so plenty of our drives are qualified on their platforms, and we just kind of pulled from their menu.

Speaker 2

Now the question I would have next, based on this: SNIA is an area where we do a lot of this innovation. We see fantastic folks like yourselves sharing what you're doing and being active participants in the SNIA community. So we see innovation already happening across organizations and across vendors, some of whom we would all believe are fighting in the field, but who all sure enjoy sitting down together and really diving into what's possible, and then looking at a collective way in which we can get there faster together, which is what I really adore about SNIA as a technology community. And this is such a unique project, so people might think, oh well, this is only going to help this project. But, Mark and Cam, based on this, what have you seen come back from what's been learned with the Spaceborne Computer? And are we going to see some of these innovations coming back into other data center technologies and other ways that we build and deploy systems in this style, in environments that may be not quite as harsh?

Advanced Technology in Spaceborne Computing

Speaker 4

So I'll go first. And again, I'm not laser-focused on solid-state disks and the technology; I'm standing back, looking at the whole solution. In Spaceborne 1, we sent up 20 solid-state drives, they were state-of-the-art, and nine of them failed. So this isn't good. We brought them back down to Earth and we got our partners at Kioxia to do the failure analysis on them. It turns out that there was a component on board called a supercap that was becoming a known failure point here on Earth as well, and it was being phased out of the industry, and so that explained our failures. So I'll hand it over to Cam to finish the story of how and why he selected the replacement drives for Spaceborne 2 and 2.5, to give us more reliability and avoid that failing component. Cam?

Speaker 3

Yeah, so those first SSDs were SATA technology, which is, you know, based off of hard drive technology. I guess technically SAS and SCSI are based off of hard drive technologies as well, but SATA is an older technology. It's not innovating as far as performance and reliability, and we saw that we probably didn't want to go with SATA. And, as Mark noted, the supercaps have basically been phased out by the industry as a whole because of their failure modes in some cases, and so tantalum caps and newer versions of that type of capacitor for power-loss protection have come into play, and they're being used across all enterprise-class SSDs. We certainly wanted to make sure we have as high a reliability as possible. Enterprise SAS and enterprise NVMe are probably the pinnacle of the nines of reliability. Value SAS is also much, much higher reliability than SATA, with more than double the performance and, I guess, better functionality within a SAS ecosystem connected to RAID or SAS HBAs.

Speaker 4

Let me add something to that; it's a late-breaking data point. We recently had an Aurora Borealis event here in North America, and it crept down farther south than it was predicted to. I'm actually in South Carolina and we saw it down here, and that is a high-radiation event. Well, I can share that on Spaceborne Computer 2 we had four correctable errors on the CPU caches. The good news is they were correctable: the aurora affected them, and we were able to detect it and make those corrections. And finally, we had no detectable errors on our SSDs. So again, that modern technology is helping us out, and we're proving it as the beta tester on the International Space Station for further use elsewhere, in other harsh environments here on terra firma.

Speaker 2

So I think that brings up the great question people always have: what exactly are you doing that required all this to come together? Some of the experiments that I've heard and read about are just incredible, especially when we look at the scale and speed they're going at compared to what we've been used to in the past. I love this idea of months to minutes. So, if you don't mind, Mark, let's talk about some of the actual work that's been done and where innovation is really playing out. The work that you and the team have done is awesome, but it's what we do with it that's really innovative.

Speaker 4

Sure. A couple of the exciting examples that most of the public can relate to are DNA analysis and astronaut safety. So first off, in DNA analysis: there is an actual DNA sequencer device on the International Space Station. You put a physical sample in it, and it grinds away and spits out those four characters in a gigantic ASCII file, and those are what were taking months to get down to Earth.

Speaker 4

OK, so we talked to the scientist and said, you know, what are you going to do with these when you get them down to Earth? And he ran a modern piece of software that the international industry recognizes for DNA sequence processing, and it's a multi-step process. So first you've got to be able to store these large DNA sequences; everybody knows they're really, really big. You have to store those, and then the intermediate files effectively get stored and processed and pre-processed until you get to the final answer. And the final answer was about 20,000 times smaller than the DNA sequence. Hence you go from months to minutes, running the exact same modern software on the exact same modern hardware and storage. So that's one that's very exciting. The scientist actually said that his proposed project was to monitor the health of astronauts, and he would be able to do one astronaut and one sample a month, whereas when he goes from months to minutes, with Spaceborne Computer and our onboard storage capability and our onboard GPU processing capability, he can do the entire crew every day. He could then propose back to NASA to monitor astronaut health on a daily basis and ask questions like, do Asian males get affected before European females, and all these other exciting things. So the light bulb went off, and we just exponentially expanded the capabilities there on the space station.
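The months-to-minutes argument comes down to arithmetic: the processed answer is roughly 20,000 times smaller than the raw sequence, so downlinking the result instead of the raw data is dramatically cheaper. Here is a back-of-the-envelope sketch, where the raw file size and link bandwidth are made-up assumptions; only the 20,000x factor comes from the episode.

```python
# Back-of-the-envelope: downlink time for raw data vs the processed answer.
def downlink_hours(size_gb: float, bandwidth_mbps: float) -> float:
    """Hours to downlink size_gb gigabytes at bandwidth_mbps megabits/second."""
    size_megabits = size_gb * 8 * 1000   # GB -> megabits (decimal units)
    return size_megabits / bandwidth_mbps / 3600

raw_gb = 500.0                 # assumed raw sequencer output, for illustration
result_gb = raw_gb / 20_000    # ~20,000x smaller final answer (from the episode)
link = 2.0                     # assumed sustained downlink, megabits/second

print(f"raw:    {downlink_hours(raw_gb, link):.1f} hours")
print(f"result: {downlink_hours(result_gb, link) * 3600:.1f} seconds")
```

Whatever the real numbers are, the ratio is fixed by the 20,000x reduction, which is why computing at the edge and sending only the answer turns a queueing problem into a near-real-time one.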

Speaker 2

It sure is a lot easier than finding identical twins who happen to be astronauts, so you can send one up and keep one down. It's nice that you can monitor, and you don't have to find perfect pairings everywhere in the world.

Speaker 4

Correct, yes. The second one is again with astronauts, and safety, not necessarily health but safety. When astronauts go on a spacewalk, officially called an EVA, or extravehicular activity, they're out there now as long as they can be: 8 hours, 10 hours, 12 hours. And they're using their hands, and they've got gloves on, and just like if you were working on your car all day or working in the garden all day, your gloves get dirty, wear, tear, et cetera. When those astronauts return inside, they have to take a Nikon D9 camera, high resolution, and take multiple photographs of those gloves at different angles, different lighting, et cetera, and those photographs are sent down to Earth to be analyzed for nicks and cuts and tears. Well, we took that artificial-intelligence-based software on Earth, which was trained on every single photograph ever taken of an EVA glove, and we put that inference engine on the International Space Station. So then, as the camera was spitting out these pictures, we got a copy of them on Spaceborne, stored them onto our SSDs, and then began to process them and send down the results. Generally, NASA states that it takes about five days to get those results back down to Earth, which means an astronaut can only go to work once every five days; it's like he works on the weekend and takes the week off, right? Well, we did five days of work in 45 seconds on the International Space Station. Again, you've got to store those massive images, run the artificial intelligence with the CPU and the GPU, store the intermediate files, annotate those, generate new images as JPEGs, which is something everyone can handle, compress those JPEGs and send them back down to Earth. And so, again, five days to 45 seconds, because of our CPUs, our GPUs and our SSDs.
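The glove-inspection workflow Mark walks through (store the images, run inference, annotate, compress, downlink the small results) can be sketched as a pipeline skeleton. The stage names and the toy byte-matching "detector" are assumptions for illustration; the real system runs a trained vision model over high-resolution photographs.

```python
# Illustrative skeleton of the onboard glove-inspection pipeline:
# store -> detect -> annotate/compress -> queue small results for downlink.
from dataclasses import dataclass, field

@dataclass
class GloveImage:
    name: str
    data: bytes                      # raw high-resolution capture
    findings: list = field(default_factory=list)

def detect_damage(img: GloveImage) -> GloveImage:
    """Stand-in for the trained inference engine: flag 'damage' markers."""
    if b"tear" in img.data:
        img.findings.append("possible tear")
    return img

def annotate_and_compress(img: GloveImage) -> bytes:
    """Produce a small downlink-ready summary instead of the raw image."""
    summary = f"{img.name}: {', '.join(img.findings) or 'no damage found'}"
    return summary.encode()

def process_batch(images: list[GloveImage]) -> list[bytes]:
    """Run every captured image through the pipeline onboard."""
    return [annotate_and_compress(detect_damage(i)) for i in images]
```

The design point is the same as in the DNA case: only the annotated, compressed results go down the link, which is how five days of round-trip analysis collapses into seconds of onboard processing.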

Speaker 2

So it really sounds like this is a workhorse drive in what we've been able to do. It's got a pretty diverse set of workload requirements; we talk about AI analysis and just general storage, there's lots of stuff. And usually when we get HPC and traditional enterprise, they're fundamentally different design patterns, and then operational patterns. So, Cam, based on the current generation of SAS that you and the team are shipping, it sounds like it's come pretty far in what it can do, compared to where we used to have much more purpose-built gear in the past.

Speaker 3

Well, the SAS technology we're using is 24-gig SAS, which is the latest. I think the value SAS is rated at 12 gigabits per second, which is the previous generation, because that is sufficient, but enterprise SAS is what is actually being used. One thing I do want to note is that, with Mark's help, we have a script that is monitoring every single drive every day. We can track and monitor the health through daily diagnostic checks, then analyze them for any anomalies or issues, or whether we need to swap out a drive and invoke one of the cold spares. So we're continually analyzing the drives on a daily basis, and SAS technology has a lot of those capabilities built into the protocol. So that's another reason why we like to use enterprise SAS.
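The daily drive-health sweep Cam mentions could look roughly like this. The counter names and thresholds are assumptions for the sketch; a real script would read these values from the drives' SAS log pages (for example with smartctl or sg_logs) rather than from an in-memory dict.

```python
# Minimal sketch of a daily drive-health sweep: compare each drive's
# diagnostic counters against thresholds; any alert means a human (or a
# policy) should consider promoting a cold spare.
THRESHOLDS = {
    "grown_defects": 10,        # reallocated-sector style counter
    "uncorrected_errors": 0,    # any uncorrected read/write error is a flag
    "percent_used": 90,         # flash endurance consumed
}

def check_drive(name: str, stats: dict) -> list[str]:
    """Return a list of anomaly strings for one drive's daily diagnostics."""
    alerts = []
    for field, limit in THRESHOLDS.items():
        if stats.get(field, 0) > limit:
            alerts.append(f"{name}: {field}={stats[field]} exceeds {limit}")
    return alerts

def daily_sweep(fleet: dict) -> list[str]:
    """Run the check over every drive in the fleet."""
    alerts = []
    for name, stats in fleet.items():
        alerts.extend(check_drive(name, stats))
    return alerts
```

The SAS protocol's built-in log pages are what make this kind of lightweight daily sweep possible without vendor-specific tooling.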

Speaker 2

And again, another shout-out to the importance of what we're seeing with collaboration, with some of the stuff that's happening with Swordfish and Redfish and being able to manage and monitor this hardware, as we see open protocols and open standards. This makes it much easier for us to collaborate, so the capability you're using to monitor could ultimately also be brought out into the open as a collaboration, especially when this is an opportunity where we all benefit. All humans of Earth will definitely be happy with what we're seeing, and that innovation, like I said, is coming back to other OEM partners now, with what you and the team are doing, Cam, so these advantages can be leveraged by other folks. And we'll likely see, you know, V4, and what's next for Spaceborne probably has a pretty solid leap above where we are today. So, based on that, Mark, what's next? As much as you're allowed to share, of course, because I imagine a lot of this stuff happens in quiet rooms and is developed for safety before we talk about it.

Speaker 4

So Spaceborne 2.5, as we call it, is still up there. I did an experiment last week, and I'm going to do one this week. It's still fully functional and operational, and we're going to continue its mission out to, hopefully, its three-year lifespan. So, pretty excited about that. We're just now designing, quote-unquote, Spaceborne 3, and I can tell you that our plans are that it will be twice as capable in terms of storage capability and GPU capability, in half the footprint. That's one big aspect. We plan to use less total electrical power, even though we're twice as capable. And, as you've just heard and learned, I'm pretty conservative: I will have twice the cold spares I have now, in that smaller footprint. And I'm just riding the technology wave. You know, I don't know that I'm being innovative; I'm exploiting the innovation wave of smaller, faster, cheaper, and I'm bringing along our SSD partners. We've already, under NDA, briefed them on what our requirements will be, and they've offered us some pretty exciting next-gen SSDs that we plan to take up there and test. I won't fully commit to those next-gen ones. As you can tell from previous statements, I'm a little conservative, so I will have on board some of those that I have already shown to be reliable, as well as some of the new technologies that give me a lot of advantages.

Speaker 2

This is an interesting thing, the speed at which we test new things. Obviously it needs to be a little more tried and tested, but I think that, with the type of workloads and experimentation happening in data centers today, and with hyperscalers really doing a lot, we're testing hardware in ways we couldn't at scale before. Everybody used to talk about at-scale or edge locations, and what they really meant was just a smaller server next to a bigger server, and at scale meant nine servers, or maybe 900 servers. I mean, I ran 2,000 servers in an environment because there were economies of scale I could make use of. And the neat thing here is that this is literally the harshest possible condition you could have, where you genuinely have constraints that are physics-bound; there's no amount of things you can do to speed up data transfer. That core requirement really makes you change the way you think about how you operate the compute. And now, with local LLMs and AI capabilities and GPUs, I feel like I'm just patting you both on the back. I just want to hug you both and your whole teams, because there's nothing that's not great about what's going on here, and about seeing your talks that have happened at SNIA as well and being able to dive into it. I think I'll definitely recommend, and we'll have some notes in the show notes as well, for folks who want to take a look at other presentations that have already happened. I guess on that side, Cam, then Mark: what's next in what you see ahead for the coming year with SNIA and what you and the team are doing? We've got SDC and lots of other events coming up. What's exciting you about 2024?

Speaker 3

Well, as every year, there are a number of industry events, and we're definitely eager to participate. As a part of SNIA and the SCSI Trade Association Forum, we participate very actively in these and try to promote the technology, not just SAS technology, but all data storage technology in general. The SCSI Trade Association has been a part of SNIA for just over a year now, and we couldn't be happier with the amount of collaboration and the companies we have access to and can work more with. The SCSI Trade Association has been around for at least 25 years, and it's been an area of collaboration between all kinds of companies that develop the ecosystem for a certain technology; in this case, it's SAS and SCSI. So for events such as Flash Memory Summit, SDC, OCP and other acronyms, we're definitely looking forward to participating in as many as we can.

Speaker 2

Awesome.

Speaker 4

So you brought up something I want to come back to, and it was illuminated in a NASA presentation last summer. The presenter was talking about DNA analysis, and he said, we have changed the paradigm. And that's what I'm doing in the future: I'm changing a lot of paradigms. Previously it was, well, I don't know if the computers are going to work. We proved that in Spaceborne 1. I don't know if people can use that computer. OK, we're proving that in Spaceborne 2. I'm not sure you have enough storage for my DNA sequences, my videos, my photos of my gloves, et cetera. Well, I do now, and since Cam hasn't shared it: we have four 30-terabyte SSDs on board. So we're changing the paradigm. I can compute faster than you can download, and I need to be able to store it before I can compute on it, and now I can do that because of the modern SSDs.

Speaker 2

And that paradigm is changing. Moore's Law: there was some thought over the past year or so that we'd gotten past the Moore's Law limitation, but I think it's just not meant to be measured in that way. It's meant to be measured in how and what we do with these fantastic technologies. And the use cases we're seeing play out are really where the software layers are now coming much faster than they could have in the past, because of hardware limitations and some of the discrepancies you would find in, again, complex conditions like edge computing, but even more so, obviously, with the severe constraints being applied in this particular environment.

Speaker 4

Yeah, Moore's Law is focused on one small, tiny part: how fast can you do this amount of compute? Whereas modern applications are multilayer and multidisciplinary, and this whole workflow is what you need to take a look at. Hence, months to minutes blows away Moore's Law, because we took all those steps along that workflow path and we optimized them: edge to cloud, storage to compute, network, et cetera. So that's where the acceleration is going to come from, from that change, that paradigm. The limitations you have now at the edge are primarily networking. I can address the CPU, the GPU and the storage with modern SSDs and, like I said, I can compute faster than you can download, and I can go from months to minutes, from five days to 45 seconds. Look at your entire workflow, optimize compute and storage along that workflow, and change paradigms.

Speaker 2

It's nice that compute is being used for much more than just generating AI images from DALL·E and the like. So bless those fine folks for all they're doing, but I like that there's a little more we're going to see coming out of the exponential experimentation we can do now because of the innovation that's going on. Well, with that, I could literally sit here all day and dive in deep. There are great resources we'll share.

Speaker 2

You've talked in the past, Mark, so we've got a couple of podcasts we'll probably link out to as well, along with some of the talks that you and Cam have participated in and some of the other industry talks we've had with SNIA. With that, I'll let you both take it out. And again, thank you both, and both of your companies, for the collaboration with the broader SNIA community and the STA community. So, in closing, what else do you have to add? And is there anything you'd like people to know if they want to reach out, get connected, and find out more about the project, Mark?

Speaker 4

Yes, so we're open for business, and between our collaborations I'm doing experiments every week. I would love to set off your light bulb, change the paradigm you're working in, and show you that the ISS is your beta tester for the R&D you're doing here on Earth to advance humanity as we continue to explore the universe. Thank you, Cam.

Speaker 3

Yeah, on the SAS side, we're still innovating and developing new features for the current generation of 24 gig SAS. The SCSI Trade Association Forum is going to continue to promote the technology and work with other companies and groups within SNIA to build unique capabilities. And we have an interesting project with a university that maybe we'll get to do a podcast on later. We're doing some exciting stuff, and I look forward to hopefully telling you all about it.

Cross-Org Collaborations and Innovation

Speaker 2

And here I am, just proud that I got a Raspberry Pi to display a logo on a microphone flag. I felt like I was winning at life as an engineer, and it pales in comparison to what you both are doing. Thank you both for really taking these ideas forward and sharing the stories; I'm looking forward to hopefully hosting that very podcast. With that, Cam, Mark, thank you very much. Thank you to both your teams, and thank you to all the fine folks who are listeners and viewers of the SNIA Experts on Data podcast.

Speaker 2

If you want to find out more, of course, head on over. We've got lots of previous episodes and lots more coming in 2024. We've got SDC coming up toward the tail end of the year, so there'll be lots of opportunities to collaborate in person. We're seeing more of the regional events and more cross-org collaborations, like we talked about with the SCSI Trade Association, and we're seeing work with Ultra Ethernet and with CXL: so many areas of innovation and different standards bodies coming together. It's a blessed thing, and I'm glad we've got you two at the helm doing fantastic stuff. Thank you both for taking the time to join us today.

Speaker 1

Thank you for listening. For additional information on the material presented in this podcast, be sure to check out our educational library at snia.org/library.