DVL Power Hour Podcast

Rewriting the Blueprint: A Pre-Construction Manager’s View on the Evolution of Liquid Cooling in Data Centers

DVL Group


As liquid cooling becomes an operational necessity, it is the mechanical contractors and preconstruction teams who are tasked with turning blueprints into reality. We'll be joined by Jason Gott, Preconstruction Manager at Murphy Company, as he reflects on the shift he's seeing in real-world data center builds. We'll also discuss rising heat densities, customer expectations, and the growing complexity of liquid cooling deployments.

Additionally, we'll touch on:

• What’s top of mind for customers as HPC projects get underway
• The practical challenges of designing and building for next-gen cooling technologies
• How contractors and design teams can adapt to the unknowns of new infrastructure requirements

Whether you're in design, operations, or just tracking the infrastructure evolution, this session offers a hands-on perspective you won’t want to miss.

This is an edited version of the DVL Power Hour Webinar, which was originally held on August 21, 2025.

SPEAKER_00

Thank you for downloading DVL's Beyond the Product Podcast. This podcast was originally released as a live webinar and has since been edited to fit this format. To view the original webinar and reference the PowerPoint slides, please visit DVLnet.com/resources/webinars. If you have a question or want to submit feedback, please email us at marketing@dvlnet.com. Thank you. We hope you enjoy the podcast.

SPEAKER_01

I would like to welcome you to this episode of the DVL Power Hour. My name is Robert Leek. I'm the director of marketing here at DVL, and I'm thrilled to have you with us today, talking about another exciting data center topic in this world of critical infrastructure. Before we get too far into this topic of rewriting the blueprint, a pre-con manager's view on the evolution of liquid cooling in data centers, that's almost as much of a mouthful as some of the equations it takes to put these types of solutions into these facilities, but we'll get there in a little bit. Just wanted to welcome you. If you're not deeply familiar with DVL Group, I'd like to share that we just celebrated our 40th anniversary this year, and we were thrilled to be doing that. We grew up in the data center world starting in 1985 in the Northeast, in Philadelphia, where we're headquartered, really focused on data center cooling. But as that focus continued to evolve, we became experts in critical infrastructure of many types, including power, thermal management, controls, distribution, and even management and monitoring. We've got relationships with all different types of manufacturers and software providers that can deliver not just operational efficiency, but the peace of mind you need to sleep at night. We have engineering expertise and can provide turnkey solutions: you focus on the part of the business you're most familiar with, and let us handle the facilitation of that new data center. Additionally, we have expanded beyond our headquarters location over the years. We're now pretty much throughout the Rocky Mountain region, and we've actually just grown into Idaho this summer as well. In each of those locations you will also find all of our services: technical teams and certified technicians who provide the emergency and maintenance support you need to keep that equipment's lifespan strong. For now, I would like to welcome Jason Gott, a pre-construction manager with Murphy Company. Hey, man, how are you doing? Welcome.

SPEAKER_02

I'm doing fantastic. How are you?

SPEAKER_01

Jason, I am great. Thank you so much for being here. As we get started, first off, it's good to see you again; we met a few months back at a meeting at Murphy Company. Before we get too deep into this new world of liquid cooling, can you tell us a little bit about Murphy Company and maybe your experience there and in the world of data centers in general?

SPEAKER_02

Yeah, Murphy Company was founded in 1907 in St. Louis, Missouri, and, for lack of a better term, followed Anheuser-Busch out here to Colorado in the early '80s. We're in the neighborhood of 2,000 employees between St. Louis and Colorado. I've been at Murphy for a little over a year and a half now, doing pre-construction, with 20 years of experience in mechanical contracting. I'm a mechanical engineer by training, from CSU in Fort Collins.

SPEAKER_01

Oh, very nice. Maybe it's good to say we're both here in Colorado, the Denver dot there. So, Jason, you've been in construction for decades, and you look great to be able to say that, I've got to say. But how long have you been working in, call it, the data center world?

SPEAKER_02

Oh, 10 plus years at least.

SPEAKER_01

So, as it relates to this whole transition from a pretty much air-cooled environment, which did its thing and delivered great efficiencies for many years, especially here in Colorado, you've had, I would say, a front-row seat as we've been transitioning to this world of high-performance compute. You know, I was just at an event this morning, it's been a heck of a day so far, that was data center focused, and it was really all about this transition to high-performance computing and AI. What we have been seeing and talking about for the last couple of years is right now in our face here in 2025. So let's start there. From your vantage point in pre-construction, how have you seen data center cooling requirements evolve over the last few years specifically?

SPEAKER_02

Most people know what a data center is, right? It's servers sitting inside of a room. Prior to this direct-to-chip or liquid cooling infrastructure, we would do a lot of air-cooled data centers and basically flow a lot of air across the servers to cool them down. With the servers getting more and more powerful, we have to be able to reject that heat a lot faster now, i.e., the direct-to-chip or liquid cooling data centers. That changes the dynamics a lot in our world of mechanical contracting, because we have to look at our man-hours, our capabilities, our capacities, and things like that. In the past, it was in the neighborhood of 50-50 when it came to sheet metal and piping man-hours. Now we're looking at 90% of man-hours on piping and a little bit of sheet metal.

SPEAKER_01

Wow, that's a huge shift. And what did you see on the back end of that? Okay, so from 50-50 we're going to 90-10; how are you all managing that in the day-to-day? Is it something that pops up, or is it a pretty easy shift in resources?

SPEAKER_02

Well, it's a relatively easy shift in resources. You know, we're the largest employer of union pipe fitters in the state of Colorado, so we have a lot of pipe fitters. It's just an interesting statistic, because normally an office TI would be roughly 30-30-30: 30% plumbing, 30% pipe fitting, 30% sheet metal. With this infrastructure, let alone the power requirements and backup generators and chillers, et cetera, from a manpower or resource-loading standpoint, it's really shifted in recent years.

SPEAKER_01

So this next question: what's driving the urgency for liquid cooling? The one answer you cannot give is rising heat densities in the rack. We all know that; it's the one thing you can't say. But from the angles and the people and the collaboration you're having to deal with, where's the urgency coming from for liquid cooling? Is it customer expectations? Is it technical limitations? Or maybe there's another variable out there that I'm just the dumb marketing guy and don't know about. I'm just curious, where's the pressure coming from?

SPEAKER_02

Really, it's the AI infrastructure. I mean, the densities of these servers and buildings. Another interesting statistic, Robert, that I was talking about with one of the general contractor pre-construction managers, is that even though we're seeing higher and higher required BTU rejection loads, the buildings are actually getting physically smaller. That does a lot for construction costs. Obviously, it's a little bit cheaper to build a smaller building, but that just means you have to pack more into a smaller space. With that, it becomes very necessary to interface with designers, and to collaborate with the BIM world, the VDC virtual design and construction world, early on to make sure we have enough room for all of this infrastructure. Piping and plumbing is one thing, and then you start to add the power, the IT infrastructure, and the servers themselves. So trying to engage with owners early on, and frankly being awarded jobs early, allows us to start talking about the different constraints in these buildings. We're putting multiple air-cooled chillers or cooling towers on these buildings, plus backup generators, and the sound requirements associated with all of those in regard to the neighborhoods or open spaces they're near. That's really a driving factor when it comes to understanding the footprint, the blueprint, and how exactly we're going to build out a conceptual idea for a data center.

SPEAKER_01

We got a question in here from Jake; I'm going to paraphrase a little bit. I think the question is based on you mentioning space, and smaller spaces. Is that because owners and operators know they can get more kW into a smaller space, or is it a lack of area to build on? Jake is curious: is it the kW in the smaller space, or is it a lack of space that's creating more of that demand?

SPEAKER_02

It's the kW. These servers are getting more and more powerful per rack, and air is not a very good heat exchange mechanism relative to water or refrigerants. So it's multiple things. The physical space available, at least in the Rocky Mountain region, is relatively expansive. There are some areas near Denver Airport or up in Wyoming that have a lot of physical space to build million-square-foot buildings. But two things: one, the owner doesn't necessarily want to build a million-square-foot building if they can get away with a 500,000-square-foot building. And two, you don't necessarily need that much space anymore, because you're able to transfer a lot more energy in the liquid than you were able to with the air side.
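To put rough numbers on why liquid lets the footprint shrink, a minimal back-of-the-envelope sketch follows, using textbook property values (not figures from the conversation) to compare how much heat a given volume of air versus water can carry per degree of temperature rise.

```python
# Rough comparison of volumetric heat capacity: why liquid moves far more
# energy per unit volume than air (textbook property values, approximate).

air_density = 1.2        # kg/m^3 at roughly room temperature
air_cp = 1.005           # kJ/(kg*K)
water_density = 998.0    # kg/m^3
water_cp = 4.186         # kJ/(kg*K)

air_vol_heat = air_density * air_cp        # ~1.2 kJ per m^3 per K
water_vol_heat = water_density * water_cp  # ~4,180 kJ per m^3 per K

print(f"Air:   {air_vol_heat:,.1f} kJ/(m^3*K)")
print(f"Water: {water_vol_heat:,.1f} kJ/(m^3*K)")
print(f"Water carries roughly {water_vol_heat / air_vol_heat:,.0f}x more heat per unit volume")
```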

SPEAKER_01

Let's say we're sitting down and starting to talk about these projects with your customers. When you first sit down at the beginning of an HPC data center project, what is at the top of their minds? What do they want to discuss with you?

SPEAKER_02

They want to know how we as a team are going to collaborate around their needs, track the costs, and understand what is required to get there. In other words, cost is obviously a big driving factor for a lot of these things. Now, there are a lot of people out there that can do data center builds, actually physically build them, and a lot of people out there that understand data centers, but it's a matter of figuring out what team meshes well together. A lot of times we're looking at the infrastructure on a data center, whether it's a 90-megawatt data center or all the way down to a two-and-a-half-megawatt data center, and the owner knows it's going to be XYZ megawatts, call it two and a half. What they typically don't know early on is what servers are actually going to physically go in there. So as a designer, as a pre-construction manager, we have to identify what constraints that can drive: whether that means we do, say, 80% liquid and 20% air side via rear-door heat exchangers or a rooftop unit or variations thereof, or whether it means we have to physically plan for specific servers, that kind of thing. A lot of times these owners don't know, or they don't have a tenant yet, as we're going into pricing these. It's a very interesting and, I would say, constantly shifting constraint that we're dealing with on a day-to-day basis.

SPEAKER_01

And speaking of what your customers, your owners, know and don't know, right? This is a completely new application, going from air to liquid. With the infrastructure that supports it, sometimes there's some overlap, and sometimes it's a lot of completely new pieces and parts. I'm just curious, are your customers generally prepared for the scale of infrastructure changes that liquid cooling requires? Or is it a surprise once these projects get underway and they start seeing the new things coming to the forefront?

SPEAKER_02

Sure, and that's a good question. It really depends on the customer. We've looked at data centers ranging from what you'd call a lab data center, where it's just that, a research lab, all the way up to these hyperscale data centers. The hyperscalers, for the most part, know kind of what to expect. They have some metrics, whether from a competitor's data center that was built in XYZ place or from their own. Whereas in the recent past, call it two to five years ago, there were a lot more people building air-side data centers, because that's what they needed. The transition to direct-to-chip or liquid cooling data centers is really giving us as an industry a new problem to tackle, and we're having to come up with very creative solutions for our owners and customers in response.

SPEAKER_01

Okay, and I may be jumping ahead to another question, but I've got to ask. No specifics, of course, but give me the most creative solution that you've been able to come up with and execute on for one of these problems.

SPEAKER_02

Sure. We had a customer with an interesting data center where they were dealing with multiple end users, and by that I mean the actual servers. When it comes to the technical water side of things, we couldn't share technical water between server manufacturers X through Z. So the most creative solution we had to come up with, which was actually a very difficult path to go down, though once we got there it became apparent we needed to do it this way, was this: we were originally going to go in with XDUs, whether it was a Vertiv 450 XDU or a variation thereof, and do technical water from there. Instead, we had to do in-rack or in-row CDUs, because we couldn't share the technical water. We're still actually navigating that pathway, but being able to ebb and flow on that very rapidly was one of the reasons our team got hired on that particular project, because at the end of the day, or at the beginning of the day, I should say, the owner didn't really know. Either they didn't have enough information from their customers, or they didn't have their programs situated quite yet. So we were able to go from 80% liquid and 20% air side to utilizing a rooftop unit at one level and some open- or closed-cell cooling towers. We went through probably eight to ten design iterations on that particular project, including pricing, and the general contractor having to do different things for pipe racks and the like. Ultimately we landed on something the owner was very happy with, and they were excited to basically create a showpiece for what's to come for them.

SPEAKER_01

That's awesome. Let's talk design and planning. From a design standpoint, what are some of the biggest unknowns that come with liquid cooling deployments?

SPEAKER_02

Probably sound transmission, honestly. There are a lot of chillers associated with any one of these data centers, whether it's a small one or a large one. A lot of times we find that either the owner hasn't had somebody reach out to them because they haven't submitted for permit, or they didn't realize there was such a huge impact on an airport or a neighborhood nearby. So sound transmission, whether it's from the backup generators or the chillers or just the construction noise, has been a very big impact, which means sound attenuation on chillers and things like that. From an actual in-the-data-center design standpoint, it's really the storage capacities for the turndown on the servers, Robert. With these servers, if you lose power, you need to be able to maintain cooling to them. So there's been a lot of discussion, whether internally or externally, in Colorado or nationwide, about the best way to handle that. Is it a 45,000-gallon storage tank for 15 chillers, or a point-of-use storage tank per chiller? Or do we have enough capacity in our piping to handle that turndown required for the servers before the UPSs or backup generators kick in?
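That ride-through sizing question boils down to a simple energy balance. The sketch below is illustrative only; the heat load, ride-through time, and allowable loop temperature rise are assumptions, not figures from any project Jason describes.

```python
# Minimal thermal ride-through sizing sketch (illustrative numbers only):
# how much buffered chilled water is needed to absorb the IT heat load
# until generators pick up, given an allowable loop temperature rise.

WATER_CP = 4.186       # kJ/(kg*K), specific heat of water
WATER_DENSITY = 998.0  # kg/m^3
M3_TO_GALLONS = 264.17

def ride_through_volume_gal(heat_load_kw: float, ride_through_s: float,
                            allowable_delta_t_k: float) -> float:
    """Gallons of stored water required to soak up heat_load_kw for
    ride_through_s seconds while the loop warms by allowable_delta_t_k."""
    energy_kj = heat_load_kw * ride_through_s            # kW * s = kJ
    mass_kg = energy_kj / (WATER_CP * allowable_delta_t_k)
    volume_m3 = mass_kg / WATER_DENSITY
    return volume_m3 * M3_TO_GALLONS

# Example: 5 MW of heat, 2 minutes to generator pickup, 5 K allowable rise.
print(f"{ride_through_volume_gal(5000, 120, 5):,.0f} gallons")  # ~7,600 gal
```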

SPEAKER_01

Interesting on both of those. I wasn't sure what the biggest unknown was going to be, but I appreciate those perspectives, mentioning sound specifically. I know you've been around a lot of data centers, and I assume you've been in some of these completed projects where the AI is going strong. One of the most surprising things to me is that even after everything is commissioned, the lights are on, and the zeros and ones are flying around, AI is so much louder in a data center. Data centers are already loud; there's a lot of processing inside. I was really surprised at the exponential, call it, increase in decibels, or however we're measuring it, but you could tell when you were beside an AI cage versus a standard cage. It was that noticeable. And I believe the OSHA world is trying to look at that as well, in a lot of different ways, to standardize things around it. But speaking of standards: you mentioned we've been doing air cooling for decades, and liquid has just now been bubbling up. How do you approach planning for technologies that don't yet have those decades of standard practices behind them?

SPEAKER_02

Very interesting question, and actually, I did have an owner ask that exact question in an interview recently. With what we don't know on a specific project, the best way to orient yourself, whether you're the designer, pre-construction, the planner, the owner's rep, or the actual owner, is to lay out what those big-ticket items could potentially be. And when I say big ticket, there are obviously dollars associated with that. To get there, you really need to understand exactly what the owner's expectations are, number one. But secondly, you need to be able to ebb and flow with the cost expectations very rapidly. One of the biggest challenges we run into is large-bore pipe, and actually being able to physically get valves, with some of the tariffs that have been happening lately. Valve prices have been unstable, let's just say, especially in the 14-inch-and-larger piping sizes. That said, we have to have those conversations with the end users early on and say, look, we can do buyout packages and lock in some of this, but we need to understand what infrastructure you actually need. A lot of times there's a design shift, not a large one, but as soon as that owner is able to lock in a lessee or a tenant for their space, they can really understand what constraints they need to design to on the technical water side. On the infrastructure side, however, it often doesn't shift as much as you would expect. So the most poignant thing in that regard is being able to coordinate very rapidly with the general contractor, the electrical and mechanical engineers, and the electrical contractors, because some of these decisions can drive million-dollar swings here and there very quickly.

SPEAKER_01

Unsurprisingly, in this world you get the experts in the room and collaborate together to make sure, hey, we're doing the best for the project, for the owner, and so on. The topic of standards keeps coming up; I know ASHRAE is doing a lot of work trying to develop standards for liquid cooling, but it's also new. And I know you mentioned valve pricing and some of the sound issues, but I'm curious about your learning moments in these early-stage projects. What's maybe the best learning moment you've had in regard to liquid cooling?

SPEAKER_02

That's an interesting question. For me, it was going into a pricing exercise on a data center, and we were talking about this exact thing with our design partner. One of the most interesting things I found is what diversification factor you need to take with these servers. When we talk megawatts on a data center, we're talking about the IT power, not the actual mechanical load or electrical load. Those diversification factors can rapidly either increase or decrease the cost. What I mean by that, Robert, is, call it a one-megawatt data center for ease. Usually, a general rule of thumb is that about 80% of that is going to be BTUs, or heat load. So to design a budget around something where we just say, okay, there's a one-megawatt block right here, we have to actually back into that. In the past it used to be about 70%, give or take, that was actually BTU load, because the server density wasn't nearly as high. But I've heard some people say it's in the neighborhood of 90% that actually goes to BTUs, merely because with direct-to-chip and some of the immersion cooling technologies, instead of going to heat rejection, more of that power is going to the actual server mechanisms themselves.
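To make that rule of thumb concrete, here is a minimal sketch that backs a nameplate IT block into a heat load and an equivalent cooling tonnage; the 70/80/90 percent heat fractions are the rough figures Jason cites, and the unit conversions are standard.

```python
# Back a nameplate IT block into a heat-rejection load using a rough
# "fraction of IT power that becomes heat to the mechanical side" factor.
# The 0.7 / 0.8 / 0.9 figures are the rules of thumb mentioned above.

KW_TO_BTU_PER_HR = 3412.14
BTU_PER_HR_PER_TON = 12000.0

def heat_load(it_megawatts: float, heat_fraction: float) -> tuple[float, float]:
    """Return (heat load in kW, cooling load in tons) for an IT block."""
    heat_kw = it_megawatts * 1000.0 * heat_fraction
    tons = heat_kw * KW_TO_BTU_PER_HR / BTU_PER_HR_PER_TON
    return heat_kw, tons

for fraction in (0.7, 0.8, 0.9):
    kw, tons = heat_load(1.0, fraction)
    print(f"1 MW IT block at {fraction:.0%} heat fraction -> {kw:,.0f} kW, ~{tons:,.0f} tons")
```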

SPEAKER_01

A little off topic, but since we're talking about these big facilities and the range of data centers you've been working with, I think you mentioned two and a half megawatts up to 90 megawatts, and I've heard of campuses into the triple-digit megawatts as well. I'm curious about your experience working with the utility. Obviously, power is usually pre-planned and figured out ahead of time, but I also know there are cases where you build and the power becomes available later, specifically at those larger campuses or facilities. Can you talk to us a little bit about the power conversation? Is it a singular power supply? Are they trying to bring in dual feeds, if that's even an option? Maybe even a microgrid? I mean, 90 megawatts is so much. I'm just curious what the power looks like that you're contracting and building around.

SPEAKER_02

Well, I'm not an electrical engineer by any means, but I do know some of these projects, if not all of them. I mentioned a 90-megawatt building; I mean up to a 120-megawatt building, and that's one building on a four-building campus. You multiply that out by four buildings or six buildings or however many it is, and the power is an enormous conversation piece. There's a lot of discussion around point-of-use generation, whether it's a mini nuclear power plant or battery deployments, grid-scale battery deployments, things like that. Now, it's really up to the utilities, whether it's Xcel Energy or whoever it is, to say, okay, we actually have power for this, but a lot of times they don't. A lot of times they have to put in a substation directly dedicated to that specific data center.

SPEAKER_01

Not surprising. You mentioned nuclear, and I've just got to ask: have you been on a project where a reactor has been involved yet? No? Not yet. Okay, we've got to come back to that in the next episode, because I don't want to get too deep into it right now. Anyway, let's talk about some on-the-ground construction realities. How do rising rack power densities, we're talking 30, 50, 100 kW per rack, change the way your teams approach installation?

SPEAKER_02

That's a great question. As you start to get into these higher- and higher-density racks, you mentioned 100 kilowatts plus, and we've heard some of them are going to be in the neighborhood of 600 in the near future. At that level, for one, your actual physical rack size can get slightly larger. But for two, you really need to look at how much energy you're actually able to push through the pipe. And not to mention the air; per my metric before, that 10% or so of sheet metal versus 90% plumbing and piping hours isn't really changing much as the power densities climb. What does change is the size of the actual technical water piece and some of the storage requirements from a turndown standpoint. It's an interesting question, and I'm not quite sure we know the answer yet.
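The "energy through the pipe" point maps to the standard hydronic relationship Q(BTU/hr) ≈ 500 × GPM × ΔT(°F) for water. A minimal sketch follows; the rack sizes echo the ones mentioned above, and the 15 °F loop delta-T is an assumption for illustration.

```python
# How much water flow a rack's heat load implies, using the standard
# hydronic relationship Q(BTU/hr) ~= 500 * GPM * deltaT(F) for water.
# The rack sizes and the 15 F loop delta-T below are illustrative only.

KW_TO_BTU_PER_HR = 3412.14

def required_gpm(rack_kw: float, delta_t_f: float) -> float:
    """Gallons per minute of water needed to carry rack_kw at a delta_t_f rise."""
    btu_per_hr = rack_kw * KW_TO_BTU_PER_HR
    return btu_per_hr / (500.0 * delta_t_f)

for rack_kw in (30, 50, 100, 600):
    print(f"{rack_kw:>4} kW rack at 15 F delta-T -> {required_gpm(rack_kw, 15):,.0f} GPM")
```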

SPEAKER_01

Okay, then next time you're on the DVL Power Hour we'll address that one too. What about some unexpected hurdles that you and your colleagues are encountering during these builds, during the construction process? Is it all about the piping? Is it the seals, I'm sure, when you're connecting these pipes, much less at the point of the actual application where the water is doing its job? Because it's not like the liquid is automatically right beside the rack, and you're also dealing with the heat exchange, getting the energy out. What's going on when you're in the process of bringing this to life in the facility?

SPEAKER_02

One of the biggest challenges we run into is, like I mentioned before, the physical space. But what you don't really think about when you're going in and pricing one of these up, let alone building one, is the amount of piping, ductwork, and electrical infrastructure required. What I mean by that is, whether it's a raised floor or overhead piping, and I have my thoughts about either, before, it was all about air, right? How do you get that air to the server, whether it's through a raised floor, letting it flow through the server with hot-aisle/cold-aisle containment back to a CRAC unit, or does it go the other way, top down, if you will, onto the server? My point is that now the piping and the actual power for these servers is taking up significant space, not necessarily as much as the air side, but it definitely is not thought about as much as it should be. So I'd highly encourage everybody to think about that: if you're having to run direct-to-chip or technical water to these servers, really think about how you're physically going to get your power, your data, and your water to those servers. And then leak detection; there are a lot of conversations around leak detection, Robert. Obviously, water and power in a server don't mix; we don't want that. But how do you manage it? Do you put point-of-use control valves everywhere you have a technical water row going to those servers? There's a lot of conversation around that kind of stuff.
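Since the point-of-use isolation question is about control logic rather than any particular product, here is a heavily simplified, hypothetical sketch of rack-level leak response; the class, function, and rack names are invented for illustration and do not reflect any vendor's actual system.

```python
# Hypothetical sketch of rack-level leak response: if a leak sensor on a
# rack's technical-water drop trips, close that rack's isolation valve and
# raise an alarm while the rest of the row keeps flowing. All names and
# behavior here are illustrative, not any vendor's actual API.

from dataclasses import dataclass

@dataclass
class RackLoop:
    rack_id: str
    leak_detected: bool = False
    valve_open: bool = True

def respond_to_leaks(racks: list[RackLoop]) -> list[str]:
    """Close isolation valves only on racks reporting a leak; return alarms."""
    alarms = []
    for rack in racks:
        if rack.leak_detected and rack.valve_open:
            rack.valve_open = False  # isolate just this rack's supply
            alarms.append(f"LEAK: isolated technical water at rack {rack.rack_id}")
    return alarms

row = [RackLoop("A01"), RackLoop("A02", leak_detected=True), RackLoop("A03")]
for alarm in respond_to_leaks(row):
    print(alarm)
```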

SPEAKER_01

And I know that's kind of in the weeds, but the weeds are where you're going to find the problems, right? Obviously, when you bring the water in, that's why, but leak detection has gotten more conversation time in the last two or three years, I think, than it had in the ten years before that combined as it relates to data centers. Maybe some of these things overlap, but we'd like you to talk a little bit about the differences you're seeing between greenfield and retrofit, which I'm sure can be a pain when you're trying to retrofit a facility that wasn't made for it. Talk to us a little bit about the differences between greenfield and retrofits.

SPEAKER_02

A lot of times we're running into an existing building, call it built in the 1970s, and they want a 10-megawatt data center smack dab in the middle of a TI, with two floors above, two floors below, and 100 feet in either direction. So the question becomes: where do you put the equipment? How do you deal with that? Frankly, piping is easier to run than ductwork to get big air to those places, but a data center didn't exist there in the past, so there's not a chase to get there. We've utilized technologies such as laser scanning, and there are a lot of different ways to physically find routings, but I've had to do that in pre-con and say, okay, we're going to need to offset this pipe 50 feet this way and 50 feet back, or put this air-cooled chiller over here or on the roof or wherever. Those are some of the challenges when it comes to retrofitting, along with the drainage associated with it. You can imagine if it's a building that has a skiff in it, or a building that's occupied below, which we always run into when it comes to plumbing, there's a need for floor drains or some way to drain, whether it's condensate or emergency water or whatever. So what do you do? Do you put in pumps, a condensate pump, or do you have to get into the space below and actually work at night to install that drainage system?

SPEAKER_01

Okay, so I've got to ask: have you ever had to burst the customer's bubble when they were literally almost ready to start everything, and you're like, this facility just cannot handle what you're wanting, you cannot do this?

SPEAKER_02

I have not, personally. However, I have heard of it, and mainly what happens is it's actually the electrical infrastructure. If they're going to put a 10-megawatt data center in, they usually have a relatively large building associated with it, but that data center takes quite a bit of power to actually implement. So a lot of times it doesn't come from us, the mechanical or electrical contractors; it's usually the municipality that's actually driving it not happening. So you don't have to deliver too much bad news then? No, not usually.

SPEAKER_01

Not in that regard. That's good. Okay, we talked about collaboration earlier, and how specifically, I think, the world of liquid cooling is bringing together what have been pretty siloed organizations: IT, design engineers, mechanical contractors, normally just out there doing their own thing. With this, there's a lot more collaboration. Where have you seen the relationships shift as liquid cooling comes into the picture and brings everyone together? What are the conversations like?

SPEAKER_02

Well, it's kind of along the same lines as the design-assist and design-build shift over the last 10 years. For the most part, owners, owners' reps, and general contractors realize they need to bring a team on early because of all these types of questions we're discussing, and because they need to ebb and flow very rapidly when it comes to pricing, deployment, and manpower. But the most interesting thing for me is that a lot of times in the past, we didn't have to interface directly with the end user, the server manufacturer, supplier, or tenant. Now we're finding that we're having to interface with those teams quite often, because they have specific requirements for their XYZ server in this location, that kind of thing. To your point, that wasn't as prevalent as it is now, and it's becoming more and more common that I, the mechanical contractors, the general contractors, or the entire project team have to interface with the end user. Normally it was the owner: they would get a tenant, and that tenant would coordinate through the owner.

SPEAKER_01

So on that note, you've been doing this for a couple of years now, right in the thick of it. What skills or knowledge have you picked up that you'd call essential for someone in your role to succeed in these new environments? Give the audience two or three bullet points: this is new, or this is something I picked up, a skill or insight that surfaced in these projects specifically around liquid cooling.

SPEAKER_02

Well, I would say listen more than you talk, and what I mean by that is educate yourself. There are a lot of resources out there, openai.org, different things like that, with white papers that work through this, and I've taken it upon myself to read a lot of them and understand where we think the world is going. Right now, I think the largest rack I know of is a 400-megawatt, excuse me, a 400 kW rack, whereas one of our engineers on staff has told me he's read about one that is potentially 1,200 kW, one rack. So what constraints does that cause? It's a lot of different things. It compounds the issues: understanding what requirements each of these technical water systems has, filtration requirements, things like that.

SPEAKER_01

So, the future. Here we are in 2025. What we once thought was never going to be a thing, a ten-year conversation about rising heat densities inside these racks, finally came to fruition over the last few years. To your point about where the labs are with R&D and the triple-digit, or maybe even four-digit, kW per rack: are we just at the beginning of the transition to liquid cooling, or do you see it becoming some sort of norm over the next few years? What do you see as far as cooling technologies, liquid, and these continually rising densities? What's the near term between now and, call it, 2030? What are we going to see?

SPEAKER_02

Well, interestingly enough, for the most part, at the moment there are still very large companies and users deploying air-side data centers. I think what that means is they have legacy technology, whether it's a ton of servers sitting around that they still need to use, or they haven't scaled up to the point where they actually need the liquid cooling technology. But these are entities that have hundreds and hundreds of data centers worldwide. So to answer your question, it's definitely going to get more and more prevalent over the next 10 years. It's going to be the next dot-com boom when it comes to construction, specifically mechanical and electrical contracting, because the densities are getting higher and higher. If you look at Nvidia or Intel or any of these chip manufacturers, their chips are getting more and more powerful every day. And then we can go into the quantum realm, right? But from a data center standpoint, I think it's definitely going to get more prevalent.

SPEAKER_01

Just curious, is the equation the same for you in your pre-construction world whether it's a direct-to-chip liquid cooling application or something more like immersion cooling in tanks? Does that really throw you much of a curveball, or is liquid cooling just liquid cooling?

SPEAKER_02

It does throw a curveball, though not from the standpoint you would think. The immersion cooling systems really turn into a process piping application because of the actual fluid required for the immersion, right? It's not electrically conductive; I can't think of the word right now. Dielectric. Thank you, yes. So the piping is different, the equipment is different, and that kind of thing. For the most part, the rejection of the heat is nearly the same, whether it's through a cooling tower or something like that, because nine times out of ten you have a heat exchanger somewhere. So the infrastructure outside the technical water, whether it's an immersion or a liquid cooling data center, is nearly the same.

SPEAKER_01

As we wrap up, Jason, what advice would you give to owners, operators, design teams, manufacturers, and reps who are just starting to consider liquid cooling in their projects? We know it's here in the here and now, but it started, as you mentioned, with the hyperscalers; I think they've set the stage a lot for liquid cooling. It's almost like fashion, right? Fashion starts in New York and LA and makes its way to the rest of the country. I think liquid cooling is now making its way to the rest of the computing world. What advice would you give to people just starting out on their first liquid cooling project?

SPEAKER_02

Reach out to the people who have done it and been down that path, and understand that we're all learning together. There are always problems on construction job sites, Robert, every day, right? But in this instance it's different, because this is a technology that had never hyperscaled until the last couple of years. It's moving exponentially faster than anything I've ever seen. An office TI is typically an office TI, whether it has high-end finishes or not. Whereas data centers, for the last 20 or 30 years, have for the most part been air-cooled, and now all of a sudden we have higher rack densities, a lot more piping, and a lot more equipment required. In fact, I was talking to some of the large equipment manufacturers, whether it's Trane, Carrier, or York chillers, and if one or two of these projects goes, it literally ties up an entire plant for an entire year. So the actual resources required to build all of the stuff we're installing are astronomical. Pre-planning is imperative.

SPEAKER_01

That's a great point, because when we talk about planning and resources, most of the time we just think about power, and maybe a little bit of water to keep operations going, but the operational utilities aren't the whole picture. I mean, supply chain; I know it's gotten a little better since COVID and the dark days that presented. But it's interesting how, to your point, if one of these big, huge campuses gets awarded to a manufacturer, it eats up so much of their manufacturing and production bandwidth. So maybe multiple providers across various projects keeps things a little more sane. Yeah, absolutely. Awesome. Well, I appreciate it, Jason. Again, thank you so much for being here. I think we've covered most of the questions for now. To the audience members, thank you for joining us. And Jason, I know we've got a couple of topics left, the nuclear one and maybe a few others, so we'll have you back in the next little bit; we look forward to that. But most importantly, thank you for joining us today. Awesome. Yeah, thank you. It was a pleasure. All right, we'll talk to you soon. All the best, everybody.