Entangled Things
Episode 137: Parallel IQCC With Scott Genin
In Episode 137, Scott Genin, Vice President of Materials Discovery at OTI Lumionics, unveils how GPU-accelerated quantum chemistry is revolutionizing materials science. The discussion highlights the limitations of current quantum hardware and the role of AI hardware in overcoming these challenges. Scott shares insights into how classical simulations can reproduce the output of quantum computers, pushing the boundaries of what's possible. He emphasizes the significance of these advancements for real-world applications, from OLEDs to new catalysts. This episode is essential listening for anyone interested in the future of quantum computing and materials discovery. See more about the announcement here: https://arxiv.org/abs/2603.08883
Hey Cyprien, how are you doing?
SPEAKER_04Hey Patrick, I'm doing great. Looking forward to another episode of Entangled Things.
SPEAKER_03Yes, we're rejoined by Scott. Scott's been on the show before. Scott, do you mind reintroducing yourself to the audience?
SPEAKER_01Yeah, thank you for having me on, Patrick and Cyprien. I'm Scott Genin, and I'm the Vice President of Materials Discovery at OTI Lumionics.
SPEAKER_03And as I remember from the last time we talked to you, which was very interesting, you're using quantum-inspired methods for your optical development and your product development. And there are some big things brewing, from what I understand.
SPEAKER_01Yeah. Last year I was on to talk about the theoretical foundation that we've now been able to implement at scale across GPUs. And the speedup and the results are impressive, because it really opens up not just the scientific application, the scientific discovery of what quantum algorithms could do in materials science, but it also demonstrates that they can be done in a reasonable amount of time on classical hardware.
SPEAKER_03Cool. So you're using AI hardware, like the Blackwell chips, those kinds of rigs, to really squeeze out that quantum advantage, but you're still using classical compute. I've told this story before: when I first started talking to Cyprien about this kind of technology, before there were even quantum computers of any kind, there was a company called 1QBit up in Vancouver that was doing quantum-inspired work. And it seems like you're a good example of a company that's taken that and run with it.
SPEAKER_01I think, in some respects, I would say we've taken it to the extreme. 1QBit does have a broader focus; they're basically a consulting firm. OTI has a singular objective, which is to do quantum chemistry calculations as efficiently as possible. What this really does is set this algorithm, iterative qubit coupled cluster, apart into a different class or domain than just some sort of interesting novelty to be run on a future quantum computer. It now stands as a high-accuracy general quantum chemistry algorithm that has much more efficient scaling and much better utilization of GPU resources than other high-accuracy classical quantum chemistry methods. Now, there are two aspects that make this very exciting. There's the scientific aspect, and then there's the applied aspect. From the scientific and theoretical aspect, this is the first realization of a quantum algorithm, meaning that it's doing quantum chemistry, fermionic excitation operators and electronic structure problems, in the qubit domain of math, as opposed to the traditional fermionic domain. It's important to emphasize that all quantum chemistry methods have their own pros and cons, quirks, oddities about them. And that was something which could never really have been explored in depth prior to this, because fundamentally, most results were unconverged. So if we look at quantum subspace diagonalization, you look at the results and it's like, these results are worse than CISD.
And yet it should be a true ground state FCI-type solution. If it's unconverged, we can't really do downstream electron density calculations, and we can't map it onto the other things that chemists care about, like natural bonding orbitals. These are things that applied synthetic chemists care about even more, because they give them an intuition about the underlying chemistry that is happening. And those things are just not possible if you can't converge your quantum chemistry calculation to a correct, converged state. Quantum computers have never been able to achieve this at scale, and there has not really been a quantum-native method that has been able to do this at scale until now. Wow. And so now we can actually ask the question, because if you look at pretty much all the studies, they're all saying: oh, if I had a quantum computer with a hundred fully error-corrected logical qubits, I would be able to solve this problem. But I think it's actually fair to push back and say, well, what will this converge to? You say that it will have this one error, but is it actually going to converge to the FCI solution? And at that scale, we can't really determine that. So here's what this paper has demonstrated. Historically, people have used things like the density matrix renormalization group, which is a very robust, well-known classical algorithm that I think maybe people got carried away in calling the be-all and end-all. But here we objectively show that IQCC outperforms it. It basically says: I have a variational value that is lower than your best attempt at DMRG. And I can do it on fairly available GPU hardware.
SPEAKER_03Exactly. You haven't waited for, oh, someday we're gonna be able to do this. You're doing it. You're getting the best of both worlds, really, because you're not waiting for, oh, well, if I had a quantum computer, this would really go. You're leveraging the fact that compute has exploded with AI. We've got some Blackwell chips that we use for other things, and it's just mind-blowing how much they can do, with the backplanes and the way to put them all together. And so you're taking advantage of that. Now, I imagine if suddenly there were huge breakthroughs in quantum computers, and there were hundreds or thousands of logical qubits, then you'd benefit even more. So I think this is a great way to venture forth and get the best of it no matter how it comes out. You're basically betting both sides of the table.
SPEAKER_01Yeah. Because now we can verify that a quantum algorithm, again, working in qubit notation and the qubit domain, will outperform some of the best known classical methods. And that's fundamentally a significant milestone, to objectively surpass them.
SPEAKER_04So if I'm reading these results correctly, if we were trying to do some kind of mapping to the world of quantum computers, what you achieved is basically the equivalent of 200 qubits, like what you would get on a quantum computer with 200 qubits. Is my understanding correct?
SPEAKER_01Yes, that is correct. With 200 fault-tolerant, fully connected qubits. I think that's the caveat: in how we do it, entangling qubit one and qubit 200 is instantly doable from our software standpoint.
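For context on why 200 fully connected qubits is far beyond brute-force emulation: a state-vector simulator has to store 2^n complex amplitudes, so memory doubles with every added qubit. A back-of-envelope sketch (the figures here are just standard complex128 sizes, not from the paper):

```python
def statevector_bytes(n_qubits):
    """Memory for a full state vector: 2**n complex128 amplitudes, 16 bytes each."""
    return (2 ** n_qubits) * 16

# 30 qubits already needs 16 GiB of RAM...
print(statevector_bytes(30) // 2**30, "GiB")
# ...and 200 qubits needs 2**204 bytes, vastly more than any
# conceivable classical memory. Hence the operator-space approach.
print(statevector_bytes(200))
```

This is the exponential wall the episode keeps returning to: tricks like Amazon's state-vector pruning stretch the limit to roughly 80 qubits, but no amount of pruning survives the doubling forever.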
SPEAKER_02Yeah.
SPEAKER_01But on an actual physical chip, that might actually be quite hard.
SPEAKER_04That's very interesting to me, right? We've touched on this several times on this show, the evolution and the benefits of simulation, so to speak. But this is where I would like to get a bit more detail from you. This is not really simulating a quantum computer in general, right? This is actually implementing an algorithm that runs on GPUs and yields a result that is equivalent to what you would get with a 200-qubit quantum computer. Is this correct?
SPEAKER_01No, that is exactly correct. So here's what makes the IQCC implementation scale very well on GPU, and it actually has much better scaling on GPU than it does on CPU. Traditionally, when we think about classical emulation of a quantum computer, or trying to actually simulate the physical quantum computer, we have approaches like state vector, where we have to store the wave function as a complex state vector. And that's where a lot of the bad scaling happens. Now, there are techniques, Amazon had developed techniques to prune this massive state vector so that it was simulatable on classical computers, easily up to 80 qubits and a little bit beyond. But fundamentally there is a limit, especially as you want to get more and more accurate. What makes our implementation very different is that it doesn't really consider the state vector. It works in operator space. And this allows us to utilize integer and bitwise manipulation to mimic the output, meaning the observable, of the quantum computer. Now, from a theoretician's or a physicist's standpoint, that may seem a little bit gimmicky. But I would point out that my background is in chemistry and chemical engineering, so the observable is pretty much what I care about. This is what I'm going to see. So I wouldn't say you necessarily bypass the state vector, but by representing the observable in binary bit strings, you can exploit the GPU very efficiently to do these calculations and reproduce those observables. And that's what we are effectively doing.
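To give a flavor of what "operator space plus integer and bitwise manipulation" can look like, here is the generic textbook encoding of Pauli operators as bit masks (a standard symplectic representation; this is an illustration of the idea, not OTI's actual implementation). An n-qubit Pauli string becomes two integers, one marking X factors and one marking Z factors (Y sets both); products reduce to XORs and commutation checks to popcounts:

```python
def pauli_masks(s):
    """Encode a Pauli string like 'XZIY' as a pair of integers (x_mask, z_mask)."""
    x = z = 0
    for i, p in enumerate(s):
        if p in "XY":
            x |= 1 << i
        if p in "ZY":
            z |= 1 << i
    return x, z

def commutes(p, q):
    """Two Pauli strings commute iff their symplectic inner product is even."""
    (x1, z1), (x2, z2) = p, q
    overlap = bin(x1 & z2).count("1") + bin(x2 & z1).count("1")
    return overlap % 2 == 0

def product_masks(p, q):
    """Pauli product, up to a global phase: the masks simply XOR together."""
    (x1, z1), (x2, z2) = p, q
    return x1 ^ x2, z1 ^ z2

# X and Z on the same qubit anticommute; XX and ZZ commute.
print(commutes(pauli_masks("X"), pauli_masks("Z")))    # False
print(commutes(pauli_masks("XX"), pauli_masks("ZZ")))  # True
# X * Z on one qubit gives Y (both masks set), up to phase.
print(product_masks(pauli_masks("X"), pauli_masks("Z")) == pauli_masks("Y"))  # True
```

Operations like these map onto integer ALUs rather than huge complex-vector arithmetic, which is one plausible reason an operator-space method exploits GPUs so well.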
Yeah, that's one of the reasons why you get a 90x speedup on a Blackwell over CPU. Fundamentally, if I want to be as transparent and direct as possible, IQCC has slightly better scaling than DMRG on CPU, but I think people would sit there, scratch their chin, and be like, well, 109 hours versus 130 hours, which is currently one of the benchmarks I have where IQCC comes in lower than DMRG by about one millihartree, and they'd go, mm-mm, maybe not. But then I say, well, what about one hour on a GPU? And fundamentally that's a different question. Two orders of magnitude, yeah. The implication is that it completely opens up what you could do with this. Because one of the things you could do with it is geometry optimization. This is one of the questions that people from the applied chemistry side always ask: can I do geometry optimization? And before, the answer was, well, if each iteration takes a hundred hours, you're going to be waiting a really long time, because you have to do them sequentially, stepwise. But if each step now takes only one hour, that's a very different question. The geometry optimization of some OLED materials that we have may take 50 to 150 steps. So the question really becomes: could you wait 60 hours to have a variationally geometry-optimized solution, versus 60 weeks? Exactly. It completely opens that up. Oh, it's amazing.
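The arithmetic behind that comparison is simple but worth making explicit; the step count and per-step times below are illustrative round numbers in the range quoted in the conversation, not exact benchmarks:

```python
HOURS_PER_WEEK = 7 * 24

def optimization_walltime(steps, hours_per_step):
    """Geometry optimization steps run sequentially, so wall time is steps * per-step time."""
    return steps * hours_per_step

# ~100 sequential steps at ~100 CPU-hours each, vs ~1 GPU-hour each:
cpu_hours = optimization_walltime(100, 100)   # 10,000 hours
gpu_hours = optimization_walltime(100, 1)     # 100 hours
print(round(cpu_hours / HOURS_PER_WEEK, 1), "weeks on CPU")
print(gpu_hours, "hours on GPU")
```

Because the steps cannot be parallelized against each other, a per-step speedup translates directly into the same factor on the total wall time, which is why a 100x per-iteration gain turns a ~year-long calculation into a long weekend.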
SPEAKER_04Yeah, and just to make sure our listeners are following: we're talking here about IQCC and DMRG. IQCC stands for iterative qubit coupled cluster, and DMRG stands for density matrix renormalization group. Those are the methods we're discussing. But here's what strikes me as probably the most important thing about this, and I don't want to downplay the practical side of it, but for me, what's really interesting is that this might just push out what we would consider the boundary of quantum advantage for these things. Because I know for a fact that in chemistry in general, the consensus was that it's around 40, 50, 60 qubits, whatever that is. Now you're coming with this result and you're saying: look, sure, we're using quantum-inspired methods, we're still running on classical compute, but we can achieve a result that is equivalent to what you would achieve with 200 qubits. So all of a sudden, it feels to me like it's pushing out the boundary of what quantum advantage would be.
SPEAKER_01Correct. It also calls into question the notion of quantum supremacy, in particular for variational quantum eigensolver-type calculations. Can quantum supremacy ever be achieved in that class of algorithms? That bound needs to be dramatically pushed out. But even for quantum phase estimation for quantum chemistry, we now really have to reconsider what that boundary is going to be. Because using the resource estimators that Microsoft had published, using their inputs on what they think their Majorana quantum computer will do when it's constructed, and I'm sure one day it will be, they say that if everything goes well, they will need about 200 hours to get within this one millihartree accuracy. And it's like, okay, and then it even comes down to a practical standpoint: how much is it going to cost to run? What's one hour of a Blackwell GPU's time versus 200 hours of a quantum computer's time?
SPEAKER_03Time, energy, those are the big trade-offs right now, and AI is really pushing. I think we have to admit that three years ago this wasn't possible with the technology. Classical has rocketed forward, and the goalposts are moving. I agree, and I don't think that's a bad thing. It's probably a bad thing if you're trying to prove quantum supremacy, but that's more of a marketing thing than anything else.
SPEAKER_01Yeah. I mean, this dramatically changes even the quantum advantage or quantum business utility case, right? Especially in materials.
SPEAKER_03That said, I don't think there's going to be, and this is one of the things we've talked about in the past, a quantum winter, the way AI has had several winters where funding dried up. I think the Shor's algorithm specter is going to keep the money flowing, not only to the security side of things but also to the materials science side, which is fine. But these technologies push each other. It's an oversimplification, but when there was more than one dominant browser in the market, we always got better browsers and better features. And I think the fact that quantum is challenging classical, and vice versa, is a good thing for all of us.
SPEAKER_01Yeah, and I think these advances are also important for helping motivate why a quantum computer might be necessary in the future for quantum chemistry, because now we can explore how a quantum computer is going to compute natural bonding orbitals, how it's going to compute chemical properties like electron density, from a native quantum perspective, as opposed to being like, oh well, I have the DMRG output. DMRG is fundamentally DMRG. Same with CCSD. There are trade-offs on the theoretical side which have downstream consequences, and chemists, especially applied chemists, are very familiar with those consequences. If we can understand the quantum versions of those consequences, of doing these calculations on a quantum computer, which have previously never been possible to explore, then this serves as the benchmark and standard that quantum computing companies need to reach and exceed in order to really motivate the hardware. And we might be able to discover new interesting physics or chemistry using this quantum-inspired technique in the short run, which would then motivate: oh, we really need a quantum computer that can run quantum phase estimation perfectly. Because it could be that with another hundred qubits, say 300 qubits, which we haven't been able to simulate efficiently at that scale, you could dramatically open up a whole new range of materials, innovation, and discovery.
SPEAKER_03Nvidia's Vera Rubin chip might well be able to do that. Just the fact that AI chips are moving so quickly, and then custom chips, yeah. I could even see, if this really takes off, which it should, maybe we'd even get chips designed for this. I assume that while the GPUs are really good at it, there are optimizations at the chip level that could probably help as well.
SPEAKER_01Yeah, definitely with the memory transfer and sharing information between them, right? Yeah.
SPEAKER_04Yeah, and I think at the end of the day, this boils down to the fact that, fundamentally speaking, DMRG is based on a model of classical tensor networks, as opposed to IQCC, which is qubit-native. So you are literally describing the problem in the native language of quantum. And if I'm correct, even up until relatively recently, DMRG was considered the more mature approach. But what you're saying with your result is that it looks like IQCC can clearly surpass DMRG and get us to a point where simulating quantum-native things is becoming more powerful with this approach.
SPEAKER_01Yeah, and you've hit the nail on the head in saying that DMRG comes from this one-dimensional model system. That has consequences when it's expanded to 2D. IQCC doesn't have those constraints. One thing that we know about IQCC is that it's a lot less sensitive to the starting orbitals; it's much more orbital-invariant. To be very specific, it's well known in the DMRG world that the order of the orbitals really matters, because if they're spatially too far apart, that can cause either convergence issues or require more and more computational time. IQCC does not have that issue. It did have an issue where you had to optimize all these amplitudes simultaneously, at the exact same time, which was a little bit tricky. That's what we figured out last year, and the implementation onto GPU is what we did within the last six months. But, to your point, it will also produce the output that a quantum computer would produce. And so now we can actually evaluate that objectively and ask, beyond just spitting out an energy number, what else can I do with this? Which is something that, using some of these subspace diagonalization techniques that other companies are proposing and using, you can't really do, because you haven't converged your system to its ground state. And if you can't do that, then because of the scale problem, they're limited to 77 qubits or something, you can't go and explore it, because your picture of the wave function of your Hamiltonian is already approximated and incorrect.
That's what always allowed DMRG to have somewhat of an edge: well, yeah, but I'm converged within my model space, and therefore at least my solution is stable. You're starting from a better place. Yeah, you're starting from a better place. DMRG, though, often requires extra machinery; in some of the new papers they're like, oh, we run a thousand unrestricted Hartree-Fock simulations beforehand to sample and do this sum-of-Slater-determinants method. And sure, that scales O(N^6) beforehand, but maybe I'll save four hours on the downstream calculation. IQCC just does not care about that stuff. That's what makes it a much more universal, robust method: its drawbacks are much more tied to its implementation and its execution on classical hardware than they are to theoretical approximations.
SPEAKER_04And what also strikes me as very interesting: we recently had a guest who was talking about how, now that you're starting to have more scalable quantum computers, you can actually investigate the art of the possible, because you start to see practical results. And I think this is exactly in line with what you're saying, because you're seeing the actual result that the quantum computer would produce. Now you can start exploring: okay, what else could we do with these things? What are some of the other angles we could take? Which is a side of simulation that I personally was never aware of, because in my mind, simulation was always: we're going to simulate a quantum computer to understand, maybe, how many qubits we need for this or that algorithm, things like that. This result kind of turns it upside down, in the sense of: we're going to show you what a quantum computer with this number of qubits will do for this particular simulation, and it's going to be the real thing. Which I think is remarkable.
SPEAKER_03So, Scott, you're not just a researcher, you're actually building products. Is this influencing products that are already in production now, or just plans for optimizing your product lines?
SPEAKER_01I'm probably going to have to err on the side of caution and say no comment.
SPEAKER_03But I would assume that eventually this has real-world implications, as opposed to you just putting a paper out into the ether.
SPEAKER_01Correct. I mean, one of the things that I can at least talk about is that this is integral to our forward expansion plans into non-OLED materials. It's one of the reasons why we did the greenhouse gas emission catalyst system: we wanted to demonstrate that we're not just all about simulating OLED materials, we have many more areas of expertise that we can simulate and execute on. So in those areas, yes, I can say this is an active area of trying to predict what's going to help us get the best product-market fit. But unfortunately, for OLED materials, I have to say no comment at this point.
SPEAKER_03Oh, no problem at all. But it's still exciting, because a lot of times this kind of development is just, oh, we built this, now let's see if someone uses it, but you're definitely going to use it, I'm assuming. So we're excited to see what comes out down the road. Hopefully, when you come back on, you can tell us: remember I talked about this? This is what we did with it, even if that's next year. Anything else people should know about this? Are there any other ramifications? I'm sure you're always working on something interesting. Anything else you want to get out? Because we're coming up on 30 minutes. We've got time, but I want to make sure we get everything, every last drop.
SPEAKER_04Before Scott answers that, I just want to pile on one more thing for his answer. One of the things in the paper that really drew my attention, if you could comment a little bit on it, was the parallelization strategy. I think that's a unique approach, and it's really fascinating to see how you folks innovated in that space, obviously also combining it with running on GPUs. I think that's absolutely remarkable.
SPEAKER_01No, thank you. The whole parallelization, and the strategy behind it, is really what makes it work. But I also want to reinforce the importance of good software architecture. It was programmed in C for a reason.
SPEAKER_04So I guess it wasn't vibe coded.
SPEAKER_01No, no.
SPEAKER_04I'm just being mean to my AI counterparts here.
SPEAKER_01Yeah, I mean, I don't want to say that AI coding is inherently bad. There are many instances where I'm like, oh, I need a script that will pull data from the output of this software and manipulate it, the cruft work that you'd give to a junior developer. I don't even know if I'd give it to a junior developer; I think I'd just look it up on Stack Overflow at this point. Yeah, copy a function. And one of the things I'll point out that it is, well, I wouldn't say very good at, I would just say that it speeds up, is the ability to install very esoteric scientific software. We only recently have actually been able to run DMRG, because for the past basically three years, nobody at our company had been able to install Block2. It had been the most frustrating experience. And then, using Claude Code, it definitely went around in loops, and I'm not even entirely sure what libraries it had modified behind the scenes, but it eventually got it to run. There were a lot of changes to mathematical libraries behind the scenes that it was making, and it's just because it can basically run a thousand commands very quickly. I'm sure only one of those commands was the one that was needed, but it ran a thousand and got it to work. So that's what I see it being very good at. But in terms of architecting a code from scratch, or architecting a code so that it's incredibly efficient, no. The parallelization strategy had been worked on for basically almost three years. It is incredibly complex and nuanced.
And we actually don't let the AI touch that stuff, because it is serious. Even though it is incredibly well architected, commenting out one line would probably cause the entire thing to implode instantly and stop working.
SPEAKER_04And I would assume there are also patterns that are not common in the public world, so the AI does not really have anything to learn from. There's probably a good part of it that's unique to your approach. Those are the kinds of scenarios where, at least currently, I think AI models just break. They can't cope with things they haven't seen a thousand times in a thousand different places.
SPEAKER_03Right. That's the key. I don't want to digress into AI, but it has kind of enabled what you were doing, because if it weren't for AI, NVIDIA wouldn't be building the chips that you're leveraging. I think the right way to think about it is: if you're innovating intentionally, AI is not your friend. If you just need to rebuild something that's workaday, then yeah, AI is useful. But we still need people to point, to guide, to judge, and to blame in the AI world. So, like I said, the cruft work, the junk stuff, that's definitely a good space for it.
SPEAKER_01Yeah, I would just comment that the parallelization strategy was developed well before we had GPUs that were designed for AI. It was developed on gaming GPUs. So I would actually attribute it to computer games, to the graphics cards you'd have in your desktop. So it's Doom and Quake. It's Doom and Quake. IQCC is kind of like the Quake fast inverse square root algorithm, right? It's very similar in spirit to that.
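For listeners who don't know the reference: the famous Quake III trick approximates 1/sqrt(x) using pure integer bit manipulation plus one Newton refinement step, the same spirit of bitwise cleverness standing in for expensive arithmetic. A Python rendition of the classic algorithm (0x5F3759DF is the original "magic number"; the original was of course C operating directly on a 32-bit float):

```python
import struct

def fast_inv_sqrt(x):
    """Approximate 1/sqrt(x) via the Quake III bit-level trick plus one Newton step."""
    # Reinterpret the float's bits as a 32-bit unsigned integer.
    i = struct.unpack("<I", struct.pack("<f", x))[0]
    # The magic-number shift yields a surprisingly good initial guess.
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack("<f", struct.pack("<I", i))[0]
    # One Newton-Raphson iteration sharpens the estimate to ~0.2% error.
    return y * (1.5 - 0.5 * x * y * y)

print(fast_inv_sqrt(4.0))  # close to 0.5
```

The analogy to the episode's theme: treating the bit pattern itself as the object of computation, rather than the mathematical value it represents, can be dramatically cheaper than the "honest" calculation.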
SPEAKER_03Very cool. Anything else? We're over half an hour, but we've still got a little bit more time. Is there anything else you wanted to get out? This is very important news. It's great to see something that's been developed that's already being used in products, or going forward into products, that will affect everybody's lives. Anything else you want our audience to know before we wrap it up?
SPEAKER_01Well, I'd like to say thank you for having me. I'm very excited to share this monumental milestone. However, in my eyes this is actually just the beginning. The journey is not nearly completed. I'm sure that even this year we're going to have multiple other innovations coming out, and new and interesting applications, whether or not those translate as directly to tangible output. Obviously we have to explore these things and ponder the broader implications. And I know that, in some respects, it may sound like a bit of doom and gloom for some architectures or certain algorithms related to quantum computing, but I think it serves more as a concrete, definable benchmark for those hardware companies.
SPEAKER_02Oh yeah.
SPEAKER_01And it also says that some of them need to start upping their game.
SPEAKER_03Well, you're moving frontiers, and that's a good thing.
SPEAKER_04Yep.
SPEAKER_03Yeah. We enjoy talking to you, and please let us know when you're ready to share some more. We'd love having you on the show. We appreciate your perspective, and thank you for sharing this invention with us.
SPEAKER_01Oh, thank you for having me on.
SPEAKER_03Thanks everybody.
SPEAKER_01Very exciting. Thank you. Thank you.
SPEAKER_00See you soon. Bye. Cybercrime is one of the biggest threats to businesses of all sizes and industries. With almost half a million open cyber positions, the problem is compounded by the lack of available talent in the marketplace. At Pulsar Security, our elite team of highly credentialed experts collaborate with you to assess your current defenses and develop solutions tailored to your specific needs. With services ranging from cybersecurity education to advanced penetration testing and red teaming, you can start reducing your risks today. Visit pulsarsecurity.com and let's secure your digital future together.