Infinite Machine Learning: Artificial Intelligence | Startups | Technology

AI's Role In Physics, Chemistry, and Beyond

April 09, 2024 Prateek Joshi

Anima Anandkumar is a Bren Professor at Caltech. Her work developing novel AI algorithms enables and accelerates scientific applications of AI, including scientific simulations, weather forecasting, autonomous drone flights, and drug design. She has received best paper awards at venues such as NeurIPS and the ACM Gordon Bell Special Prize for HPC-Based COVID-19 Research. She holds degrees from IIT Madras and Cornell University and has conducted postdoctoral research at MIT. She was previously principal scientist at Amazon Web Services and senior director at Nvidia.

Anima's favorite book: Hyperspace (Author: Michio Kaku)

(00:00) Introduction
(00:10) The Impact of AI on Science
(02:25) AI Disrupting Physics
(03:02) Challenges in Fluid Dynamics
(06:21) Achieving Orders of Magnitude Speedup
(10:43) AI Discovering New Laws of Physics
(11:45) Complexity of Fluid Dynamics
(15:54) Simulating Physical Phenomena with AI
(22:23) AI for Drones in Strong Winds
(25:16) Optimizing Experiments with AI
(28:19) AI in Quantum Chemistry
(32:38) Technological Breakthroughs in AI
(33:23) Rapid Fire Round

--------
Where to find Prateek Joshi:

Newsletter: https://prateekjoshi.substack.com 
Website: https://prateekj.com 
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 
Twitter: https://twitter.com/prateekvjoshi 

Prateek Joshi (00:01.557)
Anima, thank you so much for joining me today.

Anima (00:05.086)
Yeah, thank you, Prateek. It's a pleasure to be part of this podcast.

Prateek Joshi (00:10.389)
You've done phenomenal research on building AI models for science, and in so many different directions. So maybe you can start by explaining how science has been studied in the past, in general, and how AI is changing that?

Anima (00:30.206)
Yeah, certainly. I think this is really an exciting time for AI, and also for science and engineering, right? People mostly think of AI as language models or chatbots, but I think there is a lot of untapped potential in scientific domains. And I think it'll transform the way we do science and engineering. Because if you think about it, what is the scientific method today, right? I mean,

not much has changed since the times of Newton, although the specific processes and devices may have. The scientific method is where, you know, a human, right, a scientist comes up with ideas or hypotheses, and then you go to the lab and test them. Those experiments may take long. I mean, they may even take weeks to months, depending on the complexity. And if the idea doesn't work out, then

it's still back to the drawing board and wondering what went wrong, you know, what could I do differently? And again, the cycle proceeds. So it's this repeated feedback loop of coming up with ideas, then going and testing them out in the lab. And AI, like these language models, ChatGPT, may now come up with ideas, right? It's debatable if it's better or worse than human scientists, but it can come up with new ideas.

But what it cannot do right now is go and replace those lab experiments. And to me, that is the biggest bottleneck. Having AI be able to, you know, in a good way, judge what those experimental outcomes could be, and be able to rule out the bad ideas very quickly, is gonna be game-changing. And those are the aspects that I'm working on.

Prateek Joshi (02:25.269)
Yeah. And you mentioned Newton, and in general, physics has been a very important part of humanity. I mean, for centuries, people have been studying physics. We have made big strides because of physics. Now, within physics, can you talk about areas where it's ripe to be disrupted by AI, either because it needs a lot of compute, a lot of

data, just something that would take humans an extremely long time and AI can just do it really well today.

Anima (03:02.782)
Yeah, absolutely. And I should just mention that physics is my first love. You know, in high school, I grew up reading Feynman's lectures. And so coming to Caltech was really a dream after that. So, dream come true. And, you know, I really like how there are building blocks in science, and especially physics, right? So we have

Prateek Joshi (03:09.173)
Ha ha ha ha.

Prateek Joshi (03:20.853)
Yeah.

Anima (03:29.214)
mathematical models for all kinds of processes that may look very complicated: fluid dynamics, you know, how the weather evolves on this planet, molecules interacting with one another. They all follow Newton's laws of physics, or if you go to the quantum level, you have, right, quantum physics and chemistry. So we have mathematical models. Usually they're very compact, those equations are short to write down,

but incredibly expensive to solve, right? So think about simulating an aircraft wing, how it'll evolve as it flies, what the aerodynamics are. And you may also want to incorporate, for instance in a drone, not just the flight itself, but the noise made by the drone propellers.

So that's what we call multi-physics. It's not just one phenomenon being captured, but multiple different equations, right? Coupled to one another. And the weather on this planet is a highly coupled partial differential equation system, if you model it that way. There are clouds that are moving around, and they're turbulent. So that's what we call fluid dynamics, where...

the movement is very hard to simulate and capture. And there's also, of course, the heat from the sun, there are ocean currents. So there are so many different phenomena happening simultaneously. And that's why traditionally these have been some of the most expensive HPC operations with traditional numerical methods. And this is where AI,

to the surprise of a lot of people, was able to completely replace numerical solvers and come up with accurate predictions. So when we launched FourCastNet, which is the first high-resolution AI-based weather model, what we were able to show is that it's competitive with the current weather forecasts, but tens of thousands of times faster.

Anima (05:52.382)
And so what used to take a large supercomputer, right, this whole big system of CPU cores to run, the model can now run on a local desktop with just a gaming GPU. So to me, that's incredible: the ability to miniaturize these scientific simulations and predictions and save costs to such a tremendous degree.

Prateek Joshi (06:21.685)
That is incredible. And to achieve orders of magnitude speedup, obviously so many things had to come together. But if we had to explain it in layman's terms, what were the top one or two reasons such a massive speedup was achieved? Is it compute? Is it modeling? Is it a combination of both? What led to that?

Anima (06:47.134)
Yeah, that's a great question. So I want to give an analogy for those who are seeing this podcast, right: how computer vision was before deep learning came about. What people were doing was hand-engineering features, right? They thought, oh, this specific feature could be good at recognizing faces, this one could recognize the borders or edges in an image. But everything was

humans thinking about what would be a good feature and then going and trying it out. And then what we see with deep learning is that the features are learned from data itself, and that is much more powerful, more accurate, more efficient than what you could do with hand-engineered options. And to me, it's the same exact thing here. If you think about numerical solvers today,

it's scientists looking at, okay, you know, I have to solve for fluid dynamics, right? How the fluid is gonna flow over time and space. So what I'm gonna do is look at points that are nearby in space and time, and surely there should be strong correlation between them. So let me try to solve all of that together. But then if you try to do it in a naive way, or in a direct way,

you need to have a very fine grid, because the points need to be very close to one another. Otherwise there is a big deviation in their behavior. And once you require points to be this close, right, then, you know, it's a big memory requirement, a big computational requirement. And also the methods are iterative. You're saying, oh, just adjust the values between nearby points, and...

that's very sequential, and you cannot parallelize it. So there's been a lot of effort to bring traditional numerical methods onto GPUs, right? And you might get a 2x to 5x speedup, but not much more than that, because these methods are inherently iterative. And you're kind of making a decision: oh, should I just do this on the native space-time grid, if it's an equation that is in space and time,

Anima (09:10.462)
or do I design features myself, or a basis myself? Maybe the spectral domain is better. Maybe I choose another, different basis. So people are continuing to design ways to choose a grid, choose a basis, and then do the computation, which is iterative. And instead, with deep learning, with a method called neural operators,

what we showed was you can still have the same ability to capture these processes at any resolution. And on the other hand, because it's a deep learning framework, it has nonlinear transformations, so you can learn better features. So think of it as, instead of solving this fluid flow directly in space and time, you're now transforming it through nonlinear transformations

to a latent space. And in that latent space, you may not need a very fine grid; you may get away with a very coarse grid. And you also do not need a lot of iterations. You may need just a few iterations, which are represented as layers of a neural model. And then you train it end to end, and it can predict the solution very accurately. And I think to me, that's very powerful. But I think for a lot of deep learning people,

It's kind of obvious that this should work better because that's what has happened with computer vision and other areas.
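
To make this concrete: one well-known instance of neural operators is the Fourier neural operator (FNO) from Anandkumar's group. Below is a minimal, illustrative sketch of a single FNO-style layer in PyTorch, not the production implementation; all sizes are toy assumptions, and the problem is 1D for brevity. The learned transform lives on Fourier modes rather than grid points, which is why the same layer can be evaluated on coarse or fine grids, and the layer count plays the role of the "few iterations" she mentions.

```python
# A toy sketch of one Fourier neural operator (FNO) layer: mix features in the
# spectral domain with learned weights, then apply a pointwise nonlinearity.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels, n_modes):
        super().__init__()
        self.n_modes = n_modes  # number of low-frequency Fourier modes to keep
        scale = 1.0 / channels
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, n_modes, dtype=torch.cfloat)
        )

    def forward(self, x):                 # x: (batch, channels, n_points)
        x_ft = torch.fft.rfft(x)          # transform to the spectral basis
        out_ft = torch.zeros_like(x_ft)
        k = min(self.n_modes, x_ft.shape[-1])
        # learned linear mixing of the retained modes (the "kernel" lives here)
        out_ft[:, :, :k] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :k], self.weights[:, :, :k]
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])  # back to physical space

class FNOLayer(nn.Module):
    def __init__(self, channels=32, n_modes=16):
        super().__init__()
        self.spectral = SpectralConv1d(channels, n_modes)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):
        return torch.relu(self.spectral(x) + self.pointwise(x))

# Because the weights live on Fourier modes, not grid points, the same trained
# layer applies to a coarse or a fine discretization of the input function.
layer = FNOLayer()
coarse = torch.randn(1, 32, 64)    # 64 grid points
fine = torch.randn(1, 32, 256)     # 256 grid points, same layer
print(layer(coarse).shape, layer(fine).shape)
```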

Prateek Joshi (10:43.381)
Right. That's actually a wonderful, wonderful explanation of how a concept that has worked well in computer vision applies here. I mean, we use it every day, everyone uses phones and face recognition. We use AI on a day-to-day basis; we barely realize it now. And I think taking that and applying it here is brilliant. And also, fluid dynamics in general,

it's been a fairly complex topic. And actually, one of my friends made a joke that for a long time, Navier-Stokes didn't let him sleep, because he just couldn't get past it, he couldn't stop thinking about it. So maybe in simple terms, why is fluid dynamics so hard to understand and model, as compared to some of the other things in physics which are

more or less solved. What makes fluid dynamics so complex?

Anima (11:45.694)
Yeah, certainly. You know, fluid dynamics and some of these equations, right, are very well studied, but there are still lots of open problems. There's in fact a Millennium Prize problem that relates to fluid dynamics. So, you know, I'm not saying that deep learning immediately solves all of those issues, right? But deep learning gives you the tools to especially overcome some of the computational burden. And I mentioned that

analogy with computer vision. So is the answer then to just take a standard neural network, whether it's, right, a UNet or a transformer, do I just plug it in and magic happens, or do we need something more? And that relates also to the complexity of fluid dynamics itself. So if we take a UNet or a transformer, right, as applied in computer vision, you know, the space-time

evolution is essentially a video, right? So why can't I just use these video models? The main difference, and this only comes up as the fluid flow gets more and more turbulent, is that what you require is not learning at just one fixed resolution. If the resolution is too coarse, you cannot zoom in further. So think about it: if I have a fixed number of pixels and I try to zoom in further

into how the fluid is moving, it just gets blurry, right? There is no further information. And for computer vision, that's fine. If Sora is trying to generate a video of a dog or a human, what you need is for it to look okay to the human eye. You don't need to capture the very fine detail.

But for seeing how the fluid is gonna move and evolve in the future, what you need is to be able to get to those finer details. And that's what makes it so incredibly hard to understand, right? It's a highly nonlinear evolution, but also one where the traditional methods require that high level of computation, because you need to get to the fine grids to capture all of that complexity.

Anima (14:00.51)
And so this is where neural operators are able to overcome that, because we are now extending learning from a fixed resolution to one that can be scaled up or down to any resolution. So once you've trained the model, at inference or test time, you can ask the model to come up with predictions at any resolution. And so with that ability, we are...

Prateek Joshi (14:24.597)
Right.

Anima (14:26.782)
removing the requirement that these deep learning models can only learn at one resolution. And we give neural operators these abilities by learning mappings, essentially, between functions. We're no longer thinking of the inputs and outputs as images or videos with a fixed number of pixels. Instead, we are modeling them as functions. Like, if you think of graphics, there is...

standard raster graphics, or there is vector graphics. With vector graphics, you can keep zooming further, because you're modeling the shape itself, or the function itself. And we are doing the same here. So we are modeling the inputs and outputs as functions over some continuous domain, rather than only sampling them at some fixed set of pixels.

And because of that, we now have the ability to infinitely zoom in with our models, and also incorporate any additional physics constraints and other information. So it's not only purely based on the data it's seeing during training; you can add the constraints from physics at a finer resolution than the data that is available. And this way we can get to high-fidelity solvers using this framework.
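
The "vector graphics" analogy can be illustrated with a generic coordinate network: represent a signal as a continuous function of coordinates, so it can be queried on any grid rather than at a fixed set of pixels. This is a hedged sketch of the general function-representation idea only, not the neural operator architecture itself; the network sizes are arbitrary assumptions.

```python
# Represent a signal as a continuous function u(t) via a coordinate MLP,
# queryable at any resolution (the "infinite zoom" idea, in miniature).
import torch
import torch.nn as nn

func = nn.Sequential(             # maps a coordinate t in [0, 1] to a value u(t)
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

coarse_grid = torch.linspace(0, 1, 32).unsqueeze(-1)   # 32 query points
fine_grid = torch.linspace(0, 1, 1024).unsqueeze(-1)   # "zoom in": 1024 points
u_coarse, u_fine = func(coarse_grid), func(fine_grid)  # same function, any grid
print(u_coarse.shape, u_fine.shape)
```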

Prateek Joshi (15:54.357)
Right. I love listening to you, these are such amazing explanations. So maybe taking a slightly higher-level view: the world saw GPT, like GPT-3, GPT-4, where you go in, you ask a question, and it was trained on internet-scale knowledge, and it provides some answer. Now, if you take that concept

Prateek Joshi (16:23.605)
and apply it to physics. What can we expect from a model that knows physics? Meaning, is it just a combination of a lot of data, or is it data plus mathematical equations plus physical realities, where you cannot violate the established, known laws of physics? So what can we expect from such a model?

Anima (16:48.606)
Yeah, that's a great question. I mean, to me, if we are now asking, right, this question of general models: a model that doesn't just understand fluid dynamics or waves but, you know, can model any kind of physical phenomenon. So it's able to, you know, tell you what the simulated behavior is under different conditions, right? And it could even be multi-physics. So you have multiple different

kinds of physical phenomena, modeled by equations coupled together. What it requires is, of course, data, right? At this moment, we don't know how to train models without data. Because you could say, oh, give me a model that solves all of these partial differential equations that model different phenomena,

but the optimization landscape is impossible to tackle. And this is where there was this approach of using physics-informed neural networks, where we just use a neural network to try and optimize the constraint that it should solve this equation, without any data. So that's called a PINN, or physics-informed neural network. And those do not solve harder PDEs, because...

the optimization landscape is too difficult. And that's what our AI revolution with language and vision has shown: you need lots of data. And I don't think that's any different here. But I think we can augment that data in interesting ways to add those laws of physics, right? You don't want to violate conservation laws, if that's something you want to impose in a setting. You want to have symmetries. We see that with

modeling molecules, for instance, right? There's rotational equivariance, meaning if you rotate a molecule, its properties are either invariant to that rotation or are similarly rotated as well. And you want to build that either into the architecture or do the relevant data augmentation. And so this is why it's a lot more nuanced, right? You want to incorporate the right kind of constraints and additional

Anima (19:12.894)
information into the model in addition to data.
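
Anima describes PINNs as optimizing an equation constraint with no data. A minimal sketch of that idea follows, on a deliberately easy toy problem (the 1D ODE du/dt = -u with u(0) = 1, an assumption chosen for brevity); the hard PDEs she mentions make exactly this optimization landscape intractable, which is her point.

```python
# Minimal PINN sketch: penalize the residual of the governing equation at
# randomly sampled collocation points, with no solution data at all.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(256, 1, requires_grad=True)           # collocation points
    u = net(t)
    du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    residual = du_dt + u                                 # enforce du/dt = -u
    ic = net(torch.zeros(1, 1)) - 1.0                    # initial condition u(0)=1
    loss = residual.pow(2).mean() + ic.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, net(t) should approximate exp(-t), learned from the
# physics constraint alone.
```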

Prateek Joshi (19:17.685)
And do we think that once the model is trained and ready, it can help us discover new laws of physics? Or is that too much to expect? What I'm saying is, if we conduct a large number of experiments and feed the data into a model, could it have discovered Newton's first law, for example? Or is that too much to expect?

Anima (19:42.526)
Well, I mean, it depends on the data that is being fed into it, right? So let's say the model saw a lot of data and you ask it to do symbolic regression, and it comes up with Newton's laws of motion; that's very much possible. And lots of people have shown that you can rediscover existing laws. But to discover new ones, first of all, you need data that is of high fidelity.

Why did the Higgs boson take so long, with so much repeated validation and careful design of experiments? You need those error bars. It's a needle in a haystack: is it just noise, or is it a new particle? And that's why it cannot just be an algorithm in isolation. It has to go along with those very carefully done experiments to separate signal and noise.

Prateek Joshi (20:24.021)
No.

Anima (20:38.686)
And so I don't think currently, right, these methods are about discovering new laws, because that requires this very fine data. The other aspect is, you know, okay, maybe not entirely new particles, or ones that violate existing laws, but what about more compact ways to express existing equations? And this is where I think there are different schools of thought. And personally, I believe:

why do we bother with new ways to symbolically describe the world when we can neurally describe the world? And that's what our weather models are doing. They are incorporating all the data that has been historically observed. And indeed, there is knowledge of the physics in there, because that historical data is not just raw observations from satellites; there is some correction with physics-based models, because you need to

fill in the sparsity, you know, because the satellites only have sparse data and you're filling that in. And then the model is digesting it, right? And now, you know, you're not asking it, oh, is there a new law of how the cloud is moving, or is there a new law of how a hurricane is forming? But it knows that implicitly. And we see that with language as well. It's much harder for it to...

accurately state all the rules of grammar or other structure, but it mostly gets it right. And I feel like that is much more approachable, as well as practically relevant, because without going into the symbolic realm, if we stay in the neural realm, we can get very big speedups.
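
The symbolic-regression route she contrasts with the neural one can be shown in a toy form: a sparse linear fit over a library of candidate terms, in the spirit of SINDy-style methods. The feature library, noise level, and threshold below are all illustrative assumptions; the synthetic "measurements" encode F = m·a, and the fit should recover it. As she notes, real discovery is hard precisely because real data comes with noise and error bars.

```python
# Toy symbolic regression: sparse regression over candidate terms recovers
# a known law (F = m * a) from synthetic noisy data.
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(1, 10, 500)
a = rng.uniform(-5, 5, 500)
F = m * a + 0.01 * rng.normal(size=500)        # noisy "measurements"

library = {"m": m, "a": a, "m*a": m * a, "m+a": m + a, "a**2": a ** 2}
X = np.stack(list(library.values()), axis=1)
coef, *_ = np.linalg.lstsq(X, F, rcond=None)
coef[np.abs(coef) < 0.05] = 0.0                # sparsify: keep dominant terms
print({name: round(c, 3) for name, c in zip(library, coef) if c != 0.0})
# The m*a term should survive with coefficient near 1, i.e. F = m a.
```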

Prateek Joshi (22:23.957)
Right. You've published and talked about new AI methods that can help drones land and fly in strong winds. And given the size of drones, winds can cause a huge problem, because it's unstable and it's hard to figure out how to approach that. So can you explain the method you published, and also, in general, why is that

such a big problem? People would have thought, hey, we know how to land planes in pretty tough conditions. So why is this a problem here?

Anima (23:04.798)
Yeah, so when it comes to planes, right, first of all, you know, you can take the cross-section of the aircraft wing to begin with, and there's a well-established set of simulations to do. They're very expensive, but, right, there are CFD packages and large supercomputers to run them. On the other hand, with drones, the designs vary a lot more. And if they are

these quadrotors or different designs, you know, the aerodynamics is, first of all, way more complex to model, because you have these multiple different parts; it's not just the cross-section of a wing. And you also want the drones to have that flexibility of being able to go right near buildings, and other kinds of, what we call, effects, like the ground effect and the effects of walls,

that you wouldn't get a plane to do. And big aircraft are very conservative in terms of how you design them, right? So you don't want them to be anywhere close to that. But with drones, you do want them to be able to be near buildings and so on, and still not give up safety and other requirements. So just at a high level, they are

quite different design problems. But the benefit is, with drones, we now have at Caltech a fan array where we can do experiments, so we can capture real data, right? And then train on that. And that's what we did in that piece of work. And this way we can sidestep the requirement of having CFD solvers, which are, first of all, very slow. So in real time, there's no way you can...

have a CFD solver go predict what's going to happen with the turbulence as the drone is flying and take corrective action. But with these models, we are able to do that, because they're super fast. So you can react and handle turbulent winds. And that's what I think is a game changer.
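
To give a flavor of the general pattern here, the sketch below shows a learned residual aerodynamics model queried inside a control loop. This is a generic, hedged illustration of the "fast learned model replaces slow CFD in the loop" idea only, not the published method's architecture; the input layout, network sizes, and `control` function are all hypothetical.

```python
# Generic pattern: a learned residual wind-force model is cheap enough to
# evaluate inside a high-rate control loop, where CFD would be far too slow.
import torch
import torch.nn as nn

residual_force = nn.Sequential(    # (velocity, wind estimate) -> force correction
    nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 3)
)

def control(velocity, wind, nominal_thrust):
    # Nominal physics-based command plus a learned correction; one forward
    # pass is microseconds, so it fits in the real-time loop.
    with torch.no_grad():
        correction = residual_force(torch.cat([velocity, wind]))
    return nominal_thrust - correction

cmd = control(torch.zeros(3), torch.tensor([3.0, 0.0, 0.0]), torch.zeros(3))
print(cmd)
```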

Prateek Joshi (25:16.085)
Yeah. And you mentioned experiments. In general, when it comes to experimental physics, can AI help us design and optimize experiments so that we can maximize learning? Is that a topic of study? Is it impactful? And in general, is that an area where AI can really make an impact?

Anima (25:41.694)
Yeah, so there are lots of approaches to try and design, with AI, which experiment should be done. But to me, the next level, and the most impactful one, is: what if we could replace those experiments with internal simulation by AI? And it's actually not as completely unapproachable as people may think. For instance, in one of our projects we designed a medical catheter where AI came up with the optimized design.

Prateek Joshi (25:55.829)
Yeah.

Anima (26:11.518)
So we only needed to go to the lab once to validate it and show that it resulted in a hundred-times reduction in bacterial contamination. So we have this ability: if AI can accurately capture the phenomena, in this example fluid dynamics, and how the shape of the catheter will change the fluid dynamics and how bacteria can contaminate and enter the human body,

then we can do away with experiments. And to me, that's the biggest potential for AI: simulating these physical phenomena to ideally replace most of the experiments. And if the AI is somewhat wrong and you do the experiment, you can take that feedback and fine-tune the AI model. So it keeps getting better, even if in the beginning you may need to do a few physical experiments.

And so, just to give a bit more intuition about what we did with that catheter: what we showed was that in the catheter there's fluid flowing out of the human body, and the bacteria tend to swim by the wall of the tube and infect the human body. In fact, this is one of the biggest sources of infections in hospitals today. And we proposed a simple design where we...

added these ridges, or triangular shapes, inside the tube. And now the question is, what is the best such shape, and what does it do? What the shape does is create these vortices. If there are vortices forming as the fluid flows, then the bacteria are not able to swim upstream, because the vortices just push them along. And how do you optimize and create the best and right level of vortices? That's what AI came up with. And this is,

to me, the new paradigm of generating better, optimized designs with AI, but ones that are physically valid and not just a Stable Diffusion dream; ones that can actually work in the real world.
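
The workflow she describes, using a trained AI surrogate in place of lab experiments to search the design space, has a simple generic skeleton. The sketch below is a hedged illustration of that pattern only: `surrogate` stands in for a trained fluid-dynamics model (here it is an untrained placeholder), and the two design parameters are hypothetical, not the paper's actual parameterization.

```python
# Generic surrogate-based design optimization: gradient descent on the design
# parameters against a differentiable AI surrogate of the physics.
import torch
import torch.nn as nn

surrogate = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

design = torch.tensor([1.0, 0.5], requires_grad=True)  # e.g. ridge height, spacing
opt = torch.optim.Adam([design], lr=1e-2)
for _ in range(200):
    score = surrogate(design).squeeze()    # predicted contamination (lower = better)
    opt.zero_grad()
    score.backward()
    opt.step()
print(design.detach())                     # candidate geometry to validate in the lab
```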

Prateek Joshi (28:19.957)
Okay.

Prateek Joshi (28:25.333)
Right. That's an amazing example of a design that solves a specific problem. Let's talk about chemistry for a minute here. Your OrbNet molecular model is able to predict quantum chemistry; it's used to design new drugs and also to help understand COVID-19. So quickly, in simple terms, how does it work? And also,

how does it do it? How does it predict? How does it help with the design?

Anima (29:00.766)
Yeah, absolutely. To give you the intuition, right: think about atoms within a molecule, and also interactions between molecules. So think about how a drug molecule goes and tries to attach itself to parts of a protein. You can model them as forces, like Newtonian forces, you know; there are different velocities, different forces, and how do they bind to one another?

And this is somewhat accurate, but not enough, because there are distances close enough that they're no longer just particles, but wave functions. So this is where the quantum realm comes in, and we need this finer-scale modeling. And I think that's the general theme of scientific modeling as well. You may look at phenomena at a certain scale or resolution,

but then if you zoom in further, there are other effects. And especially in chemistry, in a lot of these situations, incorporating the quantum effects is really important, but incredibly expensive. Think of Schrödinger's equation, which models every property we see, every material we see. Why can't we just run it? Because the brute-force way to solve it directly is incredibly expensive:

something like, if you tried to do that for a molecule with 100 atoms, it would take longer than the age of the universe on current computers. So yes, all our supercomputers are getting better, but when it comes to science, there are always going to be harder and harder problems to solve. And what scientists have done, and in fact some of the Nobel Prize-winning work, is come up with approximations. Again, very ingenious, but hand-engineered approximations

to these equations. And now the question is, can AI learn better surrogates, right? Ones that can be much faster than what these traditional methods can do for quantum chemistry, but still be physically valid in many ways. So in OrbNet, what we did was incorporate symmetry, like these orbital features that have a lot of physical validity. And so it's a hybrid model that is physics-informed: parts of the

Anima (31:21.982)
features are informed by traditional methods, but lots of it, like the graph transformers, are deep learning based. And I think that's where, you know, the benefit is not just working well on molecules that look similar to the training data, but also on much bigger systems, right? For instance, we only trained on molecules of up to 30 or 40 atoms,

but then we can extrapolate those predictions to new molecules that may be 10 times bigger. For example, how the coronavirus binds to calcium ions inside our lungs is something we modeled with these systems. And these are thousands of atoms, much more complex than what would be possible with traditional quantum chemistry. But with AI, we can improve our predictions and still be

computationally feasible.
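
The hybrid pattern she describes, physics-derived features feeding a learned regressor, can be sketched generically. Everything below is a hedged stand-in: `cheap_orbital_features` is a hypothetical placeholder (real systems compute symmetry-respecting orbital features with a low-cost quantum method), and the regressor is a toy, not OrbNet's graph transformer.

```python
# Hybrid surrogate pattern: cheap physics-based features in, learned model out,
# approximating an expensive quantum property.
import torch
import torch.nn as nn

def cheap_orbital_features(positions):
    # Placeholder featurization: pairwise distances only. Real orbital
    # features come from an inexpensive quantum-chemistry calculation.
    d = torch.cdist(positions, positions)
    return d.flatten()

model = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 1))

atoms = torch.randn(3, 3)                        # toy 3-atom molecule (xyz coords)
energy = model(cheap_orbital_features(atoms))    # learned surrogate for the
print(energy)                                    # expensive quantum property
```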

Prateek Joshi (32:20.277)
Right. I have one final question before we go to the rapid-fire round, and it's about technological breakthroughs. Today, what technological breakthroughs in AI are you most excited about?

Anima (32:38.493)
You know, I'm certainly very excited about all the work when it comes to the use of scale, right? We've seen what scale can do in terms of language models and multimodal models, but how do we bring that to universal physical understanding? So not just the physics of the world we can visually see, but of all the world, with its details that are hidden from us. So how do we capture all of that with

scale, but also, I think, with nuanced algorithmic design.

Prateek Joshi (33:15.541)
Amazing. All right. With that, we're at the rapid-fire round. I'll ask a series of questions and would love to hear your answers in 15 seconds or less. You ready? All right. Question number one: what's your favorite book?

Anima (33:23.454)
Oh, it's gonna be... All right, let's try this.

Anima (33:34.43)
Hyperspace by Michio Kaku. It just really inspired me to think beyond, again, the visible world and the three or four dimensions, right, of space and time, to a world with as many dimensions as we want.

Prateek Joshi (33:51.637)
Yeah, I love his books, his writing, his talks. Everything is just so... yeah, that's a great point. I just love Michio Kaku. Next question: what has been an important but overlooked AI trend in the last 12 months?

Anima (34:07.614)
I think that there is data beyond just text and images and videos. That's the most natural data, because it's widely available on the internet, but there is especially data that is scientific, that models different phenomena. And again, the language of math allows us to connect them together. So they're not just disparate datasets; there's a way to bring them together, and that's something I think more people should think about.

Prateek Joshi (34:36.949)
What's the one thing about building AI models for science that most people don't get?

Anima (34:44.766)
That it's really spanning multiple different scales and multiple different domains. A lot of deep tech tends to be like, let's go deep into one problem and solve it. AI is very horizontal, and AI for science is gonna be both vertical and horizontal, because it'll capture multiple different domains but still do very well in each of them.

Prateek Joshi (35:09.653)
What separates great AI products from the merely good ones?

Anima (35:16.286)
I think the ability to really reach deeper into consumers or users and capture their imagination, right? A great AI product isn't just used in the way it was intended, or for its original purpose; there are just so many new, creative ways that people come up with for how it gets used and evolved further.

Prateek Joshi (35:45.109)
What have you changed your mind on recently?

Anima (35:51.806)
Okay, this is always a hard one. I mean, in terms of how close we are to AGI, right? Because the definition of AGI itself keeps moving. And even the few definitions that talk about requiring physical understanding in AGI talk mostly about the visible world, like robotics and

having a three-dimensional view of the world. But I think the fact that you need to capture all of these other complexities, and that the definition of AGI isn't just based on human intelligence, because all of this is already superhuman, means we need to come up with new approaches. And yeah, I don't have an answer to it, but I think there is still a lot of mismatch when people talk about AGI.

Prateek Joshi (36:51.285)
Right. What's your wildest AI prediction for the next four months?

Anima (36:58.686)
Every day, things are changing so quickly. I mean, I think... let me see...

So I don't want to pin down a number, but my prediction is that a model of some very small size, much smaller than what GPT-4 is rumored to be, will beat GPT-4.

Prateek Joshi (37:23.509)
Amazing. Final question. What's your number one advice to people who are building AI products?

Anima (37:34.43)
To think about the customers and the users. What do they want from the model? What are their pain points and bottlenecks? And always be willing to adapt and change.

Prateek Joshi (37:52.149)
Amazing. This has been a brilliant, brilliant episode. I can't wait to publish it, and I'm sure our listeners will enjoy it. Thank you so much for coming onto the show and sharing your insights. In all these episodes, we have never done science like this, and that's why it's super exciting. So thanks again for coming onto the show.

Anima (38:11.806)
Thank you, Prateek, and thank you for helping make this tech mainstream. I think a lot more people need to be aware of it. We need a lot more people working on this, so I hope we get there.