The TechEd Podcast

AI Engineering: The Emerging Field Poised to Secure America’s AI Advantage - Pramod Khargonekar, ERVA Co-Principal Investigator and Vice Chancellor for Research at UC Irvine

Matt Kirchner Episode 220

A new era is emerging where engineering drives AI—and AI transforms engineering

This week Matt Kirchner is joined by Dr. Pramod Khargonekar—Vice Chancellor for Research at UC Irvine and lead author of the ERVA report AI Engineering: A Strategic Research Framework to Benefit Society. Dr. Khargonekar unpacks the emerging discipline of AI Engineering, where engineering principles make AI better, and AI makes engineered systems better.

From robotics and energy systems to engineering education and data sharing, this episode dives into the flywheel effect of AI and engineering co-evolving. Pramod explains the real-world impact, the challenges ahead, and why this moment represents a generational opportunity for U.S. leadership in both innovation and education.

Listen to learn:

  • How AI is changing every branch of engineering—from mechanical to civil to industrial and beyond
  • Why manufacturing, energy, and transportation are ground zero for “physical AI”
  • What the 14 Grand Challenges of AI Engineering reveal about the future of innovation
  • Why systems thinking is the key to building AI products that actually work
  • How colleges must rethink engineering education—and what industry can do to help

3 Big Takeaways from this Episode:

1. AI is transforming every branch of engineering—from design and simulation to manufacturing and operations. Pramod explains how fields like robotics, fluid mechanics, and materials science are being reshaped by tools such as reinforcement learning and foundation models. This shift isn’t just about efficiency—it’s enabling engineers to solve problems they couldn’t approach before.

2. Engineering will play a critical role in advancing the next generation of AI. Pramod highlights how engineering disciplines contribute essential elements like safety, reliability, power systems, and chip design to AI development. These contributions are vital to scaling AI into real-world, physical systems—what he calls “physical AI.”

3. To lead in AI Engineering, higher education must integrate AI into every engineering discipline. Dr. Khargonekar outlines how universities can start with shared foundational courses, then build field-specific AI applications into majors like mechanical or electrical engineering. He also emphasizes the importance of short courses, professional development, and industry partnerships to support lifelong learning.

Resources in this Episode:

Connect with ERVA on Social Media:

X  |  LinkedIn  |  Facebook


Instagram - Facebook - YouTube - TikTok - Twitter - LinkedIn

Matt Kirchner:

Welcome into The TechEd Podcast. I am your host, Matt Kirchner. Every week we do the important work here on the podcast of securing the American Dream for the next generation of STEM and workforce talent. I'm also a huge believer, by the way, that securing that dream for the next generation is not going to look the way that it has for past generations. We are in a world of advancing technology, of so many great innovations taking place in the world of education, across the entire economy. How we secure that dream in the future is going to look different. That is a big part of what we are going to talk about on this episode of the podcast with a really, really distinguished guest. I'm going to take just a moment to talk about this individual's background. His name is Dr. Pramod Khargonekar, and we are really, really fired up to have Pramod on this episode of the podcast. Let's take a look for just a moment at his background. He is a Distinguished Professor of Electrical Engineering and Computer Science and the Vice Chancellor for Research at the University of California, Irvine, called UC Irvine for short, of course. He's the former Assistant Director for Engineering at the National Science Foundation. Think about that: Assistant Director for Engineering at the NSF, leading national strategy on engineering research and workforce development. Currently, he is guiding ERVA, the Engineering Research Visioning Alliance, an organization that brings together academia, industry, and government to shape the future of engineering in America. Finally, he is the lead author of the 2024 ERVA report, AI Engineering: A Strategic Research Framework to Benefit Society. Now I will tell you, having read his background and his bio, that we are just scratching the surface on our guest's experience.
I'm sure we're going to get into a lot of details over the course of our discussion, but for now, let me welcome to the studio of The TechEd Podcast Dr. Pramod Khargonekar. Pramod, it's so awesome to have you here. Thanks for coming in.

Pramod Khargonekar:

Well, Matt, I'm so excited to be on your podcast. Thank you for inviting me. Great to be here.

Matt Kirchner:

It's so awesome to have you. We're going to have an awesome conversation just thinking about all the work you're doing. And as our audience knows, and you will soon know if you don't already, I, like, totally geek out on artificial intelligence, and anybody who's doing really important work in AI, either in industry or education, those are some of my absolute favorite conversations. So I know we're gonna have a lot of fun on this episode. Let's start with this, though: you have had really a front-row seat to the evolution of engineering here in the United States of America. We talked a little bit about your leadership at the National Science Foundation, your current work at UC Irvine, and then, of course, the Engineering Research Visioning Alliance. What has prepared you to lead this incredible conversation around artificial intelligence and engineering? And why do you think, if you do think it's such a pivotal moment, both in engineering education and in technology, why do you feel that way?

Pramod Khargonekar:

Matt, I think you mentioned my background in your introduction. I've been a faculty member in engineering for almost 45 years now.

Matt Kirchner:

Wow, did you start when you were five? You're aging really well, by the way.

Pramod Khargonekar:

Well, thank you. I was dean of engineering at the University of Florida. I was chair of the EECS department at the University of Michigan. You already mentioned my role at the National Science Foundation, which was in the senior leadership of the premier organization that sponsors research and education across the entire United States. So that's sort of my background. As for my own field of work, I'm a control systems person, so it's not an AI field, but it's in what I call an AI-adjacent field; it's very close to AI. I personally have done a little bit of work, so I don't consider myself an AI expert, but I've done some work more recently on AI and control. So that's the disciplinary background I came from. But I think for this particular conversation, it was really my role as co-principal investigator on the NSF-funded ERVA project, the Engineering Research Visioning Alliance. Our mission is to identify, highlight, and articulate high-impact, high-potential areas for the future of engineering. So that's really what we focus on. And the idea for the AI Engineering event actually came within weeks of the ChatGPT release. You remember, in late '22 ChatGPT came out, and within weeks of that, I realized that we in engineering needed to work on this tremendous opportunity that was in front of us. So we started to work on that idea, and we held the event in October of '23. That's how I got into this, and that's the background that prepared me to lead that event.

Matt Kirchner:

Got it. And just a quick question out of my own curiosity: when we talk about controls engineering, I'm by background a manufacturing guy, right? So my experience working around control systems, at least starting out, was primarily in the manufacturing space. Was that your kind of area of expertise and interest, or was it another area of controls that you were involved with?

Pramod Khargonekar:

A great question, Matt. So I started out in theory; I mean, I'm still a theorist. I did the mathematical theory of control. In fact, my advisor was somebody whose name you might know: Rudolf Kalman, of Kalman filter fame. I was one of his last PhD students. So that's the background I came from. But especially when I was at the University of Michigan, I got heavily involved in control of semiconductor manufacturing. We in fact had an NSF Engineering Research Center on what we called Reconfigurable Manufacturing Systems, RMS. I was involved in that ERC, so I have had quite a bit of exposure to control as it related to manufacturing. But then I've also done control for power grids and smart grids and renewable integration, so many different applications of control.

Matt Kirchner:

Understood: manufacturing and energy and so on. I mean, we could probably do a whole podcast just on controls technology and engineering and the evolution there, but we'll stick to the AI topic for now. We've heard this category called AI Engineering. So talk about what AI Engineering is, and how would it differ from maybe a traditional AI or computer science field or focus?

Pramod Khargonekar:

Right. So first, let me get one thing that is potentially confusing out of the way. This term AI engineering is sometimes used to refer to a job category of computer systems and software engineers who, you know, build and deploy AI products. It's a job category that people specialize in. But our definition of AI Engineering was much broader, and it refers to this convergence and bidirectional interplay between AI and engineering. The idea behind AI Engineering came out of that event I mentioned, which was done under ERVA sponsorship, and what we realized is that there is a virtuous cycle where engineering helps AI, and AI helps engineering. And this virtuous cycle, Matt, becomes very real when you think about physical AI. I don't know if you have seen Jensen Huang of Nvidia talk about, you know, his vision for the future. He says we are right now in agentic AI; next is physical AI. So when you think of physical AI, this AI and engineering convergence, in my opinion, is going to be absolutely crucial. And so the fundamental idea of AI Engineering that we articulated was AI helping engineering, and engineering helping AI: engineering writ large, so engineering in all different fields benefiting from AI, and different engineering fields allowing us to build more useful, safer, more reliable, more productive AI systems, particularly physical AI systems.

Matt Kirchner:

Let's talk about that. I think most of our audience, whether they're familiar with the term agentic AI or not, know we're creating AI agents: some version of software that is using artificial intelligence and leveraging a large language model to solve one problem or another. And there are now tens of thousands, if not more, AI agents that people have developed. You can think about ChatGPT as just one of thousands of examples. When you talk about physical AI, or when Jensen Huang talks about physical AI, give us a definition to help us understand what you mean by that.

Pramod Khargonekar:

My take on physical AI, which is something that is happening and is going to happen over time, is AI integrated into physical systems. So think manufacturing, which is your background: AI getting integrated, improving all aspects of manufacturing. You can say the same thing about power systems and grids. You can say the same thing about transportation, whether that's self-driving cars or the transportation system that controls my life in Southern California, right? It's like, okay, two-hour traffic jams. So when I think of physical AI, I think of AI integrating into the physical world in which we live and operate. It also includes AI in healthcare systems. That's what physical AI means to me. The other piece that we in AI Engineering talked about is how engineering is crucial to advancing AI. A great example is semiconductors: you cannot imagine doing today's AI without state-of-the-art graphics chips. That's all engineering. But it's not just that. There's the whole issue of power and energy for AI data centers. Another aspect of physical AI is that AI is not just information processing; it is powered by actual physical energy. And there is the whole issue of data that's going to drive the development of physical AI. Where is the data going to come from? What has happened is, you know, we have used all the internet data and things like that to train AI until now. But when it goes to the physical world, and we can talk more about that, we are going to need data from the physical world. So that's going to be another piece of this AI Engineering that we see ahead of us.

Matt Kirchner:

You know, I knew I was going to love this conversation when we were fortunate enough to have you join us, Pramod. But I will tell you, some of what you just said, and our audience will recognize as well my fascination, some might even say an infatuation, with the whole idea of the edge-to-cloud continuum, is kind of what you're talking about there. We've been talking about, in manufacturing, smart sensors and smart devices on the edge, communicating with each other, making decisions on the edge, not necessarily having to communicate, in my case, with a programmable logic controller or a control system or a computer network. In manufacturing, that's allowing the proliferation of data acquisition out on the edge and all kinds of really cool decision making and advancing manufacturing. And then we talk about that data going from there to a network, to the fog, to a regional, more local data center, to the cloud, and using the data, and then AI, for all kinds of predictive analytics and so on. I mean, that's the world we're living in today, and super advanced manufacturing companies are leveraging that technology today. And then we say, look, we can have that same continuum whether it's a smartphone, whether it is in healthcare, whether it's in energy, whether it's in defense, hospitality, retail; it doesn't matter, that same continuum is happening. And what I'm hearing, and you can tell me if I'm getting this right, is about when we talk about physical AI and then the role of engineering. Traditionally, in a lot of engineering disciplines, we think about them working with the physical, whether it's a manufacturing operation, whether it's a smart grid, whether it's a power plant, what have you; working on the theoretical side, but then applying that theory to physical assets and to design and to upkeep and maintenance and all these other aspects of whatever that market space is.
What we're really saying, if I'm understanding you right, is: look, in the age of artificial intelligence, and you think about maybe somebody's most familiar with something like ChatGPT, we're going to have as much focus on how that integrates with physical assets and physical items in our economy and in the world in which we live. And our engineers, whether they are electrical, mechanical, industrial, aerospace, biomedical, whatever, now they've got to have this really tight relationship with artificial intelligence, because, to your point, AI is going to influence engineering, engineering is going to influence AI. We're going to create this flywheel, which can be really, really exciting, but only if we've got engineers in that loop who understand how AI is going to change our physical world and vice versa. And so now we've got a whole new way of thinking about engineering. That was a long soliloquy on my part to get to the point. But is that the way we're thinking about this? And am I getting that right?

Pramod Khargonekar:

Matt, you got it perfectly. In fact, you don't need me; you can explain all these things, because you understand it absolutely perfectly, Matt.

Matt Kirchner:

Well, somebody's got to be doing all the incredible work and research and promulgating all this incredible information around the concept. And you know, I would say I get to ask the questions and make some comments, but brains much more advanced than mine are the ones that are really doing the cool work, and I would include you front and center in that work, Pramod. So let's talk about the report that I mentioned early on in the conversation as we were introducing you to our audience. It describes AI Engineering as two things: bidirectional and reciprocal. So engineering makes AI better; AI makes engineered systems better. You mentioned a little bit about that in your last answer, but let's go a little bit deeper. How are you thinking about that? And if I'm an engineer in the field, or I'm a student in an undergraduate or graduate or PhD engineering program, how do I need to be thinking about that flywheel and connection?

Pramod Khargonekar:

Yeah, so it's bidirectional: AI improves engineering, and engineering improves AI. So let's start with AI improves engineering. We see AI improving all aspects of engineering, all the way from fundamental things that you learn in your undergraduate engineering classes, to analysis of products and services, to design, to manufacturing, to operating these products in the real world. You know, a lot of our engineered systems are operating in the real world, whether that's your car or your power grid, and even end of life, recycling and all of that. So we think that AI can help improve every aspect of this and can lead to greater efficiency, reduced cost, customizability, time to market. It has implications for all aspects of what we as engineers do in our different domains. So that's one direction, and ultimately, I think it can improve our products, reduce their cost, and improve the overall sustainability of the whole engineered world. On the reverse side, we think that engineering fields have something really substantial to bring to help improve AI. One of the things we made a point of, and this is not to disrespect people from computer science and from AI fields: one thing that we engineers take extremely seriously is safety. It's sort of built into our ethic, right? We never field a product that's unsafe. We take that as a professional ethical responsibility. But it's beyond just that commitment to safety. We also have intellectual traditions like information theory, control theory, signal processing, communications; all of these things are really important for next-generation AI. Take a field like reinforcement learning, which really is a lot of stochastic optimal control, something that I learned as a graduate student, right? And so we think there is a whole bunch of fields in engineering that improve AI.
I already talked about the whole chip design area, right? Today, semiconductor design is changing because of the use of machine learning and AI algorithms. You use advanced machine learning and AI algorithms to design the next generation of chips, which in turn enable the next generation of AI; so there is that virtuous cycle in semiconductors. We talked about power: right now, having power supply to these big AI data centers is a huge bottleneck, so we think that we are going to need advanced power systems engineering to enable the next generation of AI data centers. So that's that direction of flow: engineering improving AI, enabling AI. And I already talked about AI enabling engineering. There is one example I want to use, Matt, to show another instance of a virtuous cycle, and I think that's around data. As we know, AI systems require large amounts of data to train the models and use them, et cetera. We think that for integration of AI into the physical world, we are going to need data, and that data is going to come from engineered systems. Take a manufacturing system, in which you have deep background: if you're going to incorporate AI into a manufacturing system, in any part of it, you're going to need data, and that data is going to come from the physical manufacturing system. So there is that virtuous cycle again: AI can enable data collection, and the data collection enables advanced AI.

Matt Kirchner:

Absolutely. And it's really, really exciting for anybody in the field of engineering. I think about, first of all, you mentioned the term reinforcement learning, which I just think is, like, the coolest thing when folks hear about it. Reinforcement learning, of course, is a subtopic, or a subset, of machine learning, which is a subset of artificial intelligence. And the whole idea is that we're giving an AI a little reward, so as it gets closer and closer to the result that we want to see from it, it's getting that positive reinforcement, almost like a child who's getting positive reinforcement from a parent, so that the AI learns over time and gets closer and closer to our ultimate goal, whether that's optimizing a power plant, whether that's perfecting or optimizing material flow through a manufacturing environment, whether it's optimizing the activity of an autonomous vehicle. Reinforcement learning is really, really important in that aspect of it. And that is just one subset of artificial intelligence. Then think about the incredible amount of data that, for instance, a machine learning algorithm, in the case of reinforcement learning, is going to need, and how do we get all that data? Take the example I always use. I'm sure you've been in a Waymo car. Have you ridden in a self-driving vehicle yet? Not yet? Oh! So I was in Phoenix a year and a half ago, and I was running late for a meeting, and I jumped on my Uber app and ordered up a car, and it said, do you accept a ride from an autonomous vehicle? I didn't have time to, like, argue with the Uber app, so I was like, fine, you know, I'll take the ride. And this self-driving car shows up and picks me up along the curb and drives me three miles to my next meeting, and there's no driver, and we're driving down the city streets of Phoenix, Arizona with no driver in the car.

And you start to think about why that works, and all the data that that car is collecting as it's driving around: what's a mailbox, what's a person, what's the weather doing, what are the traffic signals doing; sending all that information to probably a local data center and then ultimately to the cloud. But all the cars are doing that at the same time, gathering all that data, so all the cars are as smart as every one of the individual cars is at any given point in time. That's how we leverage AI. That is a little bit of, you know, what we're talking about with gathering data in the physical world and then having an AI learn from the data that we're gathering. So we put an engineer in the middle of that entire conversation, or that entire process. How do we think about that? If I'm an engineer and I'm in mechanical, I'm in civil, I'm in electrical, what have you: how is that evolution, both in terms of data and in terms of artificial intelligence and so on, changing my role? And what do I need to be doing differently as we look to the future than maybe I was doing five or 10 years ago?
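[Editor's aside: the reward-driven learning loop Matt describes here, an agent nudged toward a goal by positive reinforcement, can be sketched with tabular Q-learning on a toy problem. Everything below (the five-state world, the reward of 1.0 at the goal, the learning parameters) is illustrative and not from the episode.]

```python
import random

# A toy 1-D world: states 0..4, goal at state 4, actions -1 (left) / +1 (right).
# The agent gets a reward of 1.0 only when it reaches the goal, and over many
# episodes it learns which action at each state moves it toward that reward.

def train_q(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    n_states, goal = 5, 4
    q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Explore occasionally; otherwise take the best-known action
            if rng.random() < epsilon:
                a = rng.choice((-1, +1))
            else:
                a = max((-1, +1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), goal)
            r = 1.0 if s2 == goal else 0.0
            # Temporal-difference update: nudge the estimate toward the reward
            # plus the discounted value of the best next action
            best_next = 0.0 if s2 == goal else max(q[(s2, -1)], q[(s2, +1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train_q()
# The learned greedy action at states 0..3: "move right, toward the reward"
policy = [max((-1, +1), key=lambda act: q[(s, act)]) for s in range(4)]
print(policy)
```

The "reward as positive reinforcement" idea is exactly the temporal-difference update above: actions that lead toward the reward accumulate higher value estimates, so the greedy policy drifts toward the goal.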

Pramod Khargonekar:

So, Matt, as we see it, almost all fields of engineering are going to change; they are already changing, and are going to change even more because of the AI and machine learning advances. Let's just take some examples. I would say robotics is like a poster child for what we are thinking. So robotics: you know, mechanical engineering, electrical engineering, computer engineering, all of these fields are crucial to robotics. But if you look at what's happening in robotics right now, things like reinforcement learning and, more recently, foundation models are being used to make these robots much more dexterous, able to do things out of the box that they have never been trained to do. Just absolutely amazing advances are happening in robotics. So if I were, say, a mechanical engineer specializing in robotics, I think the first thing I would need to do is make sure I'm at the cutting edge of RL or foundation models, like what Gemini Robotics released a couple of months ago as a design suite for next-generation robots. I think that reality is coming. To me, that's an example of how a mainstream engineering topic like robotics is transforming in front of our eyes. But let's take autonomous cars; you mentioned Waymo and all of that. If you were a mechanical engineer inspired by cars, like a lot of people in Michigan were, right? They all grew up around Ford and Chrysler, and they all wanted to become auto engineers. Well, if you're going to be an automotive engineer for tomorrow, you need to be thinking about AI and machine learning. Same thing with aerospace: all aspects of aerospace are changing, like uncrewed drones and space travel. Let's go even beyond these clear examples. Take a classical field like fluid mechanics, something where we teach, you know, the Navier-Stokes equations and how you solve all these things with numerical computations.
I mean, there are conferences and workshops happening as I speak to you on how we can bring AI and machine learning tools and data-driven approaches to advancing fields like fluid mechanics or thermodynamics, you name it, and it's all potentially advancing now. It could be many things. It could be that we do the things that we have been doing, but do them more efficiently, faster, cheaper. But it could also be that we solve problems that we haven't been able to solve before. I think both are possible. If I look to the next five or 10 years, I would think that we are going to see really great examples of advances of both kinds, even in fundamental fields. So every field of engineering is changing, both in terms of its practice in the real world and the fundamentals that we teach our students and that graduate students do their PhDs in. All of these have the potential of being changed because of AI.

Matt Kirchner:

You know, I think you make a really interesting point. If I had just thought through this conversation in advance, all of my thinking would have been on the physical side. And we've talked about energy; we've talked about the huge demand for energy that our data centers are going to require. We've talked about autonomous vehicles, space travel, drones. First of all, imagine 20 years ago thinking that 20 years from then we'd be having a conversation with words like that, as if they were commonplace, and now here they are. If nothing else, it's an indication of just the incredible times we're living in. But I would have thought all about those physical assets and how they relate to artificial intelligence. What probably wouldn't have come to mind, which I think is really fascinating, is the theoretical side of engineering. So whether it's fluid mechanics, and that can be everything from water flow to hydraulics to pneumatics and so on, or the theory on the side of temperature control, or physics, or any of these aspects of engineering: when we think about the traditional engineering pathway, these are things that engineers just learn, right? They've got their pathway, their course sequence, in the different aspects of engineering, in the physical world and so on, and the science that they need to learn in order to be effective and good engineers. There's a whole AI side of that as well that I hadn't even thought about. Have you read, by the way, Kissinger's Genesis yet? Yeah, yeah. There are some really, really good books on AI, and I've read most of them, or at least a lot of them. That one, especially for somebody who, like myself, isn't a PhD engineer (I'm just a guy that's interested in AI), really brings a lot of those concepts down to earth.

But it talks about the future of materials science, and how AI is going to be involved in the development of materials that are lighter weight and more sustainable, with way more applications that are just going to happen because AIs are working on this, obviously with an engineer in the loop somewhere. All of those aspects of engineering are going to change. So am I right in understanding it's not just the physical world, but how we deliver learning in engineering, how we prepare our engineers for the theoretical side? Do you think it's equally important? More important? How do you feel about that?

Pramod Khargonekar:

I think it is certainly equally important. Of course, the future will tell to what extent it changes the fundamentals of what we teach. One direction seems very clear: the role of data is becoming greater and greater. You know, when I got my training, you had to get closed-form solutions to problems in order for them to be considered solved, right? And then we changed from those closed-form solutions to: okay, if you can give me an algorithm to compute the answer, then it's okay, and I will run this on my, you know, IBM computer, or the desktop computer. So we changed our definition of what it means to solve a problem. If I'm open to all possibilities, this new revolution that is upon us will also alter how and what we mean by having solved a problem. And so I think we have to be open to the idea that fundamentals could change quite significantly. Of course, the equations won't change, right? But what we do with those equations and how we deal with them, and how that allows us to deal with the world in better ways, I think that has every potential to change. An example is reuse of knowledge. Let's think about design, right? We would all love to do more reuse of design that we have already done, because if you can reuse it, then you short-circuit so much, right? You don't have to develop a new process, don't have to do new manufacturing. And today it is less than perfect, right? Our reuse of existing design is constrained by how the data is stored and how the data is reused. I think all of that could potentially change. So yeah, the fundamental physics is not about to change, but how we deal with it is going to change. To me, an inspiring example is AlphaFold. Protein folding was this huge problem. I heard about it back in the 90s.
I never worked on it myself, but I heard about it, and a lot of very, very smart mathematicians, computer scientists, and biologists worked on it. Not too much progress.

Matt Kirchner:

Set that up for our audience, if you will, Pramod. When you reference that today, help them understand, in layman's terms, what it is that you're referencing, and then go on.

Pramod Khargonekar:

Okay. So protein folding is this problem: a protein is a three-dimensional structure, but you can describe it as a linear chain of molecules. That linear structure folds into a really exquisite three-dimensional pattern, driven by the molecular forces of attraction and repulsion, because it's all electrically charged, right? And so this thing will fold into this beautiful three-dimensional shape that controls our biology. So here is the fundamental challenge in protein folding: if I gave you the structure of the protein as a linear array of molecules laid out on, say, a table, and you just let it go, how will it fold into a three-dimensional pattern? Can you predict this? Because if you can predict it, it affects drug discovery and drug targeting and all of that. So that was the fundamental problem; you can call it a math problem or a chemistry problem. For 30 or 40 years, people worked on it, but it really didn't happen. And then Demis Hassabis, who won the Nobel Prize for it along with a colleague at DeepMind, applied the modern RL-type algorithms that they had used for AlphaGo, for beating the world champion in the game of Go; they used similar ideas to solve protein folding with AlphaFold. So that's an inspirational example. I think those are stories we want to repeat in other parts of engineering.

Matt Kirchner:

yeah, absolutely. And I mean, to me, that is just so, so exciting. And you think about the human benefit of being able to target drugs: I have people who are really close to me in my life who have benefited tremendously from those advancements, in many cases advancements that never would have been possible without AI and without tremendous research, and that we wouldn't have even been talking about 10 or 15 years ago. And we're just scratching the surface. I mean, we're just getting started on that aspect of the applications for artificial intelligence, which could be an episode in and of itself, and also something to be really excited about. None of this happens on autopilot. None of it happens easily. I know in the report that you authored, you identified 14 of what you call Grand Challenges in AI engineering. So let's talk a little bit. If we had an hour, I would love to sit down with a piece of paper and see how many of them I could write down on my own, but tell us, what are some of those grand challenges? Give us a couple examples.

Pramod Khargonekar:

So, like my kids, I love both of them equally, and I love all 14 equally, but you asked me to pick a couple. You know, if I had to pick just one out of 14, I would say manufacturing, and we have talked enough about it, so I won't go into manufacturing, but to me the potential scale there, the impact, is just incredible. A really unusual one, Matt, had to do with rare events. We know that in engineering, we worry about very rare but catastrophic possibilities, like a bridge failing. So we talk about how, in these AI engineering systems, we are going to worry about rare events, because machine learning is data driven, and for rare events we are not going to have lots of data, right? So that is, to us, an intellectual Grand Challenge: how are we going to deal with rarity of failures in a data-driven, machine-learning, AI-driven world? Sure, we have some initial ideas on how to do this, but we think that's a fundamental Grand Challenge, an intellectual Grand Challenge. Let

Matt Kirchner:

me just ask, is that because it's an anomaly and it doesn't fit a normal pattern in data? Why would a rare event, what aspect of it, make it a challenge?

Pramod Khargonekar:

just that it won't be in your training set, right? Got it. So if it's not in your training set, well, AI is going to have a little bit harder time dealing with those things which are not in your training data. So that's the fundamental problem: they are rare, and so they don't show up in the data you would use to deal with them, sure. So let

Matt Kirchner:

me take that just one step further, because this is interesting to me. Is it fair for me to assume that, a, it's not going to show up in your data, so it's that much harder to predict? On the other hand, once it does happen, and if we do have the data that informed or influenced that rare event, we build that into our LLM, and we're that much more likely, the next time at least, to be able to predict where that may be a risk. Is that part of the thought process? Yeah, absolutely.

Pramod Khargonekar:

You know, those are the kinds of thoughts that go into what we articulate in the Grand Challenge: yeah, it is a big problem, here are some initial ideas, and we reference some papers that are coming out dealing with this. Got it so, yeah. You are on the right track. I mean, that would be one approach to doing it, but you can also imagine the use of simulations, simulating extreme conditions, and using a combination of real data and simulated data. So there are other possibilities for dealing with this. Again, I don't think we have the answer, which is why it's a grand challenge, exactly, but we think it's an important one. Got it. Another one that I want to highlight, a third Grand Challenge, is data. You know, in other fields of science, astronomy comes to mind, chemistry comes to mind, there is a tradition of shared data. In fact, the only data they have is the shared data, right? In engineering, sharing of data sets is very uncommon, and there are many reasons: proprietary concerns, business value and profit don't allow for data sharing, or it's not easy to do data sharing. But we think that the lack of shared data to advance the engineering fields is a crucial bottleneck, and it's going to require leadership from government and from industry organizations to solve it. So we kind of put a finger on shared data being a bottleneck Grand Challenge,
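The idea Pramod sketches here, covering rare events by blending real operating data with simulated extreme conditions, can be illustrated in a few lines. This is a hypothetical toy, not anything from the ERVA report: a simple threshold detector is fit only on "normal" data, and simulated extremes stand in for the failures the real data set will never contain.

```python
import random
import statistics

random.seed(0)

# "Real" sensor data: normal operating conditions only. Rare failures
# are absent, just as they would be in a field data set.
real = [random.gauss(100.0, 5.0) for _ in range(500)]

# Simulated extreme conditions fill the gap the real data can't cover,
# e.g. values a physics-based model predicts for a failing component.
simulated_extremes = [random.gauss(150.0, 10.0) for _ in range(50)]

# Fit a simple threshold on the normal data: flag anything more than
# 4 standard deviations above the mean of normal operation.
mu = statistics.mean(real)
sigma = statistics.stdev(real)
threshold = mu + 4 * sigma

def is_rare_event(x: float) -> bool:
    """Flag a reading as a candidate rare event."""
    return x > threshold

# The detector never saw a real failure, but the simulated extremes
# let us check that it would catch one.
caught = sum(is_rare_event(x) for x in simulated_extremes)
print(f"threshold={threshold:.1f}, caught {caught}/{len(simulated_extremes)} simulated extremes")
```

A real system would use far richer models than a z-score threshold, but the division of labor is the same: real data defines "normal," simulation supplies the rare tail.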

Matt Kirchner:

interesting. And, you know, that's one of the topics that I put a fair amount of thought into. In fact, I just had a meeting earlier this week with a group of healthcare providers, talking about AI and applications of AI in healthcare, where obviously there are huge use cases for AI, and many of them are already being executed on. But that is a challenge, right, in terms of how you share healthcare data. It's one thing if you're a hospital system that's generating $20 billion a year in revenue, or whatever, and you've got all this infrastructure built into your organization. A lot of healthcare providers, whether it's a nursing home, a mental health clinic, or whatever, are much, much smaller, by orders of magnitude, in terms of their business model. They probably could benefit the most from applications of AI, but are also most at risk in how they share their data, how they protect it, how they create a data set that allows us to leverage the learning without disclosing an individual's private healthcare data, as an example. Are you hearing any examples of how organizations are working around that, or what might be a potential solution to that grand challenge?

Pramod Khargonekar:

We thought that materials science is one of the first fields where this problem has been talked about. You know, when I was at NSF, we started the Materials Genome Initiative, sort of inspired by the Human Genome Project. And the idea was, can we get different investigators to share data? So we put together tools and formats for people to share data out of their labs. It's a hard problem, because experimental conditions are different across labs. It's a difficult problem. It's not unsolvable, sure, but it requires collaboration across different kinds of institutions, funding agencies, et cetera. So I think that's one example where we thought some amount of progress has been made in terms of sharing of data. We think industry organizations and professional organizations have a role to play, because they are kind of neutral convening bodies. So, you know, in your world of manufacturing, right, the Society of Manufacturing Engineers: could they play a role in anonymized or de-identified data for manufacturing systems, so that no single company can solve the problem, but taken together, maybe we can really create very advanced tools? Government can play a role when it sponsors research projects, to encourage data sharing across companies and academia. So those are some of the ideas that we thought of. But this is not going to happen bottom-up. It's going to require leadership from the top,
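The de-identified sharing Pramod describes, with a neutral convening body holding the key, can be sketched very simply. The field names, values, and salt below are all hypothetical, a sketch of the idea rather than any real scheme: the engineering signal is kept, and direct identifiers are replaced with salted one-way tags.

```python
import hashlib

# Hypothetical raw records from two member companies; field names are
# illustrative, not from any real data set.
records = [
    {"company": "Acme Corp", "machine_id": "M-101", "temp_c": 71.2, "defect": 0},
    {"company": "Beta Industries", "machine_id": "M-7", "temp_c": 88.5, "defect": 1},
]

SALT = "convener-secret-salt"  # held only by the neutral convening body

def pseudonymize(value: str) -> str:
    """One-way tag: consistent across records, not reversible to a name."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

def de_identify(rec: dict) -> dict:
    # Keep the engineering signal (process variables, outcomes);
    # replace direct identifiers with salted hash tags.
    return {
        "company_tag": pseudonymize(rec["company"]),
        "machine_tag": pseudonymize(rec["machine_id"]),
        "temp_c": rec["temp_c"],
        "defect": rec["defect"],
    }

shared = [de_identify(r) for r in records]
print(shared[0])
```

Because the tags are consistent, researchers pooling the shared set can still group records by (anonymous) company or machine, which is exactly what cross-company model building needs.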

Matt Kirchner:

absolutely. And that's one of the things, you know, you think about here in the United States of America, as much as we're focused on securing the American Dream for the next generation of STEM and workforce talent, and our whole system, almost a conversation about capitalism, about how we innovate within private companies. In other parts of the world, sometimes folks don't have an appreciation for, democratized is probably the wrong word for it, maybe completely the wrong word, but for how open some societies are in terms of sharing information and not protecting IP. There might be some real benefits there in terms of creating larger data sets more quickly, as opposed to here in the United States, where we so value innovation, engineering, ownership of ideas, ownership of technology and improvements, and being able to leverage that into financial and other benefit for individuals and companies. It's just going to be a really fascinating thing to watch over the course of probably the next several decades, as we really try to find more and more ways to leverage data while still protecting what makes our economy here in the United States work the way that it does and makes us such an innovative nation. I know one of the things that's going to be required, certainly in the past and even more so in the future, from an innovation standpoint, Pramod, is this whole idea of systems thinking, right? I'm a big believer in figuring out and making sure that people understand how what they do fits into a larger system, into a larger process. We're not just working on individual components; there's a goal, there's a system, there's a loop. It's a loaded question, because I know you're a huge believer in systems thinking, but talk to us about the importance of systems thinking, especially as we move into this new age of the flywheel between AI and engineering? Well,

Pramod Khargonekar:

I think systems thinking is absolutely crucial. And I would even argue that the reason we don't have a killer app for AI is because we haven't thought in terms of systems. So, you know, ChatGPT is a consumer product; it has its own sort of dynamic, so I won't go there. But anything else is going to be AI inside a system, AI alongside other technologies, to build a system that's useful for human beings, right? So I would say systems thinking is absolutely crucial to create those really high-value products and services that people are going to benefit from. And there, I think it goes beyond the engineered system or engineered product. We really have to be thinking about the product in the wild, working with humans. So this human-technology interface, how humans experience a product, an engineered product, and you can think of a self-driving car as an example of that human-technology interface, I think is going to be absolutely crucial,

Matt Kirchner:

for sure. So the whole idea of the human in the loop: we've got an individual in this system who is interacting with physical systems, interacting, you know, behind the scenes, with artificial intelligence, and then being part of that overall experience. Am I getting that right? Absolutely,

Pramod Khargonekar:

and that's the kind of thinking we need, and that is only going to come from systems-level thinking,

Matt Kirchner:

and it's going to come from all these different disciplines of engineering as well, you know. You think about, with my time in engineering programs at universities, my time working primarily with electrical, systems, controls, and industrial engineers, manufacturing engineers, quality engineers: in manufacturing we've built up, in some ways, some silos between those different engineering disciplines, and in certain cases some organizations are better at this than others. Talk to us about how we're going to see maybe a convergence of some of those disciplines, and then about partnerships and research between individual engineering disciplines, between organizations and educational institutions, and between private employers. Are we going to see more interaction and reliance and overlap between all those different disciplines and organizations?

Pramod Khargonekar:

I think absolutely yes. In fact, you know, we talked about systems-level thinking; well, that's going to require interdisciplinary collaborations. And in both AI for engineering and engineering for AI, we talked about how core fields like controls or signal processing or information theory are all going to have to integrate with machine learning and AI fields, and we talked about thermal and fluid sciences too. So I think you're going to need interdisciplinary collaboration at the level of fields and individual researchers, but I think we are also going to require collaboration at organizational levels, whether that's departments in a college of engineering or in a university writ large, between university and private sector, between government, universities, and private industry. So we are going to need collaborations across people and fields, and across organizational boundaries, across our entire society, and I think that's the only way we are going to realize the biggest benefits of this new technology, enabling the next generation of products and services for our people. There's

Matt Kirchner:

no question. And you see the convergence. We see it again, not to keep going back to manufacturing examples, but I have so many of them, in the convergence between IT and OT, between information technology and operations technology, in manufacturing. Those used to have huge walls between them. You know, IT never wanted to have anything to do with the manufacturing floor; the manufacturing floor wanted nothing to do with what was going on in the computer servers. Never the twain shall meet. And now we're seeing tremendous overlap there. We're going to start to see more overlap between disciplines in engineering, especially on the education side, but across engineering disciplines, private, public, and educational. Now we start to think about, how do we prepare the next generation of engineering talent for this new world? You and I can talk all day about how this is coming and how we have to prepare for it, but what does it mean to actually prepare for it? I spend a lot of time around engineering programs, and you see every different version, right? We see some universities at this point that are creating separate disciplines around artificial intelligence, separate engineering programs, maybe connected to data science or computer science or something in that realm, and that's fine. But I know you and I agree that that's not going to be enough, that we really have to drive AI technology and AI thinking into every discipline. How do we do that in a university where we're still teaching traditional electrical or mechanical engineering, when we all know that academia can be, especially at the four-year and above level, a little bit slow to change? How do we drive that transformation in undergraduate engineering? I think

Pramod Khargonekar:

that's really, really crucial. And what we think is, at a minimum, every engineering undergraduate needs to have at least one exposure course to AI, and that would be the full gamut: analytic, predictive, generative AI. So cover the whole thing, at least once, and in many, many fields you will need more than one course to expose all the students to the fundamentals of AI, how the field is evolving, and how it is impacting traditional engineering disciplines. Now, I can imagine doing this through a common course for all engineering students, but you can also do it through, for some majors, a deeper course, and for other majors a different offering to expose them to AI. And I think we have done similar things in computing or statistics, you know, how we meet the statistics requirement or the computing requirement. I think we are going to have to take creative approaches to exposing students to AI fields depending on their major. But beyond that minimum requirement, which is like a literacy-type requirement, we are going to need deeper exposure depending on the field. So if you're a robotics major, you do a collection of courses; if you're a power systems person, you do a different one. So I think we're going to have to do some mix and match through a collection of courses. And here, in terms of the reality of an engineering school, we have the limit on the number of credits, so we have to get around that to free up space for AI courses; it's a huge barrier, so we're going to have to work on that, with the accreditation, the ABET type of constraint. But I also really think that collaboration between engineering departments and computer science departments is going to be crucial to creating this kind of flexible curricula, so that students can come up to speed for their professional future. And I also think there is a real opportunity here for industry to play a very big role.
Industry can offer data sets, can offer real-world examples of how AI is changing that industry. Almost every engineering college has local industry around it, so I think this is a really big opportunity for deeper connection between industry and university to educate the next generation of engineers who will create the products and services of the future.

Matt Kirchner:

You know, I think about a manufacturer, for example, or an employer that's partnering with a university, and a lot of times we might think about sitting on an advisory board. We might think about some of the manufacturing engineering programs donating materials, maybe donating a piece of equipment that the students can learn on. It occurs to me, from something you just said, and I have a couple other follow-ups to your last answer, that we're almost in an age where we have to think about donating data in the same way. As important as it is to donate equipment, to make financial gifts to a university, to endow a scholarship, all those things, none of it's going away, it's all still going to be really, really important, but making data available, that's something that I hadn't really thought about in that way. I really like that aspect of it. I also think about what you said. And again, I love anybody who is disrupting the traditional model of education, for all kinds of reasons; I think it's just absolutely necessary. And I do get, especially on engineering programs, two objections, and you mentioned both of them, which is just interesting. Number one is, well, ABET says we can't. And if you ask ABET, they will say, we actually never tell anybody they can't do it. We just say, you have to show us how you're meeting requirements for accreditation, and we'll work on that together. We've talked to the folks at the highest levels of ABET about that, and so they're a little more flexible than a lot of people give them credit for, although it certainly can be a hurdle. The second objection we hear is, well, I've already got 120 credits; what do you want me to take away? And it's a real problem, right? I mean, something's gotta give; you can't just add content and add learning to a program. Something's gotta be squeezed out.
I heard a little bit about, hey, maybe we just need to rethink how we're doing statistics and computer science as part of some of those programs. Is that one way, I guess: rethinking the current courses that are already in a degree program, and how we deliver some of the same learning outcomes, but do it with a bent toward AI? Yeah. I mean,

Pramod Khargonekar:

I think that definitely is one path to follow, but also kind of rethink what is a must-have in terms of an engineering degree curriculum, and free up a little bit there. Because one thing we know: we cannot teach students everything that they need to know, right? We can only teach them how to learn. That's been my principle. You don't teach them everything; you just teach them to learn on their own. But if you take that point of view, AI actually becomes a big enabler, right? If you become really good at it, you can learn on your own. So I think there are degrees of freedom that we haven't thought of.

Matt Kirchner:

There's no question. And I think opening our minds to what could be, it's almost like zero-based budgeting, right? Rather than saying, here's what we have, what do we change, it's: let's start with a blank piece of paper and say, all right, what does the engineer of the next five or 10 years need to be really good at, need to understand, need to know? And let's build a program around that. That starts to answer some of the questions about the evolution in higher education. Let's say that I'm an engineer now who's 20 years in. I graduated from UC Irvine in, you know, whatever, the year 2004, and now here it is, 2025. I love my job. I feel like I'm doing really important work, but I also recognize that my world is going to change fundamentally in the next 10 years. What do we need to do around lifelong learning for established engineers? Hugely important

Pramod Khargonekar:

question. Again, a really big opportunity for industry and university to collaborate on. Thinking of optimized short courses, evening programs, certificate programs for working professionals: absolutely crucial. And if we actually fail to do this, it will have a huge impact on jobs and work for working professionals. So I can't emphasize enough how important what you just asked is, and

Matt Kirchner:

I think that's a message for two groups of people. The first one is the engineers themselves. I talk to a lot of them, and I have a lot of them in my friend group as well, who are already asking these kinds of questions, like, what is my role in the world of artificial intelligence? So a message for them: look, you've got a lot of these lifelong learning opportunities. It's not necessarily about going back for another baccalaureate degree or moving on to do graduate work; it's about how we're going to create these opportunities for upskilling through certificate programs, through shorter bits of learning around certain disciplines that prepare engineers for what's coming. And not to undersell the fact that if you're a 20- or 30-year tenured engineer in manufacturing, you've got an incredible amount of knowledge that a graduate coming out of a program with a little bit more on the data science and AI side doesn't have; use that to your advantage and layer the AI side on top of it. But it's also a message to anybody in higher education, or education in general, about the opportunities to cater to these learners: it's not always about the 18-year-old college freshmen that we need to recruit into an engineering program or into a university, but about the opportunities to serve the vast number of people who are already in the workforce. You mentioned the fact that the stakes are pretty high if we don't find a way to educate that next generation, certainly for the individuals in that particular situation. What are the stakes for the United States in general, if we don't wrap our arms around the future of artificial intelligence, really understand how it's transforming our economy, and make sure we're creating that next generation of talent that's ready for it?

Pramod Khargonekar:

yeah. So to us, it's a generational opportunity that's going to impact economic competitiveness, economic growth, national security, and all aspects of our society. So we just can't miss it. The stakes are just too high in this globally competitive world, and I think we have every asset that we need to make it happen. So I just can't overemphasize how important this generational opportunity is. So to steal

Matt Kirchner:

a line from Apollo 13, failure is not an option, exactly. We have to find a way to do this. Many, many great minds, yours included, are working on incredible ways to do it. Super, super excited for the future. Super excited to ask you two more questions, Pramod, as we wrap up our time on this episode of the podcast. They're questions we love hearing from all of our guests, because everybody's had their own journey; everybody has their own background. I'm going to ask you first, especially given all the time that you spend in the world of higher education: what is one opinion or belief that you have about education, how we educate people here in the United States and around the globe, that would surprise our audience a little bit? Well,

Pramod Khargonekar:

I don't know if it will surprise or not, but I think AI is fundamentally altering the foundations of education. So I'll just go with the assessment piece of it, right? How do we assess students when they've been educated? We give them a bunch of questions, they give their answers, and we grade them on their answers, right? That whole model is completely obsolete. Okay, thank you. I couldn't agree more. It makes no sense anymore. I mean, nobody knows the answer, but my own answer is, let's grade them on the quality of the questions they ask. So let's focus our attention on teaching them how to ask good questions. It completely changes the way we think about education. Absolutely,

Matt Kirchner:

you know, I talk quite often on the podcast, it seems like it's almost every episode now, about my belief that in the old world of education, school is where we went to learn and home was where we went to practice, right? We'd go to school to listen to someone deliver a lecture, do some reading, what have you, and then we would go home to do our homework; that's where we would practice. And that's totally flipping now: we can learn anywhere, and school is going to be where we go to practice. And part of that practice is asking really, really good questions. I think that's a really, really deep answer to that particular question, Pramod, and I appreciate the way that you're thinking about that. But I'd also appreciate your thoughts on our final question, which, here again, is a question we love hearing from every single one of our guests. If you could go back in time, all these years, the 45 years that you've been doing the work that you're doing, obviously tremendously decorated and well respected and with an incredible amount of success, let's go back even before that, to that 15-year-old Pramod, maybe, here in the US, a sophomore in high school, kind of that age. What advice would you give to that young man if you had the opportunity?

Pramod Khargonekar:

So it was 1971 when I was 15, and I was in India, in high school, just like you said, love it. And, you know, do you remember that movie called The Graduate? Oh, sure. You bet. And he gets this advice: plastics, yeah. If I go back to 1971, and me as a 15-year-old, the advice is computing. Pay attention to computing. But I think the more important aspect of your question, Matt, is what should a 15-year-old today pay attention to? Is it AI? Is it bio? Is it energy? And I don't think the answer is obvious, but it's a fantastic question. And you know, for that 15-year-old today to reach my age, add 54 years, okay, right? So that's, you know, 2079. What does that world in 2079 look like for a 15-year-old? Outstanding question, Matt, outstanding question. Well, and an

Matt Kirchner:

outstanding answer as well. And whether it's thinking about all those years ago, whether it's plastics or computing, I almost wish we could pick our own bumper music; we'd be playing Mrs. Robinson here on the way out of the podcast, which, of course, came from that same movie, The Graduate. Just an incredible answer. And then to the 15-year-old of today: if it was plastics or computing back in the early 1970s, what is the answer to that question now? It probably has something to do with artificial intelligence and machine learning, but maybe in a specific discipline that interests that young person. Without regard to what their answer would be, certainly they're going to learn a tremendous amount, Pramod, from what we talked about today. Such a great conversation, walking through artificial intelligence, your incredible research, the report that you wrote, the 14 grand challenges that we touched on. By the way, we will link that report and many other resources in the show notes to make sure that our audience has an opportunity to take a look at that information. I know they'll be fascinated. But thank you so much for taking some time for us, Dr. Pramod Khargonekar of, of course, UC Irvine. We talked about his incredible pedigree and his role as we started the podcast. But Pramod, thank you so much for taking some time with us today.

Pramod Khargonekar:

Thank you, Matt. Thank you for having me. It was a marvelous discussion, marvelous

Matt Kirchner:

indeed, and a marvelous time for our audience as well. Thank you so much for joining us on this episode of the podcast. So many cool resources we talked about; as we do every week, we will link those up in the show notes. We'll put those at TechEdPodcast.com/pramod, that is TechEdPodcast.com/P-R-A-M-O-D. That is where you will find the show notes, the best in the business, as you know. And please check us out on social media. We are all over social media: we are on LinkedIn, we are on Facebook, we are on TikTok, as the kids are saying these days, we are on the gram. You will find us anywhere you go to consume your social media. When you're there, reach out, say hello. We would love to hear from you. And until next week, I am Matt Kirchner, and this is The TechEd Podcast.
