EDGE AI POD

Transforming Edge AI Education: Insights from Harvard's Dr. Vijay Janapa Reddi

EDGE AI FOUNDATION

Join us for an insightful conversation with Dr. Vijay Janapa Reddi from Harvard, who takes us on a journey through the evolving world of edge computing and machine learning education. Discover how his family's affinity for East Coast seasons inspired his academic path, and learn about his groundbreaking open-source book on machine learning systems. This episode celebrates the rebranding of the tinyML Foundation to the Edge AI Foundation, reflecting an expanded focus that transcends embedded devices. We tackle the complexities of machine learning education, where excitement often masks the real challenges of developing robust AI systems.

Dr. Janapa Reddi’s vision for his work-in-progress book draws from his teaching experiences at Harvard, aiming to universalize machine learning system principles akin to core concepts in operating systems. We explore the challenge of crafting educational materials that cater to both beginners and seasoned professionals, providing a roadmap to guide diverse audiences. The discussion highlights the importance of hands-on learning, especially in data collection and lab work, with contributions from notable figures like Marcelo Rovai in the tinyML space, emphasizing practical applications for edge AI systems.

In a fascinating discussion on data science versus data engineering, Dr. Janapa Reddi elucidates the foundational role of data engineering in successful machine learning projects. We also delve into microcontroller programmability and the integration of frameworks like TensorFlow and PyTorch into curriculums. As we explore the role of AI tools like ChatGPT in programming, the conversation shifts to the exciting potential of AI-powered educational assistants, transforming the future of learning through interactive and personalized experiences. Whether you're a student, educator, or industry professional, this episode offers a wealth of insights into the intersection of AI, education, and the future of edge technologies.

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

Speaker 1:

Thank you. Okay, well, I guess that was our intro. We're here this morning with a live stream. I'm coming to you from Washington, where we've had a severe windstorm and no power, so I'm in my Jeep. But fortunately, Jenny is in Amsterdam. Good morning, Jenny, or good afternoon.

Speaker 3:

Thanks, Pete.

Speaker 3:

And I'm glad you're safe there in the storm. That sounds like a lot, but thanks for joining in so we can talk to Dr. Vijay Janapa Reddi today about the MLSys book. I'm really excited to talk. And yeah, you do cut in and out a little bit, Pete, so after you talk I'll do a quick little translation of what you said for the audience to make sure everyone understands. But hello everybody and welcome. Today we're going to talk to Dr. Vijay Janapa Reddi from Harvard about the machine learning systems book he's published on the internet as open source. I'm excited to talk to him about that, his previous courses, and what he's up to.

Speaker 1:

Sounds good. Let me just bring Vijay on here and I'll actually jump off so I can save some bandwidth, and then I'll jump back on again. Let's bring Vijay on. Vijay, welcome.

Speaker 2:

Awesome. Howdy, everyone. Nice being here. Thanks for having me.

Speaker 3:

Thanks for joining us.

Speaker 1:

I'm going to let Jenny take over for a bit while I do some tech stuff here with my power.

Speaker 3:

Great, there you go. Cool. So, Vijay, this will be a bit more of a casual conversation today. This is also the first Edge AI Talk, because the tinyML Foundation has recently changed its name to the Edge AI Foundation, so we're no longer the TinyML Talk and now we're the Edge AI Talk. You get the honor of doing the very first Edge AI Talk with us, which is really exciting. I think the change of name really opens up a whole new world of things and a whole new audience as well, from the smaller tinyML crowd to a much wider range of things, because edge is not just embedded devices. It can be anything that's not connected to the internet, or whatever your definition of edge AI is.

Speaker 3:

So yeah, I thought maybe you could talk a little bit about yourself, and then we can go into the book. Let's just start there.

Speaker 2:

Sounds good. I'm a faculty member at Harvard. I graduated from here and came back because my wife loves the East Coast and she likes the four seasons. I'll tell a little story which she'll probably kill me for later this evening. I was a professor at UT Austin for several years (yes, that's where Jenny's from), and then she wanted to come back east. I tried to convince my wife to move west, and we were out in California for my sabbatical for a few years.

Speaker 2:

And I just remember one day, a beautiful, gorgeous Saturday, I was sitting in the backyard, the kid was running around, and my wife looks up and says, oh, how can anyone live here? And I was like, uh-oh, this is not a good statement for a Saturday morning; I know where this is going. Her point was, it's another perfect day, it's perfect every day, and I knew what that meant: basically, she needs the seasons. So right on that day I pretty much knew we were going to be back on the East Coast. That's what brought me back again to Harvard.

Speaker 3:

That's amazing. I felt the same way about Texas. You only really get two seasons, hot summer and frigid winter, and there's no in between. So yeah, I moved to Amsterdam for a lot of the same reasons. It's nice to actually have four seasons, and when it's actually Christmas, you feel the vibes. I am excited to go back to Texas for Christmas this year and see my family, but I totally vibe with your wife there. It's just so samey all the time. You can't live in a place that perpetually feels like summer.

Speaker 3:

At least I can't. But well, California is always there. So, in the past you've had a couple of courses published on other websites. Maybe you can give an overview of what this new book is, what gaps you're trying to fill that your previous courses, which are still alive and very much active, didn't cover, and what audience you're trying to target. Maybe we can talk about that.

Speaker 2:

Yeah, I think the way I'll talk about it is from the perspective of what's really going on with machine learning. Obviously everybody's excited; I mean, our foundation got rebranded into Edge AI, so there's a lot of excitement. But when you open the hood and really look at it, I feel like machine learning is having its heyday the way computer science did, if you think about it.

Speaker 2:

Everybody wanted to program, so everybody started taking computer science and enrollment skyrocketed. Now everybody wants to do AI, which obviously builds on top of computer science, and everybody wants to do algorithm design. But the simplest analogy I can use is that everybody wants to be an astronaut and nobody wants to be the rocket scientist. You want to build the algorithm, you want to build that model, you want to see it getting used, and that's a pie-in-the-sky dream if you don't actually get up into space.

Speaker 2:

The question is, who's actually going to get you into space? There's a bit of a gold rush, and a little bit of naivety in the ecosystem. I really think this is just the first wave, and the second wave that's coming is going to be about the people who actually know how to build these systems and deploy them. I'm talking about deployment at the scale of massive data centers, or deployment at the extreme edge, the tinyML kinds of things, where you need highly specialized engineering skills. That's really the void I'm trying to fill. Yes, there's a lot of interest in the algorithms; there's lots of material out there, plenty of courses, and virtually every university is teaching the AI software aspects. But no one's really focusing on the machine learning systems engineering aspect, and that's the gap I'm really interested in filling.

Speaker 2:

And so, to get to this point with the book: the book was really written out of class notes that came out of these online courses from when tinyML was getting started. I had the fortune of actually working with Pete on the TensorFlow Lite Micro project back when it was a skunkworks kind of thing, back when he was putting that team together, and we were brainstorming this idea: we should write material around this, because edge AI and all this stuff is unlike traditional ML. In traditional ML, you can just focus on the algorithm.

Speaker 2:

You often don't think so much about the system elements: how much memory do I have, how much compute do I really need, how is this actually going to work in a real-time deployment? In many deployments you're talking about loose latencies on the order of, say, 200 milliseconds, which is really not that crazy. But if you look at edge deployments, like an autonomous vehicle, that real-time constraint is pretty damning. That really requires people who understand a little bit of the language of machine learning from a theoretical perspective but also understand the engineering aspects, and I think that's the really unique niche that I'm super passionate about.
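As a rough illustration of the kind of latency-budget check an ML systems engineer might run, here is a minimal Python sketch. The 200 ms budget echoes the loose figure mentioned above, but the stand-in "model" (a matrix multiply), the p99 metric, and all the numbers are illustrative assumptions, not anything from the book or the episode.

```python
import time
import numpy as np

# Illustrative only: a hypothetical 200 ms latency budget and a toy "model".
LATENCY_BUDGET_MS = 200.0
weights = np.random.rand(256, 256).astype(np.float32)

def run_inference(x: np.ndarray) -> np.ndarray:
    """Stand-in for a real model's forward pass."""
    return np.tanh(x @ weights)

x = np.random.rand(1, 256).astype(np.float32)
latencies_ms = []
for _ in range(100):
    start = time.perf_counter()
    run_inference(x)
    latencies_ms.append((time.perf_counter() - start) * 1000.0)

p99 = float(np.percentile(latencies_ms, 99))
verdict = "within" if p99 <= LATENCY_BUDGET_MS else "over"
print(f"p99 latency: {p99:.2f} ms ({verdict} the {LATENCY_BUDGET_MS} ms budget)")
```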

Speaker 3:

So with all the hype around ChatGPT and these big LLMs and transformer models, do you find that people, maybe even students, get really excited about these big models? And even in industry, do you find that people are not really paying attention to optimizing for smaller hardware? Because in a world of big warehouses full of GPUs, you don't need to optimize if you have unlimited compute. So is this course primarily trying to show the world that this is what needs to be done for the future of AI?

Speaker 2:

Originally I did start with that sort of mindset. I was teaching these embedded ML courses at Harvard and teaching tinyML courses online, and then I started taking my class notes and scribing them out. In all honesty, this online book is still very much a work in progress. It's not a textbook yet; it's in a transition phase right now. I would say we're done with all the things I have in my head about what I think students need to know.

Speaker 2:

When I started writing it, it was literally called Tiny Machine Learning Systems, because it was just class notes from my course. Then, as I wrote it down, I started realizing something quite critical: for some of the concepts I was talking about, it didn't matter if it was a big system or a small system. It's like an operating system, if you think about it in the conventional sense.

Speaker 2:

Almost every single one of us has taken a computer organization course or an operating systems course where you're taught the fundamentals. In an operating systems course you learn about scheduling, virtual memory and memory management principles, device management, and so forth. Those are core principles. It doesn't matter if it's a big operating system managing something at the scale of a hyperscaler's clusters; it still has to do scheduling, it still has to do memory management, and all of that.

Speaker 2:

Just the numbers change. You go to an embedded operating system, same thing: you still have to do scheduling. The mechanisms and the way you implement them change, but the fundamentals stay the same. The sense I'm really getting at is that it's not whether it's a big ML system or a tiny ML system, it's really just an ML system. That's what I personally discovered; that's when the light bulb really went off in my head. This can be taught at a massive scale, and should be taught at a massive scale, like we teach 101 digital logic design or any of those kinds of courses. That was an eye-opening moment for me, and that's when I really started thinking, okay, it's just machine learning systems. Now, of course, there are advanced concepts, to the point you're making: if I'm going to build a hyperscaler, I need to think about the massive distributed interconnects I need to have. But that's an advanced concept.

Speaker 2:

And I feel like most of the time, when you look at institutions that do teach these things, like my school or my colleagues at other schools, we already know all this material; it's in our heads, so we teach it in our classes. But when you're trying to make it a universal kind of curriculum that you should just know, just like you teach Python or C, then you really want to get at the fundamental principles that everybody needs to know. That's really been the fuel driving me, along with getting feedback on what needs to be covered, and that's the way I've been approaching what we should put in the book.

Speaker 3:

That definitely aligns with my own education at UT, because the electrical engineering program is very much a bottom-up approach where you start from binary and go all the way up to Java, so it feels like it would fit into that education. They probably already have a course along those lines at UT Austin as well, so that totally makes sense. I guess that segues into my next question. You told us you saw ML systems engineering as a gap that needs to be filled, but who is this online book really designed for? Is it for beginners who are just getting started in AI? People interested in embedded engineering? People interested in generative AI? I know your book touches on all those topics, but who is the ideal person to get started, and what prerequisites would they need in order to get started with this book?

Speaker 2:

Yeah, it's a work in progress, as I've said. Keeping that in mind, I was just talking to the publisher yesterday, who was helping me sort through who the audience is going to be, which is what you're asking. Right now I'm really focused on: if this were to be taught at the undergraduate level as a core course, part of your package for understanding how you build computer systems in the future, what are the things to talk about? So it's really been targeted at junior and senior, third- or fourth-year students, or intro graduate-level students.

Speaker 2:

I've also tried to cater it to people who understand a little bit about machine learning but want to understand the system implications. For example, people will come up with a model, but then, what if I want to trade off accuracy versus safety, or accuracy versus some measure of performance? What sort of optimizations are these systems people talking about? What does it mean to do unstructured pruning?

Speaker 2:

People might have heard about pruning, but often we talk about structured versus unstructured and so forth. So I've tried to cater it to a bunch of different people, and one of the things that happened just recently is that in the "about the book" section I ended up adding a part about which persona you are: whether you're a tinyML newbie, an ML systems person, or you want to be a full-stack ML expert. I wrote out a little roadmap of which chapters to touch for each.
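To make the structured-versus-unstructured distinction Vijay mentions a bit more concrete, here is a minimal NumPy sketch (not taken from the book) that magnitude-prunes a toy weight matrix both ways. The 50% sparsity target, the matrix size, and the row-norm criterion are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8)).astype(np.float32)  # toy dense layer
sparsity = 0.5  # arbitrary 50% target, for illustration only

# Unstructured pruning: zero out the smallest-magnitude individual weights.
threshold = np.quantile(np.abs(weights), sparsity)
unstructured = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Structured pruning: drop whole rows (e.g. entire neurons/filters) ranked by
# their L2 norm, a pattern that is easier for hardware to exploit.
row_norms = np.linalg.norm(weights, axis=1)
keep_rows = row_norms >= np.quantile(row_norms, sparsity)
structured = weights * keep_rows[:, None]

print("unstructured sparsity:", float((unstructured == 0).mean()))
print("structured sparsity:  ", float((structured == 0).mean()))
```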

Speaker 3:

Yeah, that's awesome. Okay, so it's junior and senior students and early graduate students, but it could also be people in industry who are just trying to advance their knowledge of these other areas of ML systems.

Speaker 2:

Yeah, absolutely. I honestly think at this point it'll probably be a lot easier, candidly, for industry people to pick it up, because when I started writing it I was thinking, okay, these are the core concepts to get through. But that implicitly assumes you have a little bit of context about what a microcontroller is. There are students who go through it and say, you just talk about these microcontrollers and systems like it's second nature. An industry practitioner wouldn't think twice about that word and would just move on, because of course they roughly know what it is.

Speaker 2:

So I think it's much more accessible right now for industry people, or people who have been playing around generally in the space and want to pick up ML. Transitioning it over to students, which is where my head is most of the time as a faculty member, is almost a higher bar, and that's where I'm spending a lot of my time: bringing the material down to a level where students can progress from one class to the other. So yeah, glad you pointed that out.

Speaker 3:

Yeah, we talk about that a lot at Edge Impulse, especially with our documentation, where there are so many people coming from so many different angles. Personas for the documentation are such a big thing, because you're totally right: there are completely different routes for people who don't know what a microcontroller is versus people who already know, or who already have a microcontroller they want to deploy onto.

Speaker 3:

Those are two completely different knowledge ranges, so it completely changes the routes and what you need to include in the book. But again, there are almost infinite combinations of different people's knowledge versus what you can include in a book, so you really have to choose the most important pieces. I'm sure you and your publisher are going to work really hard on what's going to be most important, so I'm looking forward to that. All right, so you mentioned publishing. Will this go on, like, Amazon?

Speaker 2:

Well, I'm working with a publisher to try to convert this into a hard copy, because people are looking for it, although I think the field is still moving.

Speaker 2:

So a hard copy is... I mean, I have hard copies, but each of these books is a hard copy not so much because I open the book, but because when I was a student it dramatically influenced my thinking process. That's why I have a handful of books; if you look at them, you'll know how I tend to think. It was interesting, because yesterday I got an email from someone saying, I really want to have a published copy of the book, and I was like, oh, it's not ready.

Speaker 2:

But they were like, no, I still would love to have it. And I asked, why? And they said, because it reminds me of things. So it's almost like a visual cue.

Speaker 3:

A visual cue to, this is what you're interested in today, like robotics, AI, and so on. I find that with people who have my book too. I ask, why are you carrying around this huge book when you can get the PDF? And they say, well, because it's nice to have the physical knowledge in your hand. So I totally get that. We actually have a question from the audience; let me read it to you, since I can't pull it up on the screen because I don't have the same magical powers as Pete. Zaid from LinkedIn says: thanks, Vijay, for the perspective. How do you approach the tools, hardware, and software for your ML systems course?

Speaker 2:

Yeah, that's a good question. I'm super fortunate to be working with a couple of people who are very passionate about helping out, so the book is really not just me and my head; there are a lot of people I actually communicate with offline. One of the beautiful aspects is the hands-on component: there's a lab section in there, and those labs have all been contributed by a colleague, Marcelo, who's actually quite well known in the tinyML space.

Speaker 2:

Yeah, Marcelo Rovai. He's the one who contributed all of the lab material, and it's been great because we have this TinyML4D effort that Marco and Brian help lead, and Marcelo is critical in that piece; he's been doing lots of workshops. One of the things we did was take that labs portion, which is really the hands-on component. And I think that's one of the fascinating things about the systems side, because think about it: if you do ML, I'll start by asking students, how many of you have actually built an ML dataset?

Speaker 2:

The answer is almost zero. If you want to build a big model, you can't; even with a decent-sized model, you immediately need something like a million images. So by and large, you just download someone else's dataset. But data is the way you program these systems.

Speaker 2:

So if you don't understand how to actually program the system through the data collection process, then you're pretty much just playing around with a black box. That's why everybody calls it a black box. It's not as bad a black box as people make it seem; that's just because they avoid the piece you actually program it with. If you actually understand the data side, you know what's going on. That's what makes the whole tinyML piece super interesting, because Marcelo's labs start off with the whole data pipeline: you're collecting data, you're deploying ML on these tinyML systems, which is very accessible, and suddenly you only need 30 or 40 examples ninety percent of the time. I honestly like that.
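For readers who want to see the collect-a-small-dataset-then-train flow he's describing in code, here is a minimal Python sketch. It is not from the labs: the synthetic "accelerometer" signals, the class labels, the hand-made features, and the choice of a scikit-learn classifier are all illustrative assumptions standing in for real on-device capture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def fake_capture(label: str, n: int, freq: float) -> list[tuple[np.ndarray, str]]:
    """Stand-in for capturing n labeled sensor windows from a device."""
    t = np.linspace(0, 1, 128)
    return [(np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size), label)
            for _ in range(n)]

# "Collect" ~40 examples per class, the small-dataset regime mentioned above.
samples = fake_capture("idle", 40, freq=2.0) + fake_capture("shake", 40, freq=12.0)

def featurize(window: np.ndarray) -> np.ndarray:
    """Simple hand-made features per window: mean, std, dominant FFT bin."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([window.mean(), window.std(), spectrum[1:].argmax()])

X = np.stack([featurize(w) for w, _ in samples])
y = np.array([label for _, label in samples])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```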

Speaker 2:

It's actually all based off of Edge Impulse, now that I think about it, because it's very accessible for students to do that just through a GUI, and you still get the fundamental elements.

Speaker 2:

Ideally you take the course and then you understand what Edge Impulse is doing behind the scenes. A lot of that hands-on component is really based on what Marcelo put together. We've been very systematic about it, trying to make sure we cover different modalities of sensory input, different kinds of input data, because you have to deal with them differently, and very structured about making sure that when we support different platforms, the same set of labs is actually supported on each of them. So there's a bit of portability, and that's why we don't support any arbitrary thing. Through that we've been trying to curate all the material into one cohesive whole. That's how we approach the hardware-software interface.

Speaker 3:

It's so funny what you said about data collection, because we talked about this a lot at the previous tinyML Foundation summits, now the Edge AI Foundation conferences: people are just not interested in collecting datasets because of all the massive ones that are already out there.

Speaker 3:

But at Edge Impulse, because we have such an easy flow (other AI tools also have easy flows, but of course I'm biased toward Edge Impulse), I find that data collection is one of the most important and most fun parts, and I also think it's one of the most relevant parts of actual industry engineering for edge AI applications.

Speaker 3:

Because once you get into a company that's actually doing AI for various automobile sensors, or AI for detecting whether your window is open or closed for burglaries and security, there's not going to be a preset dataset on the internet that works exactly for your use case. So it's a really important skill to think about how you orient your different ML classes, how you collect the data accurately and securely, how you get it over the air, how you store it on the device. I think that's actually one of the most fun parts, and we find at Edge Impulse, too, that it's one of the most crucial parts. So it's really heartening to hear you talk about that.

Speaker 2:

The way I think about it, people make this mistake when they take ML courses: they think data science is actually data engineering. There's a big difference between the two. It's foolish to think that just because you know data science, it translates to data engineering. A lot of the things Jenny just mentioned, how you manage that data, how you think about what aspects to focus on, and so forth, aren't necessarily data science. Data science is more the curation part; the engineering part is a little different. Again, it goes back to that engineering element that's so often overlooked, and in practice, I think this is why there are crazy stats on the internet saying that something like 87 to 90 percent of industry AI experiments trying to kick off an AI group fail: because they're run as data science projects.

Speaker 2:

Not machine learning systems projects. What they're really doing is toying around with it and saying, oh, I've got a proof of concept. And I'm like, yeah, that doesn't work; it takes a whole new team of people who understand this lingo to pull it off. There's a lot of misconception, this gap between what's actually needed and what people mostly talk about.

Speaker 3:

Yeah, well, people are interested in the flashiest thing. But data engineering is arguably one of the most important parts of ML systems engineering. We have another question from the audience for you, and it's a simple one: what microcontrollers are used in the labs?

Speaker 2:

Oh yeah, we intentionally try to grade them by cost, because we've learned that people have different budgets depending on where you are in the world; a dollar goes a long way or a very short way. The simplest one we have is the XIAO Sense. Seeed Studio has been a really good supporter of us and has donated many kits, so the XIAO Sense is something we use because it's only about 15 bucks, though in some places that can still be a lot; I'm talking about developing countries and so forth, where we do outreach. The next one up is a pretty beefy system, the Nicla Vision from Arduino, which is more of a professional-grade embedded microcontroller system. And then it goes up from there: Marcelo just recently pushed up the Raspberry Pi support, and one of the cool things he added in that was generative AI at the edge.

Speaker 2:

So yeah, like a small language model; that was one of the most recent updates. Much like the tinyML Foundation has been expanding its scope, one of the things we've intentionally been expanding in the labs is going from microcontrollers to a decently scaled system. And it's not so much that I'm thinking we should be tracking what's happening in industry.

Speaker 2:

It's more that we tend to think about it from a very pedagogical perspective: what is going to be accessible for students at scale, at the university level, and what are the course-in-a-box packages we can actually put out.

Speaker 3:

And I find, especially with the microcontrollers and boards you've chosen for the book, it's one funnel with lots of documentation, but it's also the foundations and fundamentals you learn from programming for the ESP32 or the Raspberry Pi. The fundamentals you learn for that board apply to almost every other microcontroller you can find. You could get a completely different board, and as long as you understand how to compile the code and how to incorporate new sensor drivers, the fundamentals you've already learned on the ESP32 carry over to, say, the NVIDIA Jetson. So I think those are really good choices, especially since those boards are so well documented and so widely used. If you hit any sort of compile error, you can literally copy and paste it into Google and someone has already run into it before.

Speaker 2:

ChatGPT.

Speaker 3:

Yeah, ChatGPT can help you do it now. I don't even think in those terms, just because I didn't go through school with ChatGPT; I'm always thinking Google, Google, Google. But you're right, now it's ChatGPT, and even the ChatGPT web browser. Yeah, that's right.

Speaker 2:

Yeah, it's an amazing time right now.

Speaker 3:

So we have another question, from Cesar: JAX is now supported by LiteRT, formerly TensorFlow Lite, offering a simple and efficient option for tinyML implementations. The ML workflow would be highly simplified for beginner users. I guess the question here is: have you considered talking about JAX in the book or in your courses?

Speaker 2:

Yeah, definitely. People have given me some really good feedback on the AI frameworks section, for instance. To be very blunt and transparent, it's very biased toward the TensorFlow ecosystem.

Speaker 3:

Yeah.

Speaker 2:

And honestly, it comes from the fact that I know the TF ecosystem really well, inside out. Again, these were coming from my class notes, and when I was talking about it, I liked being able to cover the full spectrum of the stack and help people understand the difference between TensorFlow versus TensorFlow Lite versus TensorFlow Lite Micro. So one of the criticisms is that it's a very TF-based thing, and the two pieces people have asked for are JAX as well as PyTorch, because PyTorch is now starting to span that ecosystem. We're actively working on pulling JAX and PyTorch into the curriculum pieces. It's not there yet, and honestly it probably won't be there until, I would say, March or April at the earliest, which is when I think we'd actually push some new updates.
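To make the TensorFlow / TensorFlow Lite / TensorFlow Lite Micro distinction he's describing a bit more concrete, here is a minimal, hedged Python sketch of the usual conversion path (not code from the book): train a tiny Keras model, convert it with the TFLite converter, and write out the flatbuffer that a TFLite Micro runtime on a microcontroller would consume. The toy model, training data, and filename are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Tiny Keras model standing in for whatever you trained in full TensorFlow.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(np.random.rand(64, 3), np.random.randint(0, 2, 64), epochs=1, verbose=0)

# TensorFlow Lite: convert to a compact flatbuffer for mobile/edge runtimes.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables default optimizations
tflite_model = converter.convert()

# TensorFlow Lite Micro: the same flatbuffer is typically embedded as a C array
# (e.g. via `xxd -i model.tflite`) and run by the TFLM interpreter on-device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"flatbuffer size: {len(tflite_model)} bytes")
```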

Speaker 3:

Would this be an opportunity for someone in the community to contribute something, like a GitHub issue or a GitHub PR, to add some content?

Speaker 2:

Yeah, absolutely. That's a key thing Jenny just touched on: when we started doing this book, it was, and is, an open-source project. If you go to mlsysbook.ai, you'll see there's an acknowledgements tab on the left side, right at the top, because contributions come in, and I don't care whether a contribution is a major section with crazy technical depth or a, hey, I found these typos and mistakes. I give kudos to everybody.

Speaker 2:

So if anybody files an issue or contributes a patch of any sort, it automatically pulls that person into the acknowledgements. I'm very much trying to build it around the open-source model. The idea, and thanks for bringing it up, Jenny, is how we create this learning from more than just one individual, because collectively we know so much more. So if you have thoughts or ideas and want to work on something, just ping me, or follow it, or work on it.

Speaker 3:

I mean, there might be a little incentive here to really get on it, because if you do it soon you might be mentioned in the hard copy, right?

Speaker 2:

Oh yeah, that's true. You're right, I didn't even think about that. Yeah, that's true.

Speaker 3:

So everybody who's watching, get out there and submit your GitHub issues and PRs and have the team at Harvard review them. Then you can be in a book.

Speaker 2:

Yeah, it's been awesome; a good number of contributions. A lot of them are actually Harvard students that I teach.

Speaker 2:

When I'm teaching the class, I also get the students to contribute back. Mostly they contribute in the context of, oh, here's a seminal paper. That's one thing we focus a lot on in the book: a lot of what's in there is built on fundamental research, and when it is, I try to point people in the writing to the paper we're actually building off of, because it was seminal. I think that's very important for students and practitioners to learn, because it tells you which conferences people are publishing in, and you have to know that language. ML is so different from the classical space in that things are still evolving, so you need to know where to go and look for the latest information. And later on, perhaps, we can touch on that AI bot we created, because partly we're working on exactly these kinds of things: how do we keep things updated automatically?

Speaker 3:

Yeah, very interesting. Okay, so another question, let's see, from Emila: Thank you. I wonder if in the labs the DSP special features, the preprocessing, FFT, wavelets, windowing, are also analyzed from the hardware resources point of view. Or is FFT probably too heavy for microcontrollers?

Speaker 2:

Yeah, that's a good point. There is a dedicated, platform-independent section where we talk about feature engineering and so forth, and yes, some of that preprocessing, one could argue, is a heavy lift. But again, it really depends on what your quote-unquote microcontroller is. These microcontrollers are getting fairly advanced, because Arm has obviously caught on to the notion that ML is going to be at the extreme endpoint. So what might be a bit of a heavy lift today I don't necessarily see being a heavy lift in the future.
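For readers wondering what that kind of DSP preprocessing looks like in practice, here is a minimal Python sketch (not from the labs) of windowed FFT feature extraction on a short audio-like signal. The sample rate, frame sizes, and synthetic signal are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters: 16 kHz "audio", 32 ms frames with 50% overlap.
SAMPLE_RATE = 16_000
FRAME_LEN = 512       # 32 ms at 16 kHz
HOP_LEN = 256

# Synthetic 1-second signal: a 440 Hz tone plus noise, standing in for mic data.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

window = np.hanning(FRAME_LEN)  # windowing reduces spectral leakage
frames = []
for start in range(0, len(signal) - FRAME_LEN + 1, HOP_LEN):
    frame = signal[start:start + FRAME_LEN] * window
    spectrum = np.abs(np.fft.rfft(frame))   # magnitude spectrum per frame
    frames.append(np.log(spectrum + 1e-6))  # log-magnitude features

features = np.stack(frames)  # shape: (num_frames, FRAME_LEN // 2 + 1)
print("feature matrix shape:", features.shape)
print("dominant bin of first frame ->",
      features[0].argmax() * SAMPLE_RATE / FRAME_LEN, "Hz")
```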

Speaker 3:

Yeah.

Speaker 2:

Because it's just become obvious that we're going to be running this kind of code and it needs some specialized engine. So I would think about it more from the perspective of how you future-proof the learner. Again, that's where we're coming from.

Speaker 3:

Yeah, I mean, DSP is also a fundamental thing that's really key to understanding, especially for optimization of AI for edge devices and resource-constrained devices, so that's really important. But, as I say in all my webinars with Edge Impulse, what is true today will not be true within two years. It's really exciting to be in a field that's constantly evolving and changing so rapidly, but it does mean the courseware and the books constantly have to be updated.

Speaker 2:

That's one of the things I dread about making it into a hard copy. To be honest, I think I have it easier than, for instance, Jenny's book, or the stuff Pete Warden originally wrote with Dan, because in those books you're teaching through example, which is a very powerful concept, but the code changes so fast. I buy those books even though I know the code won't run anymore, because I still get the purpose of having them on the shelf. That's where I like the online aspect, keeping things online and so forth.

Speaker 3:

Yeah, no, I completely agree. If I were ever to do another book, I would want to do it the way you've done it as well. By the way, my book is AI at the Edge, not to plug it, but that's the book I'm talking about. Next summer Dan and I are planning to do an update, because every single line of code doesn't work anymore. It would be a lot easier if we could just do a GitHub PR. But you know, it's all about GitHub.

Speaker 2:

People do like that hard copy element. That's what I've been learning; they do.

Speaker 3:

I like it too. Not going to lie, it's really fun to be able to call yourself a published author.

Speaker 1:

Yeah.

Speaker 3:

But yeah, we have another question, from Zaid: One challenging aspect of edge AI is the variety of domain expertise. I find it challenging to provide this expertise in an engineering course. Any suggestions on how to create this balance in a course?

Speaker 2:

Yeah, that's a good question. Let me just think it through in my head.

Speaker 3:

No, take your time.

Speaker 2:

So, one challenging aspect of edge AI is the variety of domain expertise, and the challenge of providing that expertise in an engineering course. Let me try. I feel like we've thought about this. In teaching these courses, the labs specifically focus on different modalities: we have a computer vision thing, we have an audio signal kind of thing, and then we have a time-series, IMU kind of thing, and that's always been the three. I think this dates way back, even to when I was talking to Pete while we were working on the original frameworks, about what some good examples to have would be. I think largely the community has landed on these modalities because, you're right, the domains where these things are deployed are highly specialized, but the modality is not that varied in some capacity. To that end, while the three might not be comprehensive, vision, audio, and telemetry kinds of input do form the basis, and that's the way, at least, Marcelo, myself, and a couple of others I know all approach it.

Speaker 2:

That's how we explain the labs, and that's why, when I teach my own course, I've pretty much structured it around that: there are going to be three labs plus your project, and that's the way to teach it. This particular modality has all these use cases, but we'll just take one slice of it. Jenny, are you there? You're frozen, to me at least.

Speaker 3:

Sorry, I'm back. I think I'm back.

Speaker 2:

Sorry, yep.

Speaker 1:

And you're frozen again. Hello, howdy, it's Pete, I'm coming in. Oh, here we go. I thought maybe Jenny was frozen; she might be frozen, yeah. So, just to give people context who didn't tune in at the beginning: I'm coming in from Bellevue, Washington, and we had, in his Boston terms, a wicked bad winter storm here yesterday, so we lost power. I'm actually in my Jeep, parked at a Starbucks, doing a live stream. Living the dream. I was backstage and saw Jenny was frozen, so I thought I would jump in.

Speaker 3:

You're good. So I have this weird setup at my house where I literally have a 50-foot ethernet cable running through my entire house.

Speaker 3:

What happens is one of the pets will... well, it's not the best solution, you know, but it's an engineered solution. So apologies for that. But I did have one comment on the last question we were talking about, and we encounter this question a lot at Edge Impulse with our solutions engineers. We don't have the domain expertise for every single domain in the world. So when we're talking to our customers and the people we want to engage with the Edge Impulse platform, we always emphasize that they should be, or likely already are, the expert in their own domain, their own dataset, and what they want to do with their product. And if they don't have that expertise, it's usually time to bring in another person or a different solutions engineer who already has that experience.

Speaker 3:

So I think it's like you said: it's a focus on the fundamentals of domain expertise, or the fundamentals of various types of sensors, computer vision, time-series and frequency-based data. Focus on learning how to deal with those fundamentals, and then extract more information about your specific domain once you get into industry. I just think it's impossible to teach every single type of sensor or use case out there in a course, so teaching the fundamentals is more important, for sure.

Speaker 2:

Yeah.

Speaker 1:

Yeah, you had mentioned this kind of AI tutor, or AI-powered tutor. Is that part of the book itself, or is that a side project you're just integrating, or what's that all about?

Speaker 2:

Yeah, so as context for people listening in: one of the things that's obviously happening is that these LLMs are mighty powerful, and we interact with them through ChatGPT and all that good stuff. The thing I've been very passionate about is this notion, which I'm going to call generative learning: a fusion between generative AI technologies and traditional pedagogical learning, and how you bring the two together. When you think about a book, one of the things that would always drive me nuts, having been a student too, is that textbooks are very static in some sense. There are amazing textbooks, because people spend an incredible amount of effort trying to write every single word very thoughtfully.

Speaker 2:

But even when you do that, there are different learners, to the point Jenny was bringing up earlier. There are people who are completely new, and there are people who come in from other classes, especially at a university like Harvard, where we really value a liberal arts education and intentionally mixing things up; we take a lot of pride in that. But when you have that sort of mix, you have students who come in from many different backgrounds, and it's never really uniform. So how do you level-set that and make sure education is personalized to an extent? I think generative technologies open up this whole new space of letting learners engage with the material in a way that's accessible to them, and that's where we have this generative bot. Can I share my screen? I don't know if it'll work, but maybe that would be useful.

Speaker 1:

Yeah, if you hit the present button down there, that should work. Let me bring you back to... we're in this mode. Hello, look out.

Speaker 2:

Okay, great. So, can you see my screen?

Speaker 3:

Yes, we can see your screen.

Speaker 2:

So on the left side there's a link called Socratic AI. If you click that link it brings up a page that explains what this AI learning assistant is about and how to use it, and so on. By default it's usually off, so if we enable it, you'll see this little thing pop up on the right side. The idea is you can open it up, and it's pretty much just a chatbot, no different from a ChatGPT kind of engine, except it's completely integrated into the material. I'm just going to arbitrarily pick something. One of the big things we try to do is reinforce concepts: reflective learning is a critical aspect of critical thinking, so you need students to pause from reading and reflect on the material, and the easiest way to do that is with some very simple questions. At the end of every major section there's a little section quiz, and when you click it, it auto-generates a quiz for you. This quiz is completely auto-generated by the LLM; it's not manually coded, so your questions might be slightly different from mine. Then I'll just randomly pick answers, I don't know which are right, and you can see down here I got two of them correct, and it gives you explanations of why this is the correct answer, or why that one isn't. Here I picked this particular one and it says it's the wrong answer, so it helps you understand what's going on. Mostly this is meant to stop you from just poring through the material and help you take a breather. As you do this, you can also pick certain concepts. For instance, I can say, okay, send this context to the AI and explain it like I'm a five- or six-year-old, and you can see it's using Legos to explain this. Nothing too surprising; it's basically a built-in agent that understands how to do this and also suggests other things you might want to ask.

Speaker 2:

Now, how do you personalize this whole thing? There are a couple of settings. For instance, you can say, I'm a complete beginner, I'm intermediate, I'm advanced. In the future you could imagine we'll have some customization where you can actually give it a prompt, but it has to be structured. Obviously we're academics, so we tend to think very carefully about how to put things in. There's this whole notion of Bloom's taxonomy, which is about how to ask questions that really test the material, so the agent behind the scenes knows all these things and does that automatically.

Speaker 2:

And we want to gamify things to make it a little exciting for people. It's like my daughter with Duolingo: I don't know why, but she loves the streak. She wakes up and the first thing she has to do is her Duolingo, because she's afraid to lose the streak. I don't know where that obsession comes from, because I sure don't have it.

Speaker 2:

But yeah, we have a dashboard that generates stats about how you're doing and whatnot, and you can download the report. Mostly it's downloadable because, for me as a professor, I need to know if students are reading the material, so they can download it and upload it and say, okay, I finished the quiz for this particular reading assignment. Some of the things we're thinking about: with all this knowledge in place, as students progress, you can actually have the agent learn what students are and aren't learning well, and then reinforce those concepts or make suggestions like, hey, you should go back and read this chapter and that particular section, because it has the full context. So there's a lot of stuff behind the scenes, but we've only enabled fairly basic functionality so far. Good enough that it helps people, though.
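As a rough illustration of how an LLM-driven section quiz like the one Vijay demos might be wired up, here is a minimal Python sketch. The prompt wording, the `generate_quiz` helper, and the `call_llm` function are hypothetical stand-ins, not the book's actual implementation or any specific vendor's API.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM backend the assistant uses.
    Replace with a real API call; it raises here to keep the assumption explicit."""
    raise NotImplementedError("plug in your LLM client here")

def generate_quiz(section_text: str, level: str = "beginner", n_questions: int = 3) -> list[dict]:
    """Ask the LLM for multiple-choice questions grounded in one book section."""
    prompt = (
        f"You are a tutor following Bloom's taxonomy. From the section below, "
        f"write {n_questions} multiple-choice questions for a {level} learner. "
        f"Return JSON: a list of objects with 'question', 'choices', 'answer', 'explanation'.\n\n"
        f"SECTION:\n{section_text}"
    )
    return json.loads(call_llm(prompt))

# Usage sketch (would run once call_llm is implemented):
# quiz = generate_quiz(open("some_chapter.md").read(), level="intermediate")
# for q in quiz:
#     print(q["question"], q["choices"])
```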

Speaker 3:

That's fantastic. It would have been such a game changer to have when I was in university. All right, yeah, I'd use it.

Speaker 1:

Yeah, yeah, magic actually. You know what I also appreciate? I thought the illustrations you guys have in the book are actually pretty cool too, the DALL·E-based things, and you put the actual prompt for DALL·E in there to explain how you generated that image too. That was kind of, like...

Speaker 2:

Feeling tired, I'm like, I need some...

Speaker 1:

Is that your?

Speaker 2:

Yeah, no, I thought that was pretty cool. But you can totally imagine: the fascinating thing is that what I just showed you is just a single modality, literally just text. Things in the future are multimodal; you're pulling in different kinds of input, like, okay, I don't understand this image, and you can add all those things on. There's a lot there. So this concept of generative learning that I have, I think it's going to be radical. For once, I truly believe we can get to personalized education. We've always talked about this in the past, but it's never been scalable. Through this technology, I think it suddenly becomes very scalable.

Speaker 1:

Well, and this is another differentiator of moving to more digital-based learning. Going back to the fact that people do like the paper book and all that, I think we're going to see more and more of a gap between what's possible in an online learning environment versus the book you put in your backpack. The quizzes and the agents are just another example of that.

Speaker 2:

Yeah, the nice thing about having an online textbook, for me, has also been that I can engage multiple modalities. There's text, then obviously you have pictures, which have always been around, but now I embed videos, and the students absolutely love these short videos. I'm not trying to invent everything from scratch; there are plenty of amazing people putting amazing material online. But the key challenge I think we're all going to face in the future is not whether we have access to content. You can say, well, do I even need a textbook in the future, because I have ChatGPT or Gemini? I think it actually becomes even more important to have a textbook, because you need to know how to think about the material with respect to how the world is unfolding, which I think no generative agent will ever be able to rationalize for you. That, you know, is true.

Speaker 1:

So yeah, and you also have some pretty cool chapters in there toward the end around responsible AI and sustainable AI. That was pretty cool to read, at least to get everyone on the same page, kind of a primer almost, on what those terms we hear about actually mean.

Speaker 2:

Yeah.

Speaker 1:

And to think about them as they're learning the material.

Speaker 2:

Yeah, that comes a lot from things I was taught working with hyperscalers and so forth, from the way they approach it and the feedback they'd often give us. We talk about responsible AI, ethics, sustainability, and all these things as an afterthought, but in reality they should be thought of as part of designing the system. It's not a separate team's job; it's part of the core job. In fact, just today, before I came into this meeting, I was in our data science committee meeting, and that's one of the things we were talking about: where do we put critical thinking? Do we make it a separate class, or do we embed it directly into our core classes for a week or two, so every class hits the points over and over? Then naturally, when you learn the material, you implicitly think about these things, things like auditability.

Speaker 1:

Those things need to be, yeah. Let's see, let's see... other questions here.

Speaker 3:

There was one from Amanda: is tinyML Ops already included in the course? I did see that there is MLOps, but maybe you can speak about that a little bit more.

Speaker 2:

Yeah, MLOps is definitely in the course, because the course is really structured around a key set of pillars: one focuses on data, then training, then deployment, and then maintenance. It's like any engineering project: you can build a bridge, but if you don't maintain the bridge you've got a serious problem. So maintenance is covered, and for tinyML Ops there's a section where we try to emphasize how things change when you're talking about embedded MLOps. I talk about things generally and then about the unique challenges that start showing up when you're talking about tinyML Ops.

Speaker 2:

Because, as Jenny alluded to at the beginning, it is indeed very hard when you don't tend to have network connectivity, or when you have to estimate that 50% of your power budget needs to be dedicated to comms just for barely sending any information up. Those are fundamental challenges you never, ever sweat in a traditional ML pipeline. If you're on a Kubernetes cluster, I doubt you're going to be sweating sending out a couple of network packets about how many predictions the model has made and whether they're right; it's just noise in the massive collection. But in tinyML it adds up like crazy, especially if you're doing LoRaWAN and so forth. So yeah, there is a section that talks about some of those unique challenges. It's a good question.
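As a back-of-the-envelope illustration of the comms-versus-inference power budgeting he's describing, here is a small Python sketch. Every number in it (battery size, current draws, event rates) is an invented assumption for illustration, not a figure from the episode or the book.

```python
# Back-of-the-envelope energy budget for a battery-powered edge node.
# All numbers below are illustrative assumptions, not real device specs.

BATTERY_MWH = 2000 * 3.0                 # 2000 mAh cell at ~3.0 V -> mWh
SLEEP_MW = 0.05                          # deep-sleep draw
INFERENCE_MW, INFERENCE_S = 60.0, 0.2    # one on-device inference burst
RADIO_MW, RADIO_S = 120.0, 1.5           # one LoRaWAN-style uplink burst
EVENTS_PER_HOUR = 12                     # how often we wake, infer, and transmit

def mwh(milliwatts: float, seconds: float) -> float:
    """Convert a (power, duration) burst into milliwatt-hours."""
    return milliwatts * seconds / 3600.0

active_per_hour = EVENTS_PER_HOUR * (mwh(INFERENCE_MW, INFERENCE_S) + mwh(RADIO_MW, RADIO_S))
sleep_per_hour = SLEEP_MW * 1.0
total_per_hour = active_per_hour + sleep_per_hour

comms_share = EVENTS_PER_HOUR * mwh(RADIO_MW, RADIO_S) / total_per_hour
print(f"comms share of hourly budget: {comms_share:.0%}")
print(f"estimated battery life: {BATTERY_MWH / total_per_hour / 24:.0f} days")
```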

Speaker 3:

This is a bit of a long one.

Speaker 3:

Good question, but it did get cut off, so if they want to finish the question through another channel, that would be great. But essentially the question is: thank you so much for writing this book. They've already adopted the book's material for their edge AI course at the Indian Institute of Science, which is amazing. Their question is about the programmability of microcontrollers for edge AI. While C is preferred, the learning curve is steep, and many beginner-level students find it difficult to understand its low-level interfaces. I'm assuming the rest of the question goes on to ask, you know, how do you teach around this, and how do you adapt this for different types of deployments, for different microcontrollers that don't use C versus C++, et cetera.

Speaker 2:

Yeah, I mean to be honest.

Speaker 2:

This is exactly why, when Marcelo put the labs together and we discussed what level we wanted to share this at, at a massive scale, the answer was picking up something like the MATLAB of this space, and Edge Impulse is practically the MATLAB for a lot of this. At least, that's the platform I'm most familiar with. Again, I'm sure there are other platforms, but we ended up choosing it because it was just easy, a lot of students can onboard quickly, and the docs are pretty good for students. So that was one way we chose to sidestep the fundamental issue. My hope, ideally, as educators, is that we take that as something students can be pointed to and be able to use the GUI. I also do that in my own class, but only for the first assignment, to get the microphone working and have them do a full deployment. After that we rip the band-aid off and say, wow, that tool did a lot of magic, now let's actually understand all the magic pieces, because at the end of the day it's doing some pretty heavy lifting.

Speaker 2:

Yeah, they just slapped a very nice GUI on top and figured out how to work with users, not the engineers who actually have to build the stuff inside it. Getting students past that point is on the instructors to be able to do, and so there's no easy answer there, right?
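As a rough illustration of the kind of "magic pieces" a GUI-driven tool automates for an audio lab, here is a toy Python sketch of the capture-features-classify chain done by hand. It is a deliberately simplified stand-in with made-up parameters, not Edge Impulse's actual pipeline or the book's code.

```python
# Toy look under the GUI: windowed audio -> crude spectral features -> decision.
# Sample rate, band count, and the "model" are all illustrative assumptions.
import numpy as np

SAMPLE_RATE = 16_000            # assumed microphone sample rate (Hz)
WINDOW = SAMPLE_RATE // 4       # 250 ms analysis window

def extract_features(audio_window: np.ndarray) -> np.ndarray:
    """Crude spectral features: mean energy in 8 coarse FFT bands."""
    spectrum = np.abs(np.fft.rfft(audio_window))
    bands = np.array_split(spectrum, 8)
    return np.array([band.mean() for band in bands])

def classify(features: np.ndarray) -> str:
    """Stand-in for the trained model a tool would export to the device."""
    return "keyword" if features[2] > 1.5 * features.mean() else "noise"

if __name__ == "__main__":
    fake_audio = np.random.randn(WINDOW)   # placeholder for a real mic capture
    print(classify(extract_features(fake_audio)))
```

A real deployment adds the parts students only see once the band-aid comes off: DMA-driven sampling, fixed-point feature extraction, a quantized model, and fitting all of it into the device's memory budget.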

Speaker 2:

I think this is also one of those areas that makes it difficult, because edge AI and so forth is really a confluence: there's the embedded system, however big or small it is, so you need that traditional hands-on aspect; then you have the ML component; and then, as Zaid mentioned earlier, the whole domain expertise comes in. I think that's one of the things that makes us all unique: an ML engineer cannot simply come in and say, oh, you all do edge AI, I can do this. It doesn't quite work that way, because suddenly they're hit with all these crazy constraints and all these quirky things, like you're pointing out about the low-level stuff, and I think that's what makes ML systems engineers a unique breed of people who can do all this deployment.

Speaker 3:

And I mean, I'm not going to lie, staring at C code as a beginner is not easy. But I will say, if students work hard at it, it will unlock every other language for them, because if you know the fundamentals of C, you'll know the fundamentals of almost every other language, Python, et cetera. So it's definitely a difficult thing for beginners, but I think it just takes a motivated student as well.

Speaker 2:

I think there are also these high-level tools that are coming out, right, like MicroPython and so forth. Now that we know ubiquitous computing is likely on the horizon and it's possible, we're already deploying it, I think it's bound to happen that the languages naturally evolve. Rust and all these languages are also going to start transitioning over, because now there's a value proposition and there's a market that will actually pay for it.

Speaker 3:

You know, you can hire people. Yeah, and that actually bleeds into his next question that got cut off: what are your thoughts on using MicroPython and its future for developing edge AI applications?

Speaker 2:

No, yeah, it just won't be my first go-to kind of thing, because, I guess, you know, I'm like Jenny and so forth: we just rip the band-aid off and say, sink or swim, kids, this is the real stuff, this is what makes us engineers. So yeah, I tend to be on that, you know, rusty side of: well, you want an engineering title? That's what makes us special. It's like we can take the beating.

Speaker 3:

Yeah, for sure. It's so difficult, but it's the success at the end that gets you through, you know. But MicroPython, I find, is really interesting. We hear about it all the time: do you deploy with MicroPython, can you use MicroPython? I think it's definitely up and coming, but I don't know if it's going to be used in a lot of industry deployments, because in industry you really need the bare-bones code. You only have limited storage space and memory and all those different kinds of constraints, so at that point you usually need to write C or C++ level code. So I don't know, we'll see how MicroPython evolves, but I'm curious as well. Maybe your course will revolutionize that space.
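For readers curious what the MicroPython approach being discussed looks like in practice, below is a minimal, MicroPython-flavored sketch of an on-device sensing loop. It runs only on a MicroPython board (the machine module is not part of standard Python), and the pin number and threshold are board-specific assumptions (ADC pin 26 in the style of an RP2040 board), not values from the episode.

```python
# MicroPython-flavored sketch of a minimal on-device sensing loop.
# Assumes a board with an ADC-capable pin 26; threshold is made up.
from machine import ADC, Pin
import time

adc = ADC(Pin(26))          # analog sensor input
THRESHOLD = 40_000          # illustrative decision threshold on a 16-bit reading

while True:
    reading = adc.read_u16()        # 0..65535
    if reading > THRESHOLD:         # stand-in for a real model's prediction
        print("event detected:", reading)
    time.sleep_ms(100)
```

The appeal is the short distance from idea to running code, no cross-compiler or flashing toolchain to fight, which is exactly the trade-off in the conversation above: the interpreter buys that convenience at the cost of extra flash and RAM compared with bare-metal C or C++.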

Speaker 1:

Yeah, yeah, definitely. Hey, we have a few minutes left. I was going to say we should probably mention that there's a scholarship angle here with your GitHub stars.

Speaker 2:

Yeah, I was actually.

Speaker 1:

Do you want to explain that a little bit?

Speaker 2:

Yeah, I'll explain that. Let me just share my screen because, yeah, I'm super grateful to the people who support these things. So, back on our main page. So folks who know me know that I'm a very big fan of outreach and supporting it.

Speaker 2:

I often say that if you hold Google and Meta and these hyperscalers responsible for doing technology in a socially responsible way, then for education it falls on academic institutions and nonprofits to do the right thing, which is to help support education, outreach and support. So one of the things the Edge AI Foundation is doing, and coincidentally, this was not arranged this way at all, it just happened, thanks to Pete here, is actually supporting these scholarship funds. The idea is that we want to support two or three students, which is a fair chunk of change, to have them actually work on problems that are relevant to the Edge AI Foundation. So every star that we accumulate, for instance, leads to some sort of support in terms of funding behind the scenes from the Edge AI Foundation. Now, just to give a shout-out: Arduino and Seeed actually did this last year, sorry.

Speaker 2:

This year they were donating hardware kits that added up to several thousands of dollars, and we ran tinyML workshops in developing countries with their equipment. Now we're trying to progress from hardware kits to actually bringing students on board whom we can support financially, actually pay for the housing and all that kind of stuff. It's a decent amount of money, and every star we get adds up in terms of what we can spend for them to work on projects that are relevant to the industry. So, does that cover it, Pete? If not, please feel free to jump in. Yeah.

Speaker 1:

No, I think that's great. And yeah, we have a scholarship fund; I put the URL there if you want to take a look at it. We're going to be launching some fellowships in 2025, some travel grants, and underwriting workshops. So yeah, this is a great way to give back a little bit to help cultivate the next generation of AI leadership. So appreciate your support, Vijay. Cool. Well, we are a minute or so away from the cutoff on our live stream here, so any last thoughts you should leave us with, Vijay, other than go to mlsysbook.ai and start digging in?

Speaker 2:

Again, my parting thought, I would say, is that everybody wants to be an astronaut, but nobody wants to be the rocket scientist. So here's your opportunity. And I guarantee you, from a long-term perspective, there are going to be plenty of, quote-unquote, ML model developers, because the model development pipeline is becoming so easy, but there are going to be very few people, relatively speaking, who can actually build those systems. That's what it's going to come down to, being able to deploy these things, because the pipeline up front is maturing to the point where it just flows. So you should think about that.

Speaker 3:

We're already seeing that, so you make a great point. Sounds good. Yeah, thank you, and also a reminder: it's open source, so feel free to contribute to the book if you wish, before it gets turned into a hard copy. Thank you so much for joining us today, and thanks to everyone in the audience who asked such great questions.

Speaker 2:

Yeah, thank you, folks. Thank you, Pete and Jenny. Appreciate it.