The Embedded Frontier
The Embedded Frontier, hosted by embedded systems expert Jacob Beningo, is a cutting-edge podcast dedicated to exploring the rapidly evolving world of embedded software and embedded system trends. Each episode delves into the latest technological advancements, industry standards, and innovative strategies that are shaping the future of embedded systems. Jacob Beningo, with his deep industry knowledge and experience, guides listeners through complex topics, making them accessible for both seasoned developers and newcomers alike.
This podcast serves as an educational platform, offering insights, interviews, and discussions with leading experts and innovators in the field. Listeners can expect to gain valuable knowledge on how to modernize their embedded software, implement best practices, and stay ahead in this dynamic and critical sector of technology. Whether you're an embedded software developer, a systems engineer, or simply a tech enthusiast, "The Embedded Frontier" is your go-to source for staying updated and inspired in the world of embedded systems. Join Jacob Beningo as he navigates the intricate and fascinating landscape of embedded technologies, providing a unique blend of technical expertise, industry updates, and practical advice.
#006 - Decreasing Debugging, Increasing Productivity
In this episode, Jacob Beningo discusses the importance of debugging in embedded development and shares several techniques to decrease debugging time. He highlights the statistic that development teams spend 20-40% of their time debugging, which equates to 2.5-4.5 man-months of development. Beningo emphasizes the use of test-driven development (TDD) as a way to prevent bugs and decrease debugging time. He also recommends mastering debugging techniques for microcontrollers, using profiling and monitoring tools, employing assertions, and utilizing on-host simulation. Beningo concludes by encouraging listeners to track their debugging time and implement strategies to decrease it.
Takeaways
- Development teams spend 20-40% of their time debugging, which can equate to 2.5-4.5 man-months of development.
- Test-driven development (TDD) can help prevent bugs and decrease debugging time.
- Mastering debugging techniques for microcontrollers and utilizing profiling and monitoring tools can improve debugging efficiency.
- Using assertions and on-host simulation are additional techniques to decrease debugging time.
- Tracking debugging time and implementing strategies to decrease it can lead to increased productivity and innovation.
Jacob Beningo (00:00.91)
Welcome to The Embedded Frontier, the podcast where we dive deep into the ever-evolving world of embedded systems. Whether you're an experienced engineer looking to stay ahead of the curve, a newcomer eager to learn, or a director looking to understand the latest trends, this podcast is your gateway to understanding the intricate world of embedded systems. I'm Jacob Beningo, the CEO and founder of Beningo Embedded Group, an embedded software consulting and education company that helps companies and developers modernize their development processes and skills.
so that they can develop affordable, reliable, and secure products in a timely manner. Now on today's episode, I am going to be talking with you about debugging. Now you might be wondering, hey, debugging is not exactly a sexy topic. Why are we talking about debugging? It's not something that's necessarily the frontier of development, right? Well, I would actually like to argue that it is actually pretty critical to every embedded development cycle. Now,
Let me show you a statistic here on how important debugging actually is to you and your development team. First of all, the average development team typically spends 20 to 40% of their time debugging their system and their software. Now, if you were to think about that, that doesn't sound like a big deal, right? 20 to 40% of your time. Well, if you actually put that in terms of the man-months that are going towards debugging, you would actually find that that range of just 20 to 40% equals about two and a half to four and a half
man-months of development. Now, if every single developer is doing that, my goodness, how much time and productivity is being wasted debugging your embedded systems, right? And so today, what I'm going to do is I'm going to walk through a couple of different tips and tricks and techniques that you can use to help decrease your development time, particularly when it comes to debugging. OK, one of the first things, of course, that we can do is try to prevent bugs from ever getting in there. And so we'll talk about some of those techniques here today as well.
Now, just starting off, I'm going to tell you, you know, when it comes to debugging, originally I was probably one of the worst. I remember early on, when I was an entry-level engineer, I used to spend something like 80% of my time debugging my code. All right. And when you think about debugging, really, "debugging" is a very nice way of saying, yeah, this isn't working the way that it's supposed to. Right. And the more accurate way of saying it is that this is simply, you know, work that has to be redone. Right. This is failure work.
Jacob Beningo (02:24.782)
that we're essentially talking about here, right? But we say it's debugging. So we're acting like there's some external force that is coming into our systems. Some bugs are crawling in there and we're fighting with them in order to get our system to work the way it's supposed to, right? And maybe in the old days of these big mainframes when we were using punch cards and people literally, literally we're fighting bugs that were crawling into the physical mechanisms, right? Maybe back in like the 40s or 50s or something like that, in the last century. But today,
with modern embedded systems, that's just not the case. If we're debugging systems and we're spending anything more than probably 15% of our time, then truthfully, there's something wrong with our processes. There's something wrong with our understanding of the systems that we're actually building. All right? So how did I go from 80% of my time spent debugging down to, honestly today, probably less than 10% of my time? All right? Certainly there's times where I have to spend a little bit of time when I come across some big issue
that you really have to debug. But for the most part, when I'm working on a system, I'm probably spending less than 10% of my time. And the first technique that really helped me to close that gap of removing the time that I spent debugging was test-driven development. Now, test-driven development, this is, of course, an agile process. It's been around since the 90s. And what this allows you to do is, before you write a single line of production code, you're supposed to create a list of tests that you need to develop, right?
and you let the tests drive your production code development. And when I adopted this technique at first, I actually didn't think it was going to be that good of a technique. I was very skeptical about how well it would work. But of course, even though I was skeptical, I said, well, I need to learn the technique, I need to apply it. And what I ended up discovering was that, man, it really did help me: not only did it decrease the time I spent debugging, it dramatically sped up finding the bugs in the software that I was writing. And...
To be completely honest, this is pretty obvious. If you look at some of James Grenning's work, he talks about the physics of TDD and things like that. Well, if you are debugging a system, if I'm writing all my code first and then I'm testing it later to find the bugs in it, what's going to happen is some time is going to elapse. My mind, even though I like to think of it as a steel trap, is probably more like a sponge, right? It absorbs a whole bunch of stuff, and if you squeeze it just a little bit, stuff just leaks out of it, right? So what ends up happening...
Jacob Beningo (04:47.854)
is that the longer you wait to debug something, the less likely it is that you're going to remember what it was you were doing, right? And so it's going to take a little bit of time for your mind to remember what it was you were doing, how exactly you implemented that algorithm, et cetera, et cetera, right? And so you're going to end up with a little bit of extra time. And that little bit of extra time, depending on how complex the code is, could be a little bit of time or it could be a lot of time to try to figure out what's going on, right? If you have code that's interacting with several modules, it could be days for you to dig in and find what the problem is. But if you're using test-driven development,
at least at the unit test level, what we can do is we can spot the issues where our tests are failing immediately. Right. And so you end up initially just creating tests and making them pass, essentially. Right. Very simple concept. But if my test doesn't pass, it spots the bug right away in my code. Right. That wasn't the right way to implement that. Let me go back and fix that right now. I just wrote the code five seconds ago, so I know what I did wrong. Right. Likewise, over time, when I use regression tests with test-driven development.
If I write some code that breaks something, my regression tests will catch that break immediately. And then I can say, I just changed this function over here. I made these little changes on these two lines of code, and then all of a sudden my regressions broke, right? That means something in those two lines of code I just changed probably broke stuff. Let me go back and look at that a little bit more carefully. Yes, that's right. Maybe I shouldn't have made this adjustment. I didn't fully think about all the tests, all the cases that go around this.
And so maybe now I need to add a little bit more code or make a slight adjustment to the code that I was just writing. Okay. That can solve that. That right there can save you, you know, I don't want to estimate it because everybody's situation is different, but that could save you an extra 30% decrease in the amount of time that you spend debugging, just by adopting that kind of technique. Right. And of course, using test-driven development, you can from there go and do automated testing using DevOps and continuous integration and continuous delivery pipelines, right, so that you're not manually running these things.
The system is running in the background constantly making sure that all your test cases are running. Okay. And so this becomes, I think, a very important and, you know, one of the first recommendations, I think, for really helping to decrease the amount of time that you spend debugging: adopt some type of test-driven development. Now you might be saying, hey, test-driven development sounds great, but our code really doesn't fit it well. Well, you know, try starting to adopt SOLID principles. You don't necessarily have to have all of your code, you know, using TDD techniques or in a test harness.
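To make the test-first rhythm concrete, here is a minimal sketch in C. The function and its tests are invented for illustration; in TDD, the asserts would be written first, fail, and the minimal implementation would then be written to make them pass. A real project would typically use a harness such as Unity or CppUTest rather than bare asserts.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical unit under test: a saturating 16-bit add, e.g. for
 * accumulating sensor counts. In TDD, test_sat_add_u16() below is
 * written first; this minimal implementation then makes it pass. */
static uint16_t sat_add_u16(uint16_t a, uint16_t b)
{
    uint32_t sum = (uint32_t)a + (uint32_t)b;   /* widen so overflow is visible */
    return (sum > UINT16_MAX) ? UINT16_MAX : (uint16_t)sum;
}

/* The "test list" for this unit, runnable as plain asserts. A failing
 * assert points at code written seconds ago, so the fix is immediate. */
static void test_sat_add_u16(void)
{
    assert(sat_add_u16(1, 2) == 3);                   /* normal case */
    assert(sat_add_u16(UINT16_MAX, 1) == UINT16_MAX); /* saturates   */
    assert(sat_add_u16(0, 0) == 0);                   /* identity    */
}
```

Because nothing here touches hardware, the same tests run on the host as part of a regression suite, which is exactly what makes them cheap to run on every change.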
Jacob Beningo (07:13.582)
At least for new code that you're writing, try to get it into a harness and catch things as they go. Lots of us are working with legacy code and it just might not make sense to go back and write all the test cases for the stuff that already exists. But any code I write today, any new modules I write today going forward, that can be a great way to start to decrease the number of bugs, the time I spend debugging, and just make sure that my code comes out a little bit on the higher side of quality. From there, as we start to look at different things that you can do to
decrease that time that you spend debugging. I'll tell you, I personally hate debugging. It's one of my least favorite things. I just find it to be, like I mentioned earlier, failure work. It basically means I didn't know what I was doing or there was something I didn't understand. It can be just fighting with the complexity. Oftentimes it's stressful, right? We've got some deadline we're fighting against and this isn't working. Why is it not working? And you've got to dig in. And certainly, if you are going to be debugging a system,
You want to make sure you understand modern debugging techniques. If you were to go and look at what a microcontroller provides you today, there's data watchpoints, there's the ability to use the Serial Wire Debug interface to do streaming and tracing. If you're using an RTOS, you can use tools like Percepio Tracealyzer, or SEGGER has a SystemView tool that you can use that will allow you to pull the execution states of your system, even putting in your own special events
so that you can monitor how the system's behaving. And that type of thing can help you monitor your system and help you debug much faster. We can use statistical profiling. We can use the Embedded Trace Macrocell, the ETM, that's built into many processors, to be able to get a lot of trace data out of our system in real time. And advanced breakpoints, real-time debugging, all kinds of things that we can leverage. Even the use of techniques like using assertions to catch when something is wrong
is a valid technique that you can use. Breakpoints are often a very slow way to actually debug your system, okay? But you need to look at your processor and see what types of hardware-based capabilities are in there to allow you to debug your system, okay? And you've got to master those techniques for debugging, because I'm going to be honest, you're not going to get to a point where you spend zero time debugging. It's just not possible. To be completely honest, an embedded system today is just so complex that eliminating debugging isn't even...
Jacob Beningo (09:37.358)
you know, it's not even on the radar, right? But what we can do is try to minimize how much time we do spend debugging. Okay. And so from there, we want to try to prevent bugs in the first place. And then secondly, we want to make sure we know the techniques that we can use to debug a system when the time does come. All right. So those are a couple of really useful things that you can do. And every part is going to be different. You know, if you're using an ARM part, it's going to have a certain number of default hardware-based capabilities built into it. But if you're using 8-bit or 16-bit parts,
each vendor might have slightly different capabilities that they have built into their processors. And you just need to look at the parts and come to a conclusion as to what types of techniques apply for what it is that you're doing. Using the Serial Wire Viewer on an 8-bit part that's not ARM, that's probably not going to be a capability that you can leverage. But it's still good to know that these exist if you are using a 32-bit part. Now, I kind of alluded to this other part, which I think is actually a very important technique for us all to keep track of, which is...
profiling and monitoring the performance of our systems using trace tools. Before I started to adopt tracing tools, basically what would happen is I would compile my code, cross my fingers, and then I'd run it. And then I'd kind of look around at some blinking LEDs, maybe at some data that was coming in and out of the system, and pretty much crossing my fingers, I'd say, okay, well, it appears that it's working, but I don't really know.
It just, you know, from looking at it, it looks like everything's OK. So, yeah, let's ship it and, you know, keep our fingers crossed that we didn't miss anything. Right. And to be completely honest, that's not a great way to develop software. OK. Today's tools and techniques out there, like I mentioned, we can use profiling. We can actually, you know, if you're using an RTOS, if you're using even a bare metal system, you can go and get a trace tool. And what that tool will allow you to do is track and trace every event that happens in your system. OK. And it doesn't have to be every event. You can, you know, obviously
tune it based on what you want to see. If I want to see interrupts, I can get, you know, when an interrupt starts, when it finishes. If I want to see the different tasks in my system, and I want to see how often they're interrupted, how fragmented they are, well, I could trace my system and I can see, okay, task A started here, it concluded there without any interruptions. But, you know, when task B started to execute, it was interrupted by C and D, and there was a delay time there of, you know, 1.2 milliseconds before it was able to get back there
Jacob Beningo (12:02.254)
to actually start executing task B again, right? And maybe those interruptions are okay, but then again, maybe they're not, okay? The thing that a trace tool allows you to do is it allows you to see those task transitions. It allows you to see when your interrupts are firing. It allows you to get a feel for the load on the CPU at specific points in time. It lets you see how those tasks are stacking up execution-wise, what the response times are, how long they're taking to execute. It basically gives you a lot of information about that system that otherwise, you know, I don't know how you're going to get it. Blinking LEDs, you know,
and use an oscilloscope to try to measure those things. And sure, that's a perfectly valid way of doing things. But if there's tools out there that will just let you run it on a bench and then pull reports to see how your system is behaving, that's going to help you debug the system a lot better. In fact, one of the things that I often do with my systems is I set trace tools up at the very beginning of my development cycle. And I build it into my CI/CD pipelines. I build it into my check-in process. And I will constantly measure the performance and behavior of my system.
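To give a flavor of the kind of event data these tools capture, here is a hypothetical do-it-yourself trace hook sketched in C. Commercial tools like Percepio Tracealyzer and SEGGER SystemView are far more capable; every name and the record layout here are invented for illustration, and on a real Cortex-M part the timestamp would typically come from a hardware counter such as the DWT cycle counter.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical trace event: a timestamp plus a user-defined event code
 * (e.g. TASK_A_START, ISR_UART_ENTER). */
typedef struct {
    uint32_t timestamp;  /* e.g. a free-running cycle or tick counter */
    uint8_t  event_id;   /* user-defined event code */
} trace_event_t;

#define TRACE_DEPTH 64u

static trace_event_t trace_buf[TRACE_DEPTH];
static size_t trace_next;  /* total events ever recorded */

/* Record one event; the ring buffer overwrites the oldest entries, so
 * a snapshot always holds the most recent history for post-mortem. */
static void trace_record(uint32_t now, uint8_t event_id)
{
    trace_buf[trace_next % TRACE_DEPTH].timestamp = now;
    trace_buf[trace_next % TRACE_DEPTH].event_id  = event_id;
    trace_next++;
}

/* Number of valid events currently held in the buffer. */
static size_t trace_count(void)
{
    return (trace_next < TRACE_DEPTH) ? trace_next : (size_t)TRACE_DEPTH;
}
```

Calls to something like `trace_record()` would be sprinkled at task switches and interrupt entry/exit; the buffer can then be pulled over the debug port and turned into the timeline views the episode describes.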
And the reason that I do that is that if I find that, you know, I make a, let's say I'm getting ready to add a new feature and my CPU usage is at 15%. If I go and I add the new feature and suddenly my CPU is at 70%, Whoa, something went terribly wrong. Right. And before I commit that, I probably should take a look and see what feature was that I just added and what code did I write that is using so much CPU utilization. Right. And it might be that that's perfectly okay.
But on the other hand, it might be that I injected a whole bunch of bugs related to the performance of my system. And I'm going to make it a lot harder for us down the road to actually get the system to have all the features that we actually need. Right. But if I'm monitoring those things as I'm developing my software, I can spot those types of issues a lot faster. One of the biggest problems I often see is that teams don't monitor like that. And then they end up in a performance crunch where they're trying to ship their product, but their system is slow. It's kind of buggy.
You know, poor response times. And when they measure, they're using like 100% of their CPU. Well, how did it get that way? What adjustments do we need to make? Well, you have no clue if you haven't been checking all along the way, right? And so using some type of profiling and monitoring tools is just a must in today's development cycles, okay? And there are certainly many tools out there that you can use to do that. All right, so that is, I'll be honest, another big technique that I absolutely love to use.
Jacob Beningo (14:28.014)
It's basically an oscilloscope for your software, right? And you're not going to design hardware without an oscilloscope. You shouldn't be developing software without some type of tracing tool. Okay. All right. So that's going to help you. I think probably wherever you're at from a debugging standpoint, that's probably going to help you quite a bit. If I were to just pause here for a moment, one of the things that I would actually recommend for you to do, a really great action item, would be for you to start tracking how much time you're spending debugging. Okay. Where are you at? Are you in that five, 10, 15, 20,
40, 60, 80% bracket of how much time you spend? And then try to keep an idea or a feel for what is taking all of that time up. And then over time, out of those couple of techniques I just shared with you, try to identify and adopt the ones that can improve the rate at which you debug your systems. Now again, preventing bugs from getting in there in the first place is really the goal. Though oftentimes that can come from
you know, one of the techniques that I've used: not writing a single line of code before I've actually done some design work. Okay. I see a lot of teams that go out and they just start implementing features. They don't think through what it is they're designing. I love using UML diagrams, flow charts, state diagrams, just up front, just to give me a feel for what the system's doing. How does this feature interact with the rest of the application? And when I understand that, it makes it much easier for me to write code that's a little bit cleaner and actually fits some type of architectural
design and so that I don't have to keep going back and rewriting my code over and over and over again. I know a lot of times with agile, you know, one of the things is you just start throwing code out and we just keep iterating on it over and over and over again. And while that can be a great approach and I certainly adopt agile and use many agile processes in my own development cycles, I still look at it as when I write code, I really just want to try to do it once. I hate rework.
You know, again, I kind of look at it as failure work if I have to keep redoing things over and over. I don't mind doing something to get an understanding of how a system works so that I can write the right code. But, you know, doing the same thing over and over again is not exactly something that's very productive, right? So I'd recommend that before you write a single line of code, you know, maybe leverage TDD so that you have tests that are going to drive the direction you're going to go, but also have a design, have a nice plan that you've thought through. And that plan does not have to be complicated.
Jacob Beningo (16:52.654)
You don't have to spend hours and hours developing a development plan to be able to prevent bugs from getting in your system. Simply grab a napkin or a whiteboard and just kind of draw out what it is that you're thinking. If you can see it, visualize it, you might find some issues with what you're trying to do before you ever start writing code. Okay. Trying to architect a system while you write code or letting the software just kind of come out, be emergent. It's not something that I see.
as a successful endeavor, pretty much ever. Okay. Maybe really good software developers can do it, but the average developer, we need to be able to see what we're doing ahead of time. All right. So make sure that you design something up front. And again, it doesn't have to be super complex. You don't have to spend hours and hours upfront, right? Because at the end of the day, I'll be honest, action is what gets the job done. And you know, you can strategize all you want, but at the end of the day, you know, rubber's got to meet the road. The software's got to meet the hardware.
we have to be able to implement our systems. So design before you start coding. I've talked about mastering our debugging techniques on our microcontrollers. I'd also say, look at the languages that you're using. What capabilities are built in there to help you identify bugs as they occur? One of my favorite tools to use are assertions. And those can be static assertions or runtime assertions. Basically, the idea of an assertion is that at a certain point in your program,
you are expecting certain conditions to be met. And if those conditions are not met, then that's a bug in your software. Right. And you can take assertions and create design by contracts. You can validate inputs and outputs, essentially. Like I could design a function and say, I'm expecting an input for these parameters between zero and 100. And I can set up an assertion that creates a contract with whoever is using that function that says, hey, the input value here has to be between zero and 100. And I put an assertion there.
And if I hit that assertion, it means whoever is using that function, that library, they're not adhering to the contract. And so that is a bug. All right. And at that point, it's okay for a static assertion to fire off a compilation error message or, for something that's running, maybe to set a flag or to just pop a breakpoint right where the assertion is. When you're using assertions at runtime, you've got to be a little bit careful, because you could have motors spinning, you could have, you know, sensors collecting data, you could have something very interactive happening, right?
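A minimal sketch in C of what these static and runtime assertion ideas might look like, latching a fault and refusing to drive hardware rather than simply halting. The names here (`RUN_ASSERT`, `enter_safe_state`, `set_pwm_duty`) are all invented for illustration, not from any real library.

```c
#include <stdbool.h>
#include <stdint.h>

/* Compile-time contract: fails the build, never the product in the field. */
_Static_assert(sizeof(uint32_t) == 4, "unexpected uint32_t size");

/* Hypothetical fault latch: rather than halting, a failed runtime
 * assertion records what happened and puts the system in a safe state. */
static bool        fault_latched;
static const char *fault_expr;

static void enter_safe_state(const char *expr)
{
    fault_latched = true;  /* real code: stop motors, log system state */
    fault_expr    = expr;  /* remember which contract was violated */
}

static bool fault_is_latched(void) { return fault_latched; }

/* Runtime assertion that reports the failed expression as text. */
#define RUN_ASSERT(expr) \
    do { if (!(expr)) enter_safe_state(#expr); } while (0)

/* Design-by-contract example: the caller promises a duty of 0..100. */
static void set_pwm_duty(uint8_t percent)
{
    RUN_ASSERT(percent <= 100u); /* contract violated means a bug in the caller */
    if (fault_is_latched())
        return;                  /* refuse to drive hardware while faulted */
    /* ... write percent to the PWM peripheral here ... */
}
```

Calling `set_pwm_duty(150)` trips the contract and latches the fault instead of driving the peripheral, which is the kind of controlled response the safe-state discussion below is about.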
Jacob Beningo (19:18.51)
And so if that's the case, you might have to put your system into a safe state. You might have to just log what the state of the system was when that occurred, and then kind of put the system after that into a safe state. You don't just want to halt, you know, and have some bad thing happen that could be a safety issue. All right. Okay. So from that perspective, assertions can also be a fantastic tool for you to leverage. Now, one final technique that I'm going to mention to you, and I'll summarize all of these again for you and give you some homework.
to follow up on, of course, at the end of it, right? The last technique that I would highly encourage people to use, and I don't see it used very often, and I'm very actively developing content and blogs and webinars and all kinds of stuff around this idea, is the use of on-host simulation. It's certainly not a new idea by any means. Simulation has been around for decades. But I don't see a lot of embedded software teams utilizing it.
Instead, we all go straight to the hardware. We tightly couple our application business logic to the low-level hardware. And people just aren't able to debug their systems fast when they're dealing with hardware. Right. You could have some bug that occurs once every three months. Right. And it's very hard to recreate those conditions. But if you're simulating your system, you can tweak the conditions and make that thing that happens once every three months happen whenever you want to. And when you can do that,
then you can actually cause the issue to occur, figure out what the root cause is, and fix it without having to wait for three-month iterations to go by just to try to figure out what's going on, right? And with simulation, the debugging is much faster on a host than on your embedded target, okay? If you think you have an issue, you've got to go through and cross-compile. Then you have to erase your microcontroller. Then you have to deploy to the microcontroller. Then you have to go through the slow step-by-step debugging process on the microcontroller.
Right? If you have an issue though, on host, you can usually compile a lot faster. And then you can actually, you know, use host-based debugging techniques, which are a lot quicker to figure out what the issue is. Right? You remove that whole erase, flash, step-through-on-the-target cycle. That's not needed on a host. Okay. And certainly, if you have a bug in, you know, your low-level code that touches the hardware, that becomes a big issue. Right? But if you follow SOLID principles and you use hardware abstraction layers,
Jacob Beningo (21:42.638)
You can feed in your own sensor data to your application logic and you can debug the application logic a lot faster on your host using simulation techniques than you can if you're trying to debug it on an embedded system, okay? On your actual target, all right? Okay. So with those ideas in mind, are you ready to reclaim your time and boost your productivity? Right? Are you ready to take your debugging time from 40% down to 20%?
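Before moving on from the simulation point, the hardware-abstraction idea can be sketched in a few lines of C. This is an illustrative sketch only: the sensor interface, the function names, and the over-temperature check are all invented, but they show how host-side code can inject the rare condition on demand.

```c
#include <stdint.h>

/* Hypothetical hardware abstraction layer: the application never calls
 * the real driver directly, only this function-pointer type. */
typedef int32_t (*read_temp_fn)(void);

/* Application logic under test: decide whether the over-temperature
 * protection should trip. It has no idea what hardware is behind it. */
static int overtemp_tripped(read_temp_fn read_temp, int32_t limit_c)
{
    return read_temp() > limit_c;
}

/* Host-side simulated sensor standing in for the real driver, letting
 * us dial in any condition we want, instantly and repeatably. */
static int32_t sim_temp_c;
static int32_t sim_read_temp(void) { return sim_temp_c; }
```

On the target, the same `overtemp_tripped()` logic would be handed the real driver's read function; on the host, setting `sim_temp_c` recreates in seconds the fault that might otherwise show up once every three months.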
Right there, that could save you two months of work, right? Two man-months on a project. If you're able to do that, I think you really should be arguing with your bosses that you should get two extra weeks of vacation time, right? Or two weeks to do whatever you want, right? Give the business some ROI and you get some as well, right? When you increase productivity that way. Now, that might be a little bit of a joke, but it's also a little bit serious too, right? If you're dramatically increasing your productivity, everybody benefits, right? The company, the developer,
and so on and so forth. Maybe you don't want vacation time. Maybe you just want the company to be more profitable so that you can get a raise. Right. So whatever it is, let's talk about what you can do to start implementing some of these strategies today. OK. First of all, I've listed off several of them for you. So I'm going to kind of walk through these again. And what I want you to do is think through the technique that is of most interest to you that you think could help really, really raise the bar for you. Right. And what you should be thinking about here is
What's the low-hanging fruit? What could I do in the next couple of weeks, in the next month, in the next quarter that could help me go down from whatever percentage of my time I spend debugging? How do you decrease that by, you know, 25%? Is there some easy technique that you can use? Sometimes it's as simple as just designing something before you start coding, right? That could be the first one. That's good bug prevention, right? Maybe it's employing static and runtime assertions in your software.
Right. Just a little bit of extra code to kind of check things out as you develop your software. Maybe it's implementing code reviews. Code reviews are actually a great way to spot bugs. OK. You know, I didn't talk about that earlier, but code reviews right when you're getting ready to commit or merge your code can be fantastic. You know, I've even found pair programming to be useful at times, although, you know, that technique sometimes is hit and miss. Right. But, you know, that can be a good option for you as well. Certainly profiling your system for performance. Right.
Jacob Beningo (24:06.862)
Using tracing tools. That's a great one that you can use. Off-target simulation, like I mentioned; automated regression testing with CI/CD. Oftentimes that requires us to adopt test-driven development, and SOLID principles, which I highly encourage you to do anyways. But those are some of the main techniques, right? From there, you should master your debugging techniques for your microcontroller. Are there any hardware capabilities you're not leveraging? You know, the Instrumentation Trace Macrocell, the ITM, you know.
That's a great tool as well. I didn't mention that one. The ETM, the Embedded Trace Macrocell; the Serial Wire Viewer, using some of the statistical profiling there; the data watchpoints that are available. All of those sorts of things are techniques that you can think about. And so what I would recommend you do is pick a couple of those techniques I just mentioned, dig in a little bit deeper into those techniques over the next couple of weeks, and try to start implementing them in your development cycle. I think if you start implementing those strategies today,
you'll find that maybe you'll go from 40% down to 30%, and then you keep working at it until you go from 30 to 20 and then 20 to 10. Now, one last technique that I'm gonna give you today before we close is, when you do come across issues, if you have an on-premise AI tool, you can use those types of tools, or if your company lets you use ChatGPT, you can use those tools to get ideas and spot issues a lot faster than you might
normally, you know, otherwise. Okay. I'm starting to see that one of the big use cases for artificial intelligence in embedded systems, while you can certainly generate code with it, I have been finding that a number of teams that I work with are using it to debug their systems. They go in and they give it, you know, a script that they wrote or, you know, some Python code to exercise the system, or maybe there's some algorithm that isn't being very efficient, and they put it in the AI and say, hey, can you make this more efficient?
You know, those types of things. And I have started to see an uptick in the number of people using artificial intelligence for debugging. And so I would encourage you to also keep that one in mind. I know we talk a lot about artificial intelligence on the podcast, so I don't want to, you know, continue down that route, but in all honesty, I think that's a great technique that you can also leverage to help you lower your debugging time. All right. So with that in mind, I do hope
Jacob Beningo (26:28.622)
that some of these ideas and techniques are ones that you will consider adopting. You know, certainly, if you're spending more than 20% of your time, you don't want to be spending more than two man-months a year debugging software, right? We want to be delivering on time, on budget. You don't want the stress of, you know, constantly having to debug systems, and certainly there are the productivity improvements. But more importantly, if we could put more time back in our development cycle, it'll allow us to focus on innovating for our customers and delivering to them
you know, a higher quality system that they need in order to get, you know, to improve their lives as well. So, all right, with that in mind, I want to thank you for taking the time to join us today on the Embedded Frontier podcast. I greatly appreciate it and I hope you found a couple of useful techniques to take back to the office to improve the way that you develop embedded software. Until next time, I'm Jacob Beningo and happy coding.