EDGE AI POD

Enhancing Field-Oriented Control of Electric Drives with a Tiny Neural Network

EDGE AI FOUNDATION

Ever wondered how the electric vehicles of tomorrow will squeeze every last drop of efficiency from their batteries? The answer lies at the fascinating intersection of artificial intelligence and motor control.

The electrification revolution in automotive technology demands increasingly sophisticated control systems for permanent magnet synchronous motors - the beating heart of electric vehicle propulsion. These systems operate at mind-boggling speeds, with control loops closing every 50 microseconds (that's 20,000 times per second!), and future systems pushing toward 10 microseconds. Traditional PID controllers, while effective under steady conditions, struggle with rapid transitions, creating energy-wasting overshoots that drain precious battery life.

Our groundbreaking research presents a neural network approach that drastically reduces these inefficiencies. By generating time-varying compensation factors, our AI solution cuts maximum overshoots by up to 70% in challenging test scenarios. The methodology combines MathWorks' development tools with ST's microcontroller technology in a deployable package requiring just 1,700 parameters - orders of magnitude smaller than typical deep learning models.

While we've made significant progress, challenges remain. Current deployment achieves 70-microsecond inference times on automotive-grade microcontrollers, still shy of our ultimate 10-microsecond target. Hardware acceleration represents the next frontier, along with exploring higher-level models and improved training methodologies. This research opens exciting possibilities for squeezing maximum efficiency from electric vehicles, turning previously wasted energy into extended range and performance. Curious about the technical details? Our complete paper is available on arXiv - scan the QR code to dive deeper into the future of smart motor control.

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

Speaker 1:

Thank you. You may have missed the workshop that we ran this morning with MathWorks; essentially, that workshop covered this topic in detail, over a couple of hours, with Anton. This topic is fundamental in the automotive space, and I would like to mention that this is a research paper that has been accepted in the research track. The first author was my student Martin, and another key person in this work was Brenda Zwang from MathWorks; we also had the support of Martin's university supervisor, Professor Facchinetti from Università di Pavia. So on their behalf, I'm going to present this project. Why should you be interested in this? Because here we are trying to apply fixed-function AI in the context of motor control, and this is quite interesting because we need to take care of accuracy (it's not a classification problem, it's a regression problem) while serving a latency-critical system, because the solution is going to be inserted in the control loop of a permanent magnet synchronous motor. So the challenge is already demanding today, and in the future it will be even more challenging. We approached this project from a methodology point of view, and what we proposed at the workshop and in this paper was a methodology based on MathWorks solutions, namely Simulink and MATLAB and their capability to support the design of the neural network, including a number of optimizations, plus the ST Core technology deployed on the Developer Cloud to automate and make efficient the mapping onto microcontrollers, and to measure the cost of deploying such a workload directly on a physical microcontroller, a family of microcontrollers. All these details were covered in today's workshop.

Speaker 1:

So why is the topic important? Because we are focused on automotive. Currently, electrification is a macro trend, as we all know, especially in Europe; I'm not sure about the US. Electrification is challenged today, but I'm sure that the progression will follow its own trend, and the amount of silicon due to the electrification of the car will increase in the coming years, essentially along two dimensions: the battery management system and the traction inverter.

Speaker 1:

So let's go to the traction inverter. In the traction inverter, and in particular in the permanent magnet synchronous motor, you have two essential components: the stator, with a lot of windings, and the rotor, which features magnetic material. Attached to the windings there is the vector control unit. And what are the properties here? Consider that, for example, the vector control has to run at 20 kilohertz, which means 50 microseconds in total to close the loop, but the trend is to go to 100 kilohertz, that is 10 microseconds. Now you start to understand how latency-critical this type of solution is. So definitely that's a different field from image processing or audio processing; it's not 20 frames per second or 30 frames per second, it's really running at the frequencies I was mentioning before.

Speaker 1:

So, a little bit of context. We know Maxwell's equations: if we let a sinusoidal current flow in a winding, then we have a magnetic field which oscillates sinusoidally. And if we add multiple windings in which sinusoidal currents flow, then we generate multiple magnetic fields which interact with the rotor, which has its own magnetic field. So essentially we are trying to convert electricity, through time-varying magnetic fields, into mechanical energy in order to move the car, and vice versa, because it can be inverted to generate energy from the mechanical motion. And here the game is that we typically have three phases. So if you let these three magnetic fields, these three vectors, combine, then you have in space a magnetic field represented by a combined vector which essentially describes the dynamics of the field.
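
As a small aside (not from the paper), the standard result behind this picture can be checked numerically: three windings spaced 120 degrees apart in space, driven by sinusoidal currents shifted 120 degrees in time, combine into one field vector of constant magnitude that rotates at the electrical frequency.

```python
# Minimal numerical sketch (not from the paper): three windings spaced 120 degrees
# apart in space, each driven by a sinusoidal current shifted 120 degrees in time,
# combine into a single rotating field vector of constant magnitude.
import numpy as np

omega = 2 * np.pi * 50.0          # electrical frequency (illustrative: 50 Hz)
t = np.linspace(0, 0.04, 1000)    # two electrical periods
phase_angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])  # winding orientations

# Phase currents, each shifted 120 degrees in time
currents = np.cos(omega * t[:, None] - phase_angles[None, :])

# Each winding contributes a field along its own axis; sum the three vectors
bx = (currents * np.cos(phase_angles)).sum(axis=1)
by = (currents * np.sin(phase_angles)).sum(axis=1)

magnitude = np.hypot(bx, by)
angle = np.unwrap(np.arctan2(by, bx))

print("magnitude min/max:", magnitude.min(), magnitude.max())            # constant 1.5
print("rotation rate (rad/s):", (angle[-1] - angle[0]) / (t[-1] - t[0]))  # ~omega
```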

Speaker 1:

Now, the point is that if you sit on the rotor, then the rotor rotates with you. So this is a different reference system, composed of two components, the D and the Q. Obviously you want to minimize the D component. You could control the rotating magnetic field directly, but this implies that you have to control speed and torque directly through the three phases, while if you apply transformations, the well-known Clarke and Park transforms, you move from this reference system to the DQ reference system and then the control gets simplified. But then you also need PIDs. PID controllers are essentially linear controllers: with the P part you essentially weigh the current difference between the reference and the measured speed, with the integral part you get the contribution of the history, and with the derivative part you try to understand the trend.
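
For reference, here is a minimal Python sketch of the standard textbook Clarke and Park transforms and a generic discrete PID; the gains and the 50-microsecond step size are illustrative assumptions, not the controller tuned in the paper.

```python
# Textbook Clarke/Park transforms and a generic discrete PID - a conceptual sketch,
# not the controller or gains used in the paper.
import math

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform: three phase currents -> alpha/beta."""
    alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (ib - ic) / math.sqrt(3.0)
    return alpha, beta

def park(alpha, beta, theta):
    """Park transform: alpha/beta -> rotating d/q frame at rotor electrical angle theta."""
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q

class PID:
    """Discrete PID: P weighs the current error, I its history, D its trend."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, reference, measured):
        error = reference - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: one 50-microsecond control step (20 kHz loop) with illustrative gains
pid_q = PID(kp=0.5, ki=50.0, kd=0.0001, dt=50e-6)
alpha, beta = clarke(1.0, -0.5, -0.5)
i_d, i_q = park(alpha, beta, theta=0.3)
v_q = pid_q.step(reference=1.2, measured=i_q)
print(i_d, i_q, v_q)
```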

Speaker 1:

Now, the point is that this can be pushed into crisis under very challenging conditions. It may sound like a joke to you, but if you remember a guy called Michael Schumacher driving in Formula One, he used the accelerator like a switch, either fully released or fully pressed. So this made me think that maybe, for the reference, I can toggle between two states with step functions. So I designed two cases. Case one: two transitions per second in terms of speed, with step functions of increasing amplitude, and in blue, clearly, I put the PID, especially the derivative part, under hard conditions, and then you get these overshoots, which are a waste of energy. And any energy coming from, and going back to, the battery management system has to be saved; these are drops of water that we cannot waste. That's the first case. The second case: 10 transitions per second. Here we have steps and ramps, and the situation is even worse. The PID cannot handle that; you see it continues to oscillate. So how do we solve this problem methodologically?
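
Purely for illustration, here is one way such stress references could be synthesized; the actual amplitudes, durations and profiles used in the paper are not reproduced here.

```python
# Illustrative speed-reference generators for the two stress cases described above.
# The actual amplitudes and timings used in the paper are not reproduced here; these
# profiles only mimic the idea of abrupt steps and mixed steps/ramps.
import numpy as np

def case1_reference(duration_s=5.0, dt=50e-6):
    """Case 1: two transitions per second, step amplitude growing over time."""
    t = np.arange(0.0, duration_s, dt)
    step_index = np.floor(t * 2).astype(int)            # a new level every 0.5 s
    amplitude = 100.0 * (step_index + 1)                 # steps of increasing size
    return t, np.where(step_index % 2 == 0, 0.0, amplitude)

def case2_reference(duration_s=2.0, dt=50e-6):
    """Case 2: ten transitions per second, mixing step-downs and ramp-ups."""
    t = np.arange(0.0, duration_s, dt)
    segment = np.floor(t * 10).astype(int)               # a new segment every 0.1 s
    local = (t * 10) - segment                            # position inside the segment
    step_part = 500.0 * (segment % 2)                     # flat high segments
    ramp_part = 500.0 * local * ((segment + 1) % 2)       # ramps on the other segments
    return t, step_part + ramp_part

t1, ref1 = case1_reference()
t2, ref2 = case2_reference()
print(ref1[:5], ref2[:5])
```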

Speaker 1:

Here the collaboration between ourselves and MathWorks comes into the picture. Simulink is the de facto standard in automotive applications; everyone working on algorithms and control in automotive is familiar with MATLAB and Simulink. And now, with the Deep Learning Toolbox, we can design the neural network and apply a number of optimizations. But when you design a neural network, you need to be sure that it is deployable on a microcontroller, a microcontroller that can already today run the control loop at very high speed; for example, at 20 kilohertz it can close the control loop on the Stellar E1 in 10 microseconds. So we definitely need to make sure that it is deployable on an automotive-grade edge microcontroller, and optimizations are required. Which type of optimizations? Hyperparameter optimization, pruning, maybe quantization.
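
As a back-of-envelope illustration of the deployability concern, this is the kind of quick memory check one might run before training; the 1,700-parameter figure comes from the talk, while the budget value below is an assumption, not the actual flash/RAM map of the Stellar E1 or STM32G4.

```python
# Back-of-envelope deployability check before launching long training runs.
# The 1,700-parameter figure comes from the talk; the memory budget below is an
# illustrative assumption, not the real flash/RAM map of the target devices.
n_params = 1_700

bytes_float32 = n_params * 4       # weights stored as 32-bit floats
bytes_int8 = n_params * 1          # weights after 8-bit quantization

print(f"float32 weights: {bytes_float32 / 1024:.1f} KiB")   # ~6.6 KiB
print(f"int8 weights:    {bytes_int8 / 1024:.1f} KiB")      # ~1.7 KiB

# Hypothetical budget reserved for the network inside the control firmware
weight_budget_kib = 32
assert bytes_float32 / 1024 < weight_budget_kib, "model would not fit the assumed budget"
```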

Speaker 1:

And before launching huge training runs, we need to make sure that these workloads are deployable, for example, on the Stellar microcontroller. To do that we use the Developer Cloud; today we saw the application. You upload the neural network to the Developer Cloud, and on the back end there are physical microcontrollers attached, and the deployment is automatic. You get back memory occupancy figures and execution speed without writing any code. There is no emulation, just execution on the target.

Speaker 1:

So what is the solution? Essentially, you saw that there are elongations, there are overshoots between the desired response and the PID response. This generates an error, and we would like to remove this error from the response of the PID. So the AI, the neural network, needs to generate a compensation factor, you know, like in sensor calibration, and then we subtract this factor, which is time-varying, from the one generated by the PID, and hopefully the result is compensated, free from overshoot.
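
A minimal sketch of this compensation scheme, with `compensation_net` standing in for the trained network and the input features chosen only for illustration:

```python
# Sketch of the compensation scheme described above: the neural network produces a
# time-varying correction that is subtracted from the PID output. `compensation_net`
# is a placeholder for the trained model, and the feature choice is an assumption.
def simple_pi(kp, ki, dt):
    """Minimal PI controller (illustrative gains, not the paper's)."""
    state = {"integral": 0.0}
    def step(reference, measured):
        error = reference - measured
        state["integral"] += error * dt
        return kp * error + ki * state["integral"]
    return step

def compensation_net(features):
    """Placeholder for the trained ~1,700-parameter network."""
    return 0.0

def compensated_control(pi_step, reference, measured):
    u_pid = pi_step(reference, measured)        # classic PI(D) action
    features = [reference, measured, u_pid]     # illustrative input features
    u_corr = compensation_net(features)         # time-varying correction factor
    return u_pid - u_corr                       # overshoot-compensated command

pi = simple_pi(kp=0.5, ki=50.0, dt=50e-6)
print(compensated_control(pi, reference=1.2, measured=0.8))
```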

Speaker 1:

Now we need to prepare a data set. Essentially, we get the behavior from the PID and we generate an ideal answer, an ideal output, and then we train the neural network with such a curve for case one. For case two, the ground truth is generated through a compensation factor which decays exponentially, and this has been used to create the data set; today we saw how to generate that through MATLAB. The neural network is similar to a time-varying convolutional neural network, but there are no convolutions: there are dense layers, there are two branches, plus a residual connection. So in the end, fully connected layers were good enough.
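
The talk only states dense layers, two branches, a residual connection and roughly 1,700 parameters, so the following PyTorch sketch is a guess at what such a topology could look like; the input size and layer widths are assumptions picked to land near that parameter count, not the paper's actual architecture.

```python
# Illustrative guess at a "two dense branches plus residual" topology. The input
# size and layer widths are assumptions chosen to land near ~1,700 parameters.
import torch
import torch.nn as nn

class TwoBranchCompensator(nn.Module):
    def __init__(self, n_inputs=8, width=24):
        super().__init__()
        self.branch_a1 = nn.Linear(n_inputs, width)
        self.branch_a2 = nn.Linear(width, width)
        self.branch_b = nn.Linear(n_inputs, width)
        self.merge = nn.Linear(width, width)
        self.head = nn.Linear(width, 1)           # scalar compensation factor

    def forward(self, x):
        a = torch.relu(self.branch_a2(torch.relu(self.branch_a1(x))))
        b = torch.relu(self.branch_b(x))
        merged = a + b                             # combine the two branches
        out = torch.relu(self.merge(merged)) + merged   # residual connection
        return self.head(out)

model = TwoBranchCompensator()
n_params = sum(p.numel() for p in model.parameters())
print("parameters:", n_params)                    # 1,657 here, close to the ~1,700 reported
y = model(torch.randn(4, 8))                      # batch of 4 illustrative inputs
print(y.shape)                                    # torch.Size([4, 1])
```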

Speaker 1:

Good enough with how many parameters? 1,700, and about two orders of magnitude of ratio between the available samples and the parameters of the network, so that generalization can be achieved. So what are the performances in the end? On the left you see huge overshoots in case one, which are definitely mitigated by the neural network. In terms of performance, we have a 10% increase in case one on the max deviation, but in terms of average deviation and max overshoot there is a clear benefit. For the more challenging case two, it is evident that there was a decrease of less than 2% in the max and average deviation, and of 70% in the max overshoot. So the neural network is doing its work, and that's the good direction.

Speaker 1:

Then, thanks to the MathWorks facilities, we can optimize. What does that mean? Well, it is well known that hyperparameter optimization based on Bayesian optimization is the way to go, and it can really help to reduce the cost. Then projection. Projection is quite important because here, in particular, the algorithm proposed by MATLAB works through PCA: essentially, we analyze the principal components of the activations and then we decide which weights can be removed from the topology, which is quite good. And then quantization.
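
To illustrate the projection idea (this is not the Deep Learning Toolbox algorithm, only the underlying intuition), one can look at how many principal components of a layer's activations are needed to explain most of the variance:

```python
# Conceptual sketch of the "projection" idea: analyze the principal components of a
# layer's activations and keep only the directions that carry most of the variance.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are activations of a 24-unit dense layer over 5,000 samples, with
# most of the variance concentrated in a few directions (synthetic, illustrative data).
latent = rng.normal(size=(5000, 6))
mixing = rng.normal(size=(6, 24))
activations = latent @ mixing + 0.01 * rng.normal(size=(5000, 24))

# Principal component analysis via SVD of the centered activation matrix
centered = activations - activations.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

# Number of directions needed to capture 99% of the activation variance
k = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
print(f"{k} of 24 directions explain 99% of the variance")  # ~6 for this synthetic data
```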

Speaker 1:

So then, obviously, all these variants have been deployed on two kinds of microcontrollers. For industrial applications, in our proposition this is typically the STM32G4; for automotive it's the Stellar E1, with integrated flash and memory. I think the G4 runs at 160 MHz and the Stellar E1 at 300 MHz. No matter what we did, essentially in the best case it was a 70-microsecond inference time. Good enough? No, definitely not: I was saying that without AI we close the loop in 10 microseconds on the Stellar E1. So that's a first step.
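
The size of the gap can be seen with simple cycle arithmetic using the figures mentioned in the talk (300 MHz clock, 70-microsecond inference, 10-microsecond loop target):

```python
# Simple cycle-budget arithmetic behind the conclusion above, using the figures from
# the talk (300 MHz Stellar E1 clock, 70 us measured inference, 10 us loop target).
clock_hz = 300e6

inference_us = 70.0
loop_target_us = 10.0

inference_cycles = inference_us * 1e-6 * clock_hz      # ~21,000 cycles
budget_cycles = loop_target_us * 1e-6 * clock_hz       # ~3,000 cycles for the whole loop

print(f"inference: {inference_cycles:,.0f} cycles")
print(f"full-loop budget at 100 kHz: {budget_cycles:,.0f} cycles")
print(f"gap: {inference_cycles / budget_cycles:.0f}x too slow for the 10 us target")
```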

Speaker 1:

Obviously, hardware acceleration is required, especially looking forward, and therefore there is still work to do in this respect, which means developing fast hardware acceleration with very low latency, and this is something to be done in the future. And there are other challenges. People with motor control expertise wonder why we didn't use model-based predictive control: that's too complex to be embedded. And we may need higher-level models to generate data and to train the neural networks even better. There is still much work to do on traction control. So I'm done. Any questions, please? The paper is on arXiv; if you want to read all the details, just scan the QR code. Thank you.