EDGE AI POD

How Edge Matrix is Transforming Video Monitoring Through AI

EDGE AI FOUNDATION

Smart cities face an impossible challenge - monitoring countless security cameras 24/7 with human operators alone. Edge Matrix is solving this problem with innovative AI technology that transforms how urban environments approach security and monitoring.

Founded in 2019 as a Cloudian spin-off, Edge Matrix has developed sophisticated video AI solutions designed specifically for smart city applications. Their systems can continuously monitor multiple camera feeds, instantly detect anomalies, and alert security personnel when issues arise. What makes their approach unique is the comprehensive nature of their offering - they don't just provide software, but build the rugged hardware needed for outdoor deployment in challenging environments like roadways and public spaces.

The team at Edge Matrix has engineered remarkably resilient systems built around NVIDIA's Jetson platform, incorporating features like supercapacitor backups and secondary control systems that can perform complete power cycles remotely when needed. This ensures their equipment maintains continuous operation without manual intervention. Their newest product, Edge AI Station, bundles ready-made AI applications for common use cases like traffic analysis, people counting, and fire detection, making advanced AI monitoring accessible to organizations without extensive technical resources.

Edge Matrix represents the future of urban security - intelligent, always-vigilant systems that transform ordinary cameras into powerful monitoring tools. As cities continue growing smarter, solutions like these will become essential infrastructure for maintaining safety and security at scale. Want to see how AI is revolutionizing urban monitoring? Visit edgematrix.com to learn more about their innovative approach to smart city technology.


Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

Speaker 1:

Okay, great. Well, we're here today. It's kind of late afternoon for me in Bellevue, Washington, but it's nice and early in the morning in Japan, and we have some special guests and new partners, Edge Matrix, who are here. So why don't you introduce yourselves to our audience?

Speaker 2:

Okay, thank you for giving us this opportunity. My name is Hiroshi Ota from Edge Matrix. My background is that I worked at a mobile carrier before, and then I helped establish a company called Cloudian, which provides big data object storage. We spun off from Cloudian about five and a half years ago and started an edge AI business in Japan.

Speaker 1:

Got it, fantastic. And Sugihara-san, do you want to introduce yourself briefly?

Speaker 3:

Yeah, I'm Sugihara. I worked for several companies; the longest time I spent was at Intel Corporation, about 20 years, and I was also involved in Sony's system development, Sony's PCs and some others. Now I'm in charge of product development, all the products from Edge Matrix.

Speaker 1:

Okay, great, fantastic. Well, thank you for making the time. Edge Matrix joined about three or four months ago, I think, which is great. Great to have you. We have partners in Japan, we have Sony there, Renesas, and it's great to see Edge Matrix there. There's a ton of innovation happening in the community in Japan, so it's great to have you in the community. We're trying to diversify not only the types of companies involved but also geographically, and we'll get into what Edge Matrix does. You do something very interesting: you provide some really interesting solutions that leverage edge AI, and you also build things for your customers, which is kind of cool. So do you want to switch into the slides? I know you have some slides to go over a little bit of the company overview. Should we switch into that mode now? You'd need to share the slides; we can cut this out too, we'll edit it.

Speaker 2:

Okay, can you see my screen?

Speaker 1:

There you go, yep Ta-da. So yes, go for it.

Speaker 2:

So this is the company profile page. We have focused on the development of video AI, targeting the smart city market. Edge Matrix started its business in July 2019. We started our AI business, as I said, as a spin-off from Cloudian, but we needed funding to develop a deployment platform for video edge AI, so we received a total of about 20 million USD in investment from the companies shown on this slide. We are currently engaged in two businesses: one is a product business, and the other is a solution and service business. In the product business, we offer the Edge AI Box, like this, and recently we released Edge AI Station, which Sugihara-san will explain later. In the solution and service business, we use the Edge AI Box to develop custom AI systems with various video AI applications and visualization systems. We also offer ready-made AI systems to expand sales of versatile applications, so customers can immediately start using AI applications by deploying a ready-made system.

Speaker 1:

Oh, you're going to talk more about it. You're building the equipment, but you're also providing kind of a solution, software, on top as well?

Speaker 2:

That's right, yes. And this is our overall system concept and function. As I said, we are targeting the smart city market, but the large number of cameras in a smart city cannot be monitored by humans 24/365. Using video edge AI, on the other hand, allows you to monitor all cameras in real time, 24/365. The results of the video analysis are converted into small data and then compiled and visualized for use in business. But it is important that the AI issues an alert when it detects an abnormality. Our system can handle both local and remote alerts, as shown in this slide, and allows a security guard to check real-time footage from the camera where the alert occurred.
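
To make the "small data plus an alert" idea concrete, here is a minimal sketch of turning one detection into a compact JSON event and fanning it out to a local alarm and a remote endpoint. The event fields and the webhook URL are illustrative assumptions, not Edge Matrix's actual schema.

```python
import json
import urllib.request
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical remote alert endpoint; a real deployment would use its own service.
REMOTE_WEBHOOK = "https://alerts.example.com/ingest"

@dataclass
class AlertEvent:
    camera_id: str
    label: str          # e.g. "smoke", "intrusion"
    confidence: float
    timestamp: str

def send_local(event: AlertEvent) -> None:
    """Local alert: in a real system this might trigger a siren or an on-site display."""
    print(f"[LOCAL ALARM] {event.camera_id}: {event.label} ({event.confidence:.2f})")

def send_remote(event: AlertEvent) -> None:
    """Remote alert: POST the compact JSON event so a guard can pull up live footage."""
    body = json.dumps(asdict(event)).encode()
    req = urllib.request.Request(REMOTE_WEBHOOK, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    event = AlertEvent("cam-017", "smoke", 0.91,
                       datetime.now(timezone.utc).isoformat())
    send_local(event)
    send_remote(event)  # only the small JSON event leaves the edge, not the video
```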

Speaker 1:

And do you see that in your deployments? Are these kind of legacy ONVIF-type cameras that are connected into your gateway? Is that typically the configuration that you're working with?

Speaker 2:

We actually support IP cameras that use the RTSP protocol, so those can be connected to our boxes. It also supports recording, which allows you to take a snapshot and even record video onto our SSD. It's stored in the Edge AI Box, so when an abnormality occurs the user can easily download the video recording.
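
For readers curious what the camera side looks like, here is a minimal sketch of pulling a frame from an RTSP camera and saving a snapshot with OpenCV, assuming a placeholder camera URL and output path; the actual Edge AI Box recording pipeline is more involved.

```python
import time
import cv2

# Hypothetical RTSP URL; real deployments would use the camera's address and credentials.
RTSP_URL = "rtsp://user:pass@192.168.1.10:554/stream1"

def grab_snapshot(path: str, retries: int = 3) -> bool:
    """Open the RTSP stream, read one frame, and save it as a JPEG snapshot."""
    for _ in range(retries):
        cap = cv2.VideoCapture(RTSP_URL)   # OpenCV uses its FFMPEG backend for RTSP
        ok, frame = cap.read()
        cap.release()
        if ok:
            cv2.imwrite(path, frame)       # store the snapshot on the local SSD
            return True
        time.sleep(1)                      # brief backoff before retrying
    return False

if __name__ == "__main__":
    print("saved" if grab_snapshot("snapshot.jpg") else "failed")
```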

Speaker 1:

Right, okay, so it does snapshots and things like that, right? Hmm, great.

Speaker 2:

And this is our product lineup. The first important thing in realizing edge AI computing is the choice of edge hardware. In the edge environment, we deploy not only indoors but also outdoors, in locations such as roads, railways, and rivers. Such locations require highly reliable hardware that can operate 24/365 and has remote control interfaces. We are developing a hardware product called the Edge AI Box that reflects our experience in various fields. Edge AI Station is a product that provides deployment functions, as a smart city OS, on the Edge AI Box. It is much easier to handle because many functions are supported as OS functions.

Speaker 1:

These are all primarily NVIDIA based. Is that kind of the platform you've been building on?

Speaker 2:

Right, so currently we are just focused on video.

Speaker 1:

Yeah, yeah, and the NVIDIA platform.

Speaker 2:

Right, yes, supporting streaming-type processing.

Speaker 1:

Yeah, and I would assume that, like you mentioned, these have to be deployed in some kind of challenging environments, right? So, the kind of the weatherization or the….

Speaker 2:

Right. For a PoC, it's probably not so difficult, using the many tools available in open source. But if a real system is to be deployed to the market, then, for example, security, or how to operate it remotely, that kind of function is very important. Also, monitoring functions are very important. All of those things are implemented in this OS.

Speaker 1:

I see so a lot of remote monitoring, telemetry, remote control, firmware updating. I mean the whole thing can be remotely managed.

Speaker 2:

Yes, so we are very much focused on that kind of deployment platform.

Speaker 1:

Deployment yeah.

Speaker 2:

Deployment to the field.

Speaker 1:

Right, right. So you're building the hardware platform based on NVIDIA, you're adding your own software capabilities, you're building it into kind of weatherized enclosures, and then you're also deploying the solution in the market for the customer too. So you're a solution provider and a device builder; you build the whole thing, which is pretty interesting.

Speaker 2:

Yeah. And another thing we are recently developing is the Edge AI Station Alpha. This bundles some versatile applications on the OS, for example parking measurement or road traffic measurement, people counting, or smoke and fire detection. Those applications are bundled all in one in this box, so the user can immediately use the applications.

Speaker 1:

So it's kind of like a turnkey solution for a specific use case?

Speaker 2:

Exactly.

Speaker 1:

Right, makes sense. Yeah, that would definitely help accelerate deployment, especially, I think, for smaller businesses or smaller operations that want a more cost-effective solution just for their needs.

Speaker 3:

Yeah, not a lot of custom engineering there.

Speaker 1:

That makes sense, fantastic.

Speaker 2:

This is our Edge AI Box lineup.

Speaker 1:

Yeah.

Speaker 2:

We have many variations, from high-end to low-end, but currently we are much focused on these three models because very high-performance GPUs are installed. We mainly use NVIDIA's embedded GPUs, the Jetson series, and their performance is increasing very aggressively right now, so these boxes have very high performance.

Speaker 1:

Yes, I think the new one's called Thor. Is that the new one? I think that's the new chip, it's called Thor.

Speaker 2:

Right Jetson Orin series. Yes, the Orin series.

Speaker 3:

Yeah, they have Thor also, but we have not implemented it. It's a little bit more high-end; it's not a good fit for the box yet.

Speaker 1:

I see. So it's more data center, or well, maybe not data center, but I think I've seen it in some robotics platforms, some physical.

Speaker 3:

Yeah, robotics, but it's actually hot.

Speaker 1:

Yeah, pretty hot. Yeah, I can imagine the thermals on that thing. Yep, fantastic. What are some of the? I guess I had two questions for you. One, what are some of the challenges you see in kind of developing and deploying these types of solutions? What are some of the common challenges that you see? And then, I guess, the second question, which is a different question which is around, you know, are you exploring generative AI and visual language models and things like that on some of these platforms?

Speaker 2:

Okay, one is the 24/365 support. It's a little bit difficult because the AI uses a very high-level OS environment, like Linux, but sometimes the system goes down or stops. For example, a camera might stop, or a communication module like an LTE or 5G module might stop, or the whole system might stop, and then a reboot function is needed. So we developed this board. It includes a supercapacitor as the battery and a very low-power CPU. It constantly monitors the system condition, and functions like cold boot, LTE reboot, or PoE reboot are done automatically. That way, 24/365 operation becomes possible.
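
A rough sketch of the kind of watchdog logic such a secondary controller could run, with a hypothetical power_cycle() standing in for the relay or PoE control that actually cuts power; this illustrates the cold-reboot idea, not Edge Matrix's firmware.

```python
import subprocess
import time

HEARTBEAT_HOST = "192.168.1.20"   # hypothetical address of the main Jetson module
CHECK_INTERVAL_S = 30
MAX_MISSES = 4                    # tolerate brief outages before forcing a cold boot

def main_system_alive(host: str) -> bool:
    """Treat a successful ping as a heartbeat from the main system."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def power_cycle() -> None:
    """Placeholder for toggling the relay / PoE port that feeds the main board."""
    print("cold reboot: power off, wait, power on")

def watchdog_loop() -> None:
    misses = 0
    while True:
        if main_system_alive(HEARTBEAT_HOST):
            misses = 0
        else:
            misses += 1
            if misses >= MAX_MISSES:
                power_cycle()      # full power-off / power-on, then start counting again
                misses = 0
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    watchdog_loop()
```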

Speaker 1:

Got it. Yeah, sometimes they call that a dead man's switch; basically, you're able to reboot the system from any kind of catastrophe.

Speaker 3:

That's right. If you use it outside, maybe you get a very short power-down; it's very common in the real world. Of course, in that case the system is not working properly, so this one can detect that status and do a cold reboot. That means power off and power on again, and the system keeps working.

Speaker 1:

Right, so you have basically like a secondary platform to control the primary platform to actually do a complete power off reset if needed.

Speaker 3:

That's right. The important thing is that it's in the system, inside the box.

Speaker 1:

In the box. Cool, that's fascinating. And so, there's a lot of discussion around generative AI and vision language models, and using generative AI to put context around video streams. Is that something you've explored? I'm sure you've been experimenting with it. Do you have any thoughts on that, or where that's heading?

Speaker 2:

So currently we are focusing not on generative AI but on discriminative AI, like YOLO. Discriminative AI is much easier than generative AI for an embedded environment.

Speaker 1:

Okay, so discriminative AI would be, like, more precise detection for, I guess, different scenarios.

Speaker 2:

Right, right. Much more focus on dedicated, specific-purpose functions.

Speaker 1:

Right, yeah. My understanding is that generative AI models take about a gigabyte of RAM per billion parameters, so an 8-billion-parameter model, which is pretty small, would take 8 gigs of RAM. So I guess that is one of the challenges right now with running generative AI on edge equipment: the memory footprint and cost. And I see you have a couple in here with 32 gigs and 16 gigs, but that gets pretty expensive.
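
The one-gigabyte-per-billion-parameters rule of thumb corresponds to weights stored at roughly one byte each, i.e. 8-bit quantization; half precision roughly doubles it. A quick sketch of that arithmetic, ignoring activations and KV cache:

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight-only memory footprint; ignores activations and KV cache."""
    return params_billion * 1e9 * bytes_per_param / 1e9  # = params_billion * bytes_per_param

for precision, nbytes in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    # An 8B model: ~16 GB at fp16, ~8 GB at int8, ~4 GB at int4 (weights only).
    print(f"8B @ {precision}: ~{weight_memory_gb(8, nbytes):.0f} GB")
```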

Speaker 2:

Yeah, but we have also started some investigation into generative AI, so probably a hybrid of discriminative AI and generative AI might be a good solution in the future.

Speaker 1:

Yeah, excellent, wow, this is great. And so have you done deployments? I mean, primarily your business has been in Japan, right? That's been kind of your primary business, but you're expanding now to go worldwide. Is that the next big step?

Speaker 3:

Yeah, that's right.

Speaker 2:

So Sugihara-san will probably explain a little bit about Edge AI Station.

Speaker 1:

Okay.

Speaker 3:

Maybe three minutes. Okay, I'm going to explain Edge AI Station, which we recently announced. I'll show you the software stack, some of the GUIs, and some major functions.

Speaker 3:

And we have a very short video of how it works. This slide shows the software stack running on Edge AI Station. As you said, the basic hardware is Jetson Orin based. We have three models, Orin Nano, Orin NX, and AGX Orin, and on that hardware and the JetPack module we put middleware that supports the major functions: how we run the AI processes, how we operate the system itself. It's very good for deploying your AI system. Just move forward to the next slide, please.

Speaker 2:

Next one.

Speaker 3:

Yep, these are the major functions we have. Let's start at the top left and go around. It has, of course, NVIDIA-based DeepStream. The box supports DeepStream, which means you can run multiple inputs and multiple pipelines, orchestrated by the Triton server in this box. We also provide AI processor development. "AI processor" means an AI application; we call an AI application an AI processor, and you can develop or implement your own custom AI program running on this system, and we deliver the development tools to do that. Another point in the system is how you use the output of the AI processors: we put Node-RED in this system, and with that you can apply your own business logic to the output of the AI processors and route that data to other systems using Node-RED.
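
Since the pipelines are orchestrated by a Triton server in the box, a model served that way can be queried over Triton's standard HTTP inference API. A minimal sketch using NVIDIA's tritonclient package, where the server address, model name, and tensor names are assumptions for illustration rather than Edge AI Station's actual configuration:

```python
import numpy as np
import tritonclient.http as httpclient

# Assumed endpoint and model/tensor names; adjust to the actual deployment.
client = httpclient.InferenceServerClient(url="localhost:8000")

frame = np.random.rand(1, 3, 640, 640).astype(np.float32)  # stand-in for a preprocessed frame

inp = httpclient.InferInput("images", list(frame.shape), "FP32")
inp.set_data_from_numpy(frame)
out = httpclient.InferRequestedOutput("detections")

result = client.infer(model_name="people_detector", inputs=[inp], outputs=[out])
detections = result.as_numpy("detections")   # e.g. boxes, scores, class ids
print(detections.shape)
```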

Speaker 3:

This is a really different way to see how you build business logic: it doesn't really depend on the AI application itself, because we put in Node-RED. The next one is open APIs. This is really intended for communication with a supervising system, like Niagara running on a SCADA system, and from that supervising system you can control this Edge AI Station using the open APIs. So you can have many Edge AI Stations and one SCADA control system.

Speaker 1:

I see.

Speaker 3:

And of course we have backup and restore software, and we also provide software updates, security, and system monitoring, mainly for debugging. And we put in remote access. That is just for us; we use the remote access for maintenance support to the customers. For the GUI we put in a web server, and of course you can configure the device itself. Next slide, please. This is an example of the GUI. The top left shows the AI processors. This example has three AI processors, each a different AI network, and you can use each one to build a pipeline and run the pipeline on the system. The top right is the recording: you can set a trigger and record video, or get a snapshot on the trigger. The bottom left is Node-RED; maybe this is very popular.

Speaker 1:

The editor.

Speaker 3:

And you can edit the output of the pipeline and maybe route the data to an S3 cloud system and, of course, to the supervising system. The last one, on the bottom right, is metrics, which show CPU utilization, memory utilization, performance, those things. You can really use this one for debugging, and to see how stable the system is over a month, a week, or a day. And lastly, I'll give you the very short video.

Speaker 1:

Now it's running.

Speaker 3:

This is processing on just one Edge AI Station. You can see the two lines for line crossing and the two areas of the ROI setting. The yellow areas are ROIs; we put two areas counting the people within each area, and two green lines, at middle left and bottom right, where you can count the people crossing the line. Those things you can do in just one application. This is an example. We just announced the release of this product.
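
The two counting primitives in the demo, people inside an ROI and people crossing a line, can be expressed with plain geometry on tracked detection centroids. A minimal sketch of that logic, not Edge Matrix's application code:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def point_in_polygon(p: Point, poly: List[Point]) -> bool:
    """Ray-casting test: is detection centroid p inside the ROI polygon?"""
    x, y = p
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def side_of_line(p: Point, a: Point, b: Point) -> float:
    """Sign of the cross product tells which side of line a->b the point is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed_line(prev: Point, curr: Point, a: Point, b: Point) -> bool:
    """A tracked person crosses the counting line when the sign flips between frames."""
    return side_of_line(prev, a, b) * side_of_line(curr, a, b) < 0

# Example: one ROI and one counting line in pixel coordinates (values are made up).
roi = [(100, 100), (500, 100), (500, 400), (100, 400)]
line_a, line_b = (0, 300), (640, 300)

centroids_prev = {1: (200, 280), 2: (50, 50)}   # track_id -> centroid last frame
centroids_curr = {1: (210, 320), 2: (60, 60)}   # track_id -> centroid this frame

in_roi = sum(point_in_polygon(c, roi) for c in centroids_curr.values())
crossings = sum(crossed_line(centroids_prev[t], centroids_curr[t], line_a, line_b)
                for t in centroids_curr if t in centroids_prev)
print(f"people in ROI: {in_roi}, line crossings this frame: {crossings}")
```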

Speaker 1:

Yeah, this is the new Edge AI Station. This is more of a turnkey solution for different scenarios, right?

Speaker 3:

Yeah, it's really easy to start, and you can start in a short time and, of course, at really low cost compared to developing your own system. And it's not just a development platform; it's also for operating the service for customers.

Speaker 1:

Yeah, fantastic. Well, this is great. I mean, it sounds like, as I mentioned, you're building the equipment, you're doing the solution, the deployment, the commercialization, and now you have something that's more turnkey for different vertical scenarios, so people can develop and deploy AI-based solutions quickly, without having to have a big development environment and two years' worth of project. Great.

Speaker 1:

Well, I really appreciate the time. I think folks can learn more at your website, edgematrix.com, and get in contact with you. I think what you're doing here is really fantastic, and we really appreciate you being part of the foundation community.

Speaker 3:

Yeah, of course, we're looking forward to working with you, and we hope many customers will have interest in our service.

Speaker 1:

Yes, yes, and try it, definitely. Excellent. Okay, sounds good. Thank you so much, appreciate it.

Speaker 3:

Thank you, thank you very much. Thanks.