EDGE AI POD

Cleaning Our Oceans with Edge AI

EDGE AI FOUNDATION

The crisis of ocean plastic pollution demands innovative solutions, and Brad's team at Au-Zone Technologies is answering the call with cutting-edge AI technology. Their collaboration with The Ocean Cleanup has yielded a remarkable system that's transforming how we detect and map marine debris.

At the heart of this solution is ADIS (Autonomous Debris Imaging System) – a compact, rugged camera powered by edge AI that mounts to merchant vessels traveling across our oceans. Using sophisticated computer vision models running on NXP i.MX 8M Plus processors, these devices scan the water's surface, distinguishing tiny pieces of floating plastic from wave crests even in challenging marine conditions.

What makes this approach revolutionary is its scalability. Rather than requiring dedicated research vessels, ADIS piggybacks on existing shipping routes, creating an expanding network of detection points across global waters. The system operates completely autonomously, requiring minimal power and no continuous internet connection. It stores detection data onboard, uploading it only when ships return to port.

The technical challenges overcome by Brad's team are impressive – from waterproofing the hardware to withstand immersion and pressure spray to developing specialized tracking algorithms that can differentiate persistent floating debris from transient wave patterns. Perhaps most remarkable is how they've simplified deployment with fool-proof installation instructions that require no technical expertise from ship crews.

This mapping data serves a critical purpose, helping The Ocean Cleanup optimize their System 3 recovery operations in the Great Pacific Garbage Patch. Their massive 2.2km floating barrier (nicknamed "Josh") funnels debris into collection zones, but knowing where to deploy is essential for efficiency. ADIS provides that critical intelligence.

Beyond mapping, we're seeing the completion of a virtuous cycle as recovered plastic finds new life through creative recycling – including limited-edition Coldplay vinyl records made from ocean plastic. Want to see more innovations tackling our planet's pressing environmental challenges? Join us at the upcoming EDGE AI Foundation Summit in Austin or explore edgeaifoundation.org/events for opportunities to connect with leaders in this space.

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

Speaker 1:

Thank you, good morning, we're back. We're back, another EDGE AI Foundation Blueprints episode. So Davis, welcome, welcome. Thanks, Pete. Yeah, yeah, tuning in from the cold white north, but I'm in Bellevue, so, uh, we're, you know, typical 40 degrees and rainy, so it's like that classic winter scene here.

Speaker 2:

But uh yeah, yeah, we're also a week away from a pretty important event in the world of edge AI. The countdown is on: one week to the summit.

Speaker 1:

Yes, yes, Austin 2025. So that's our PSA for the morning. If you haven't registered yet, still some seats available, we'll pack them in. We have student discounts and all kinds of other deals going on, and I think there's a deal if you're in the Austin area, like an Austin local, and there's also one for UT students too. But, yeah, it's going to be great. We have like 60 sessions, like five workshops, three panels, two live streams. You know, part of Junipera Tree beer and barbecue. Of course, I don't like that stuff. The other thing we did too as a PSA is we put out a call for papers for Milan, Milan 2025. So we'll be there July 2nd through the 4th, and that'll be pretty exciting.

Speaker 1:

Um, yeah, we'll be at Embedded World, of course. I'm sure NXP will have some giant pavilion there.

Speaker 2:

Oh yeah, oh yeah. We do our thing there, and I mean we're excited about the summit being in their backyard. So I think a few folks from the Austin office out in Oak Hill will be on the way down; we have a big facility down there. Yeah, it'll be my first time seeing it next week, but I've heard it's there, so we'll be there. Yes, cool, yeah.

Speaker 1:

No, it's gonna be a lot of fun. So that's kind of just the last-minute stuff. We're also going to be at Computex InnoVEX in May, so we're going to have the whole community there. That's Taiwan, I believe. Taiwan, yeah. So if you go to edgeaifoundation.org/events, or just go to our page and hit Events, you'll see a whole interactive map of stuff going on in the edge AI community. Some of it's our stuff. That's kind of like the place, and so if you're into this stuff and tuning in, go there and, you know, go to one of those things, because there's nothing like spending some in-person time, you know, with tech and seeing some demos and hearing some talks and stuff.

Speaker 2:

I've heard Computex. I've been there in 2019.

Speaker 1:

It's described as the CES of Asia, which is, I think, a little over the top. Not as big as, like, Embedded World or any of the shows in Germany and stuff, but it's pretty intense. And InnoVEX is part of that, which is a lot of the startups, right, in Taiwan and stuff, and so, yeah, it's definitely a look ahead on what's kind of hot and what's happening. So definitely worth it. But today, a little closer to home, actually, one of your Canadian colleagues here. We should bring on Brad, because we're going to talk about what they're doing with you guys and NXP, and also with the ocean. So I think that'll be exciting, right?

Speaker 2:

Yeah, I think it's not just the ocean in general, but The Ocean Cleanup, I think, is the organization that they're working with, keeping the oceans clear.

Speaker 1:

Well, I'll let Brad do it the most justice, yeah. Ta-da! Hey, Brad. Magic of StreamYard.

Speaker 4:

Well, that's pretty awesome. It's like a live.

Speaker 2:

TV production. Pete's the producer. We're just the anchor and the guest, just clicking buttons. Cool, you're welcome.

Speaker 4:

Thank you.

Speaker 1:

It's good to be here and you're calling in from Canada as well.

Speaker 4:

Yeah, I'm calling in from Western Canada. I'm in Calgary.

Speaker 2:

My hometown.

Speaker 4:

Oh yeah, I didn't know that.

Speaker 2:

Yeah, yeah, born there. Small world indeed. We punch above our weight in AI.

Speaker 1:

That's Canadian, so yeah, yeah, yeah, you guys do all right. Cool. So, Brad, you're the CEO of Au-Zone Technologies. How long has Au-Zone been around, and am I pronouncing it correctly?

Speaker 4:

We pronounce it Au-Zone, with that sort of French angle on the AU. Got it? That's me.

Speaker 1:

So how long has Au-Zone been around as a company?

Speaker 4:

It's been around for 20 plus years in a variety of forms.

Speaker 1:

Yeah, awesome, awesome, cool, great. And so the big topic today is kind of how you guys are... I mean, I saw actually the demo, I think it was the demo at CES when we met. Oh right, yeah, the NXP pavilion, another pavilion.

Speaker 4:

That was the radar fusion demo, so that was a little bit different than what I'm going to talk about today. That demo was all about the low-level fusion of radar data and vision data for spatial perception. Awesome. And what we're going to talk about today is a project that we've been working on in collaboration with a group called The Ocean Cleanup, out of the Netherlands, and it's a vision-only AI application. Okay, okay, and you guys are building the full solution, end to end?

Speaker 1:

Are you the device builder?

Speaker 4:

But you're doing the whole thing, the whole stack? A whole stack, so the hardware, the optics, the vision system, the AI, the tools, some of the data backhaul, device management. So it's soup to nuts, really.

Speaker 1:

Um, okay, cool, for this particular application. And I assume you have, like, developers that are, you know, skilled in all these different things in-house as part of your team. I mean, you've built out a full, full-stack team? We do.

Speaker 4:

We have a whole team here in Calgary. All the developers are here, and many have been with us for, you know, 10-plus years through this AI journey. Yeah, back before there was this thing called AI that we talked about all the time. Yeah, we were working on some of this stuff before TensorFlow became public.

Speaker 2:

That's right, 10 years ago now. Yeah, November 2015.

Speaker 4:

I remember it because we were building tools and then all of a sudden they pulled the curtains back on that and we're like, all right well, I guess we're not going to do that.

Speaker 2:

Then PyTorch when it came, 2016, I think, and then TensorFlow Lite in 2017. I mean, you guys have seen that, I'm sure it'll be embedded in the story of Au-Zone, but you guys have seen quite the life cycle of AI models and tools and journeys.

Speaker 4:

Yeah, we sort of had a front-row seat through it, and it really has been a journey. I mean, there's so much, as you guys know, everybody in the audience knows, it's just a ton of activity. Every time you turn around there's something new popping up.

Speaker 1:

Exactly. I'm going to take a moment to encourage folks that are tuning in to share any questions you have in the live stream. So whether you're a YouTuber or a Twitcher or a LinkedIn-er or a StreamYarder, you know, ask questions along the way. Brad's going to dig into some stuff, and so feel free.

Speaker 2:

Yep, Pete and I are here to bring those up. The comments section is pretty useful and, yeah, this is your chance to talk with, you know, the world experts, as you can tell, in these types of topics.

Speaker 1:

So definitely take advantage of that opportunity. Cool. So why don't we bring up Brad's slides? He's going to go through a couple of slides here and stuff, and Davis and I will kind of go into the background a little bit. But yeah, feel free to jump in with questions and comments and we'll try to make this a little interactive. So, Brad, if you want to carry on.

Speaker 4:

All right, thank you. So a very brief intro on Au-Zone. What we're all about is AI-defined spatial perception, really for machines and robotic systems, so off-road equipment, things that are operating not on highways, not in urban environments, but typically in off-road environments where it's very dynamic and uncertain. And so The Ocean Cleanup project fits into that context a little bit, in that we were mapping where plastic debris is in a marine environment, in the ocean, from shipboard. And what I'll talk about today is the technology that we developed to solve that problem, ultimately what the end game was environmentally, how it all fits together for The Ocean Cleanup group, and get into a little bit of detail on the underlying technology as well. A quick view of the ship that they run out of the Port of Victoria, there's two like this, and then the agenda. So again, a brief company overview, the environmental challenge, ADIS, which is the acronym for the actual technology, or the camera system, the development of that technology, and then outcomes and status and what's next, and a little bit of time at the end for Q&A.

Speaker 4:

So, a quick video on our EdgeFirst stuff. Can you guys hear the audio? Thank you. So the applications. That gives you an overview of the technology that we bring to bear on these kinds of problems, and it covers a variety of different applications, from object detection and classification, multi-class detection, so there can be several different types of objects that the customer's interested in, tracking of those objects, as well as occupancy grid maps, so finding where they are in a 2D plan view of the space around the vehicle, or the ship in this case, and then two-and-a-half or 3D tracking, so understanding spatially where the objects are, where they're moving, what the direction, path of travel is, and velocity and so on. Typically, our customers are using this technology for either passive operator awareness or for active control of the vehicle, and so, again, the project we're going to talk about today, The Ocean Cleanup, is more about just mapping.
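A minimal sketch of the occupancy-grid idea mentioned above, assuming detections arrive as vehicle-relative range and bearing with a confidence score; the cell size, coordinate convention and field layout are illustrative assumptions, not the actual EdgeFirst pipeline:

```python
import numpy as np

def update_occupancy_grid(detections, grid_size_m=40.0, cell_m=0.5):
    """Project detections into a 2D plan-view grid centred on the vehicle/ship.

    detections: iterable of (range_m, bearing_rad, confidence), vehicle-relative.
    Returns an (n, n) array: 0.0 = free/unknown, otherwise best confidence in the cell.
    """
    n = int(grid_size_m / cell_m)
    grid = np.zeros((n, n), dtype=np.float32)
    centre = n // 2                              # vehicle sits at the grid centre
    for range_m, bearing_rad, conf in detections:
        x = range_m * np.cos(bearing_rad)        # forward, in metres
        y = range_m * np.sin(bearing_rad)        # port/starboard, in metres
        col = centre + int(round(x / cell_m))
        row = centre + int(round(y / cell_m))
        if 0 <= row < n and 0 <= col < n:
            grid[row, col] = max(grid[row, col], conf)
    return grid

# example: one object 12 m ahead, slightly to starboard, detected at 0.9 confidence
grid = update_occupancy_grid([(12.0, -0.1, 0.9)])
```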

Speaker 4:

We work on construction and mining equipment, materials handling, so forklifts, precision ag, getting into unmanned ground vehicles, AMRs, autonomous mobile robotic systems, and industrial robotics. So the common thread throughout all of these pieces of equipment or these vehicles is that they require perception to either make the operator aware, to simplify their workload, or even make them fully autonomous. Just want to make sure you guys can still hear me, Pete. Are you guys still there? Okay, good. So we've got these two different platforms, the vision platform and the radar-vision fusion platform, which we call Raivin, and in combination the modules help accelerate projects to allow people to get to market more quickly and gather data in their particular application or in their particular domain. The perception engine is the software stack that runs on the platform, and so this is where the AI is baked into the technology stack.

Speaker 4:

We've got a number of different data sets, mostly around pedestrian detection, but also other off-road objects that you'd find in construction sites and so on, and then we had to develop a whole data set for The Ocean Cleanup, which we'll dig into in a few minutes, and an MLOps tool or platform, a SaaS-based platform, that helps accelerate labeling of all of this data so you can train the engine and actually deploy the engine back to the hardware and evaluate its performance. So just a quick comparison of the different platforms. We've got a new HD version of the Raivin coming out as well, which is a higher-definition radar that gives you better spatial perception from the radar data. Some example projects that we've done using this technology: the visual mapping one, top middle, is the one we're going to dig into today. We're probably at 25-plus missions now, across more than, I think, probably 10 ships, and I don't know how many nautical miles. I'll show you some maps and we'll get a handle on that.

Speaker 4:

So the challenge that The Ocean Cleanup set out to solve is to clean up plastic litter, plastic debris that's floating on the surface of the ocean, which in turn eventually breaks down and turns into microplastics and gets into the food chain and causes all kinds of problems for the aquatic life, for the environment, and ultimately for us as well. UNESCO, or the United Nations, has a set of challenges; these 10 challenges are for the ocean specifically, and the three that are being addressed by The Ocean Cleanup are challenges one, seven and eight. I can't quite read them, I don't remember them off the top of my head, but of the 10 that are highlighted by this organization, the Ocean Decade, there are three that are particularly in the sights of what The Ocean Cleanup is working on. So another quick video to introduce their efforts and their perspective on the challenges.

Speaker 5:

So, at The Ocean Cleanup, our mission is to rid the world's oceans of plastic, the real, complicated part of this puzzle. What we need to do is two things. One, we need to clean up the legacy pollution. Two, we need... Everyone ready?

Speaker 4:

If you have the motivation and you have the skills, we'll get there.

Speaker 5:

The first few years, we really focused on trying to understand the problem and map it, which then became the basis on which we developed our solutions. Our philosophy is to learn as we go and iterate our way towards success.

Speaker 2:

Good luck. We've just passed the Golden Gate Bridge, the iconic moment. Every hour is a small victory, and every plastic piece that we catch is a small victory.

Speaker 5:

So we have a major structural failure. This is normal. Welcome to the world of cutting-edge technology. Plastic is within arm's reach, literally. We can't be that far away from getting a system working. Whoa, here, 12 fucking particles also. What is the next logical step? Boyan always thinks about improving. With this machine, I truly believe that we're on the right track. A dirty river comes in, a clean river comes out. That's a teddy bear.

Speaker 4:

This is the day, this is the moment, finally. Time has come now to deliver on our promises.

Speaker 5:

It's so beautiful to see this. Whoa, the most iconic thing in the world to me. We have shown that we are capable of repeatedly harvesting large amounts of plastic. This marks the beginning of the end of the Great Pacific Garbage Patch. Imagine if we do get this done, we could truly make our oceans clean again.

Speaker 4:

That's the future to look forward to. So that sort of sets the context for the mission that these guys have undertaken. And as we get into the specifics of the technology, ADIS, their Autonomous Debris Imaging System, is what we're going to talk about, and the bottom line is it's a way to map where the plastic debris is in these oceans so that they can go out and recover the debris. So the objectives, as we look at it objectively, in the rearview mirror, are really to detect, classify and map the floating debris. And some of these objects are very large but submerged, like nets; others are very small and close to the surface, broken pieces of plastic that remain buoyant. And then finding a method that's scalable and economical over many years to map such a large expanse; it's pretty challenging to map the oceans. So, fundamentally, we're trying to improve the accuracy and precision of the computational models that The Ocean Cleanup creates or has generated, so that they can find hotspots and optimize the recovery of plastic debris, and then improving the reliability of predicting these hotspots so that they can zero in on maximizing plastic debris recovery and minimizing costs, because these missions to go and recover the debris are quite expensive. And The Ocean Cleanup is divided into two main groups. There's one group that looks after the rivers and is trying to stop the debris from getting into the oceans through the river systems, and then the group that we work with is the ocean side, and their objectives really are to recover debris that's already in the ocean or that's getting into the ocean. So you look at the problem from an imaging perspective and a technology perspective.

Speaker 4:

The main challenges were finding a platform, or developing a platform, that was suitable, had the appropriate amount of edge AI to do the detection and classification of the objects, which, again, are small and sparse in this vast ocean. The imaging system itself, so the optics, the sensor, all of that sort of stuff. The physical design, the mechanical enclosure, was challenging for this particular environment. Real-time inference that's fast enough to detect objects: again, high-resolution camera, moving ship, moving objects, all of this happening at the same time, and again, the debris itself is very small relative to the total field of view of the camera. And then methods to acquire the position of the ship and estimate the location of the object relative to the ship, taking into account the pitch and roll and velocity of the ship while it's moving in this environment. And we discovered the hard way that distinguishing plastic debris from wave crests was particularly challenging. The white wave crests looked very similar to a lot of plastic, so lots of false positives from that. So we had to find methods to overcome and resolve that.

Speaker 4:

Before we got engaged with The Ocean Cleanup, they had also tried some other methods, some early surveys. Just volunteer ships doing manta trawls: they pull these nets and collect plastic debris and count it and estimate where it was found. They did a couple of visual aerial surveys with these aircraft, so the person in the top right is actually visually watching for plastics and trying to classify it as they fly along in the aircraft. And so they did a number of studies and a number of surveys to do that. And then they began gathering data with a GoPro and a PC running a simple model offline to post-process the data, or the images, that they gathered with the GoPro. So that was a proof of concept that The Ocean Cleanup put together in 2018. And shortly after that, in 2022, is when we became involved with them to take that prototype and sort of turn it into a more commercial camera system.

Speaker 4:

They've also continued to do other work, hyperspectral satellite imaging, to try to see if they could do it from satellite. The data available there, the resolution, is not high enough to actually detect the types of plastic and the sizes of plastics that they want to detect. So satellite wasn't a possibility, or isn't right now. And then UAVs and drones: they continue to do work in that arena but haven't found a path forward yet to, you know, fully map the globe.

Speaker 4:

So ADIS itself is this little camera that you see on the right-hand side there, hanging off the railing of one of the ships, and it has all the AI, the LTE modem, GPS, IMUs, all built on board, and so it does all the compute locally. And what it does is classify the objects that it sees and map them, so you get the size, the class (there's about half a dozen different classes of objects), the latitude and longitude, a timestamp and a confidence. So this is the metadata that you get out of a detection, and the system can save a small region of interest, if we choose to set it up to do that, so it can be audited by human effort after the fact, or it can just save the metadata. And it can also pass along information so that we can collect and improve the data set over time as we run these surveys. I think there's a couple of questions coming up. Yeah, just a couple.
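As a rough illustration of the per-detection metadata listed above (class, size, latitude/longitude, timestamp, confidence, optional region of interest), a record along these lines could be logged onboard; the field names and JSON-lines format are assumptions, not the actual ADIS schema:

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class DebrisDetection:
    """One detection record; field names are illustrative, not the real ADIS schema."""
    object_class: str                # e.g. "hard_plastic", "net", "animal"
    size_m: float                    # estimated characteristic size in metres
    latitude: float
    longitude: float
    timestamp_utc: float             # seconds since the epoch
    confidence: float                # 0.0 - 1.0
    roi_path: Optional[str] = None   # small image crop, saved only when auditing is enabled

def to_json_line(det: DebrisDetection) -> str:
    """Serialise as one JSON line for the onboard log that gets uploaded in port."""
    return json.dumps(asdict(det))

print(to_json_line(DebrisDetection("hard_plastic", 0.4, 33.95, -142.70,
                                   time.time(), 0.87)))
```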

Speaker 1:

Yeah, I had a couple of questions here. First of all, so the ADIS camera, this is attached to just any kind of shipping vessel? The idea is, like, you get these fleets from around the world that are just container ships, but they happen to have ADIS cameras on the outside?

Speaker 4:

Exactly. That's the whole scaling strategy: to make it easy for merchant vessels to strap these things onto the side, right.

Speaker 1:

So you're basically using just the general shipping fleet for scanning. Because, as you mentioned, right now it still has to be sort of at the ocean surface level. You can't do satellite, you can't really do drone stuff, so you're sort of crowdsourcing, I guess, detection, right?

Speaker 4:

That's a good way to put it. Yep, and we're just piggybacking on all the merchant vessels, or as many merchant vessels as The Ocean Cleanup can arrange.

Speaker 1:

That's cool, yeah. I mean, I guess satellite would be pretty amazing if that would be possible at some point, but I can't imagine.

Speaker 4:

Yeah, I don't think commercial satellites, commercial data, has the resolution. There might be some other satellites in the system that we don't know about.

Speaker 1:

We don't know about, but yeah, yeah, yep, yep, that's cool, okay. And then the other question: after ADIS it says six watts, so I assume that this is not... Is this like a solar-powered thing, or does it need ship power, or is it self-powered, or what's the...?

Speaker 4:

It's ship power. You could solar power it, but it's ship powered, and the power constraint is really around safety and certification. So, you know, fire safety and stuff like that; on these ships there's a lot of regulatory red tape and, you know, these ships are managed by many different companies, right? And so, keeping it low power, easy to install, basically no setup other than you need to tell it the height above the water.

Speaker 1:

I see Okay. So yeah, the deployment needs to be kind of braindead simple and I guess, like you said, there's probably a lot of safety enclosure regulatory issues around putting stuff on ships out there, Right?

Speaker 4:

And then there's the little subtlety of not recording when you're within economic zones along the shoreline. Different countries have different rules and regs, so you actually have to monitor where you are on the planet and shut it down when you're in an economic zone.
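A minimal sketch of that geofence check, assuming the exclusive-economic-zone boundaries are carried onboard as simple latitude/longitude polygons; the real boundary data and the logic ADIS uses aren't described in the episode:

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (lat, lon) vertices."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        lat_i, lon_i = polygon[i]
        lat_j, lon_j = polygon[j]
        crosses = (lat_i > lat) != (lat_j > lat)
        if crosses and lon < (lon_j - lon_i) * (lat - lat_i) / (lat_j - lat_i) + lon_i:
            inside = not inside
        j = i
    return inside

def recording_allowed(lat, lon, eez_polygons):
    """Suspend detection logging whenever the ship is inside any economic zone."""
    return not any(point_in_polygon(lat, lon, poly) for poly in eez_polygons)

# example: a toy rectangular "zone" hugging a coastline
zone = [(48.0, -125.0), (48.0, -123.0), (50.0, -123.0), (50.0, -125.0)]
print(recording_allowed(49.0, -124.0, [zone]))   # False: inside the zone, stop recording
```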

Speaker 1:

I see. But in terms of capturing: you're capturing the metadata on the camera, like you're not capturing any video streams per se. You're analyzing the streams, you're capturing the metadata, and then the metadata goes up to the cloud.

Speaker 4:

That's right. While we were doing development and trials, we were bringing data back. We were, you know, remoting into these devices over Starlink, and so in instances like that we were able to download data, improve our data set, monitor what was going on in the devices and basically do, you know, a development cycle. But in the deployed version it only calls home when it gets into port. It's got an LTE modem on board, so when these things come into port, or along the coastline, sometimes we get a phone call from the coast and it writes a little bit of data to shore.
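A hedged sketch of that store-and-forward behaviour: detection metadata is spooled to local storage and only flushed when a link is available. The file path, the connectivity probe and the upload hook are hypothetical stand-ins, not what the production device actually does:

```python
import json
import os
import socket

QUEUE_PATH = "/var/lib/adis/detections.jsonl"   # hypothetical on-device spool file

def queue_detection(record: dict) -> None:
    """Append one detection to the onboard spool; nothing leaves the ship yet."""
    os.makedirs(os.path.dirname(QUEUE_PATH), exist_ok=True)
    with open(QUEUE_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def link_available(host: str = "example.org", port: int = 443, timeout: float = 3.0) -> bool:
    """Cheap connectivity probe; a real device would check the LTE modem state instead."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def flush_queue(upload_fn) -> None:
    """Called periodically: upload and clear the spool only when the link comes up,
    which in practice means near the coast or back in port."""
    if not os.path.exists(QUEUE_PATH) or not link_available():
        return
    with open(QUEUE_PATH) as f:
        for line in f:
            upload_fn(json.loads(line))
    os.remove(QUEUE_PATH)
```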

Speaker 1:

Yeah, oh, interesting. So it sort of saves its upload for when it's close to shore.

Speaker 4:

Yeah, in that mode it's primarily metadata; not shipping data, not shipping images, right.

Speaker 1:

Okay, Davis, you had a question? Yeah, just a quick one.

Speaker 2:

I mean, I don't want to mess with the flow, but when we're talking about data, what does the future of this look like? Do you have a need to kind of scale up or automate the process, speed that up to, I don't know, widen your range, right, in the searchable area? Is there any angle for unsupervised learning here? Does multimodal gen AI stuff have a hand in this at all in the future? Or is it really domain-specialized and you need labelers, basically, for capture?

Speaker 4:

It's kind of a bifurcated answer; there's two answers to that.

Speaker 4:

The marine researchers and environmentalists, they want an instrument that's stable. They don't want something that's shifting all the time. They want something predictable, even if it's a little bit wrong. They want to say these hundred cameras had the same load, the same AI model, the same results, so they can have some credibility to their marine research, as opposed to an ever-changing, always-updating model that sometimes goes off into the weeds and comes back. Okay, well, from a purely sort of engineering, technology-development perspective, I think there is opportunity for those kinds of improvements. I don't think there's going to be any learning at the edge for this particular device, it doesn't have the compute, but we do have partially automated systems to bring back the data.

Speaker 4:

I mentioned earlier that you can get a region of interest, like a small snippet, and so the system is smart enough to capture that and classify that. If it's got a low confidence, for example, we'll save those and push them up, and then we can optimize the model over time.
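A minimal sketch of that low-confidence audit path: every detection's metadata is reported, but only the uncertain ones also get a small image crop queued for later review and retraining. The threshold, padding and callback names are assumptions for illustration:

```python
def handle_detection(det, frame, push_metadata, save_crop, low_conf=0.5, pad=16):
    """det: dict with at least 'confidence' and a pixel 'bbox' (x0, y0, x1, y1).
    frame: the full camera image as a NumPy array of shape (H, W, 3)."""
    push_metadata(det)                           # metadata is always reported
    if det["confidence"] < low_conf:
        x0, y0, x1, y1 = det["bbox"]
        h, w = frame.shape[:2]
        crop = frame[max(0, y0 - pad):min(h, y1 + pad),
                     max(0, x0 - pad):min(w, x1 + pad)]
        save_crop(crop, det)                     # queued for upload, audit and retraining
```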

Speaker 2:

Cool, that sounds like a blueprint to me. I mean, you guys are brushing up against a few different industries, a few different bodies, and this is I mean, that's the stuff I love, right? It's like where does the engineering and the domain need to intersect?

Speaker 1:

So, I appreciate that answer. That was cool, yeah. And I guess also, like you mentioned the drone stuff, I mean, I guess there could be an ADIS drone at some point that launches itself from the ships and really increases the surface area of visibility. Does that make sense?

Speaker 4:

Absolutely, and they could be from shore, they could be ship-borne. It's a logistics problem, really. So if you want to take advantage of the merchant vessels, there's no staff on board who are going to manage that.

Speaker 1:

Yeah, it needs to be, like you said, deployment needs to be kind of peel-and-stick, plug-and-play. Exactly, yes.

Speaker 4:

Yeah, you know, there's many languages spoken by the ship hands, so the goal was to have a one-page, IKEA kind of installation guide that's visual, no words. Put it on, turn the screw, plug it in, walk away. Cool, that sounds good, all right.

Speaker 1:

Well, we'll let you get back to it. We appreciate you taking the questions.

Speaker 4:

Okay, so a quick overview of the timeline. So the folks at The Ocean Cleanup had done these initial tests, and that diagram there, that image, is a GoPro just mounted to the railing of the ship. And then we built a crude prototype with our pre-existing hardware platform, which was about the size of a shoebox, and it had all the GPS and everything inside, as well as the AI compute and the imaging, and that showed that we could build, you know, a commercially viable camera system. And it was just sort of a straightforward engineering project from there to shrink it down in size and build an enclosure that would actually survive these kinds of environments, and ultimately iterate on the data set and the model training and the model design. We had to improve the model design a lot as well for this high-resolution, small-object-size, sparse scenario. So for The Ocean Cleanup, the project is, you know, 2018 to present and ongoing. Our engagement really started around 2021 or 2022. My buttons aren't working anymore. There we go, okay. A couple of better pictures of the system. So you can see the GoPro and a generic PC (I don't know what they actually used), ADIS 1, and the guts of it down here, and so this was just an off-the-shelf enclosure that we used, the kind for a security camera.

Speaker 4:

The first iteration of the hardware, what we call Maivin, is ADIS for The Ocean Cleanup, and so this is the guts of that, and the basis, or the compute, for this platform is an NXP i.MX 8M Plus processor. Some of the specs are down there. So we've got about two TOPS of AI compute on that platform, and we're using a Toradex SOM, a system-on-module, that has the 8M Plus on board, and that's inside the ADIS camera itself. So that's all the compute there is in this particular platform, and then on top of that there's some other ancillary elements like the GPS and the IMU, the optics, of course, the enclosure design. So this isn't really AI, but it was an interesting exercise to go through a number of different ideas on form and shape, into the design phase and ultimately the tooling and plastics and the aluminum enclosure. Lots of testing, so full immersion: it's hard to see from that diagram, but that's a one-meter column of water that we kept submerging the thing into until we were able to resolve some leaks. Pressure spray: of course, people are cleaning the decks and they don't use a mop anymore, lots of pressure spray. And then we also had some various drop tests to evaluate mechanical stability, shock and vibe for the actual equipment.
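For a sense of what running a detector on the i.MX 8M Plus NPU typically looks like, here is a hedged sketch using TensorFlow Lite with NXP's VX delegate, the route NXP's eIQ documentation describes for their Yocto images. The model file name and delegate path are placeholders; Au-Zone's own runtime and models are proprietary and not shown in the episode:

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Hand the graph to the NPU via the VX delegate (path as on NXP's standard BSP images).
delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")
interpreter = tflite.Interpreter(model_path="debris_detector_int8.tflite",
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def infer(frame_rgb: np.ndarray) -> np.ndarray:
    """frame_rgb: uint8 array already resized to the model's expected input shape."""
    interpreter.set_tensor(inp["index"], frame_rgb[np.newaxis, ...])
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])
```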

Speaker 4:

Once it's deployed, we're able to, either over LTE or, as I mentioned earlier, Starlink, actually log in to the device, any individual device, change the configuration, understand where it is on the planet and get a whole bunch of basic information about memory usage and temperature, historical as well as current. So, like any IoT device, you can log in and check the stats and see what's going on. For this we use something from Toradex called Torizon. It's part of their software package that's bundled with the SOM. It's a really powerful platform for monitoring devices and, you know, in the middle of the Pacific, over Starlink, we were again able to get access to these things, and it was almost like it was on the benchtop next to you doing development. You could SSH in, you could do your development, we'd put new loads on, we'd update the model, all kinds of stuff. So that was a pretty powerful development cycle, given that these ships were out in the middle of the ocean. While these ships were out doing their thing in the Pacific garbage patch, we were using that as a test bed to collect data, and these screenshots are from our MLOps tool, a SaaS tool that we use to gather data and label the data. We've got a bunch of foundational models in the tool chain that auto-label data for segmentation, as well as 3D spatial localization that we use for the radar piece, and then you can do audits. It's a multi-user environment, it's an enterprise-level kind of software package that allows many stakeholders to operate on the data, to train the model, to run experiments, to iterate on the platform and then deploy to the device when they're ready. These are just some quick snapshots of the data we collected and the types of things.

Speaker 4:

So the bottom right is a seagull; there's lots of seagulls out there, they get detected, so there's a class called animals. Nets, or fibrous materials: that's probably, by mass, the greatest amount of debris. It's hard to see because it's like an iceberg, it's mostly underwater, and so if you get the right angle and the right sunlight you can see it, but otherwise it's challenging in choppy water. And then just plain, straight-up plastic: you know, bins that fall off the back of fishing vessels, things like that, bobbing around out there. So, many millions of images and objects, a lot of work to label it, which is why we had to develop the tooling to assist, or AI-assist, to accelerate the process of labeling.

Speaker 4:

So just a quick snapshot of some of those assisted pipelines, and so if somebody has some other class of objects in another environment that they want to label and train a model for, these tools help accelerate that and help offload the amount of human time required to do that work. We can take data in many different forms, radar, lidar, imaging data and so on, and then the tool itself spits out a variety of different 2D boxes, 3D boxes, so different label types, as well as tracking. And then there's a whole CI/CD kind of process baked into the tools where you can have audit, and there's a trail of who labeled, who audited, what the approval process is, and ultimately you can determine where all your data came from and what the process was to get that data into a form to train the model. So there's some predictability about how the model was actually trained and developed. Again, some more visualizations: the bottom left is lidar and radar data, the top is plastic debris and segmentation of the debris. These are just images, what the raw images look like from the camera on the ship, and they're turned 90 degrees. So if you can imagine a portrait shot off the starboard or port side, if you turn that 90 degrees upright, the bow wave is what you see on the right-hand side of those images, and then the horizon is off to the left, and there's an object, obviously, in each of those things where the little square boxes are. We also get mapping. So these devices have GPS on board; we can tell where the ship is at any given time, and we also have the ability to estimate where the object is relative to the ship. I'll get into that in a bit. This is just out of Victoria, where these ships were operating out of for the last couple of years. So in the MLOps platform there's a number of different workflows, and these are just screenshots of the training experiments that people ran on these different models, different data sets. A bunch of snapshots of the actual data: there's lots out there, and to look at this, you know, they all look the same, but when somebody with a trained eye looks at it, you can see there's different classes. The model itself, the AI model that we're running on this thing, has evolved quite a bit over the last three or four years to be able to resolve, solve this problem at, again, high resolution, with small, few-pixel objects when they're far away, in this kind of lighting and environment. So just a quick snapshot of the evolution of that model itself. Lots of testing on model performance: within these data sets we have ground-truth data sets that are holdouts that we can use to test the models and iterate on the model design, and as we bring in more data and improve the model through retraining, we can measure our progress and measure improvement over time.
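The holdout testing described above boils down to matching the model's boxes against ground-truth boxes and tracking precision and recall as the model is retrained. A minimal per-image sketch with greedy IoU matching, thresholds chosen arbitrarily for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def precision_recall(pred_boxes, gt_boxes, iou_thresh=0.5):
    """Greedy one-to-one matching of predictions to ground truth for one image."""
    matched, tp = set(), 0
    for p in pred_boxes:
        best_i, best_iou = None, 0.0
        for i, g in enumerate(gt_boxes):
            if i in matched:
                continue
            v = iou(p, g)
            if v > best_iou:
                best_i, best_iou = i, v
        if best_i is not None and best_iou >= iou_thresh:
            matched.add(best_i)
            tp += 1
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - len(matched)
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall
```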

Speaker 4:

The picture on the right is a good one. It depicts sort of the scale that we're talking about. These two ships, with the nets behind to collect, they travel through the water, I think, about a kilometer and a half apart from one another, and there's an object there in the middle of all that speckle that the camera on the leftmost ship picked up. And so this is the kind of environment where we're trying to find plastic, and again, some of it is quite small, you know, below a meter, and we're trying to distinguish between wave crests and reflections and what is actually real plastic. So to do that we had to add tracking of the objects. Wave crests come and go; plastic debris that's floating doesn't have the same behavior. And this is a little bit hard to see, but if you look in the bottom right you'll see an object being tracked as it travels along, and I'll play it a couple of times so you can see it. You can see the wave crests come and go, and they look similar but they behave differently, and so it's that difference that we had to build into the model and into the object tracking, to distinguish between a wave crest and an actual piece of debris.
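One simple way to capture the behavioural difference described above is to gate reports on track age: an upstream tracker assigns IDs frame to frame, and an object is only reported once its track has survived several frames, which wave crests rarely do. The thresholds below are illustrative, not the tuned production values:

```python
class PersistenceFilter:
    """Report a track only after it has persisted; drop tracks that vanish quickly."""

    def __init__(self, min_age=5, max_missed=2):
        self.min_age = min_age        # frames a track must survive before being reported
        self.max_missed = max_missed  # frames a track may go unseen before it is dropped
        self.tracks = {}              # track_id -> (age, missed)

    def update(self, detected_ids):
        """detected_ids: IDs matched this frame by the upstream tracker/associator.
        Returns the subset old enough to be reported as persistent floating debris."""
        for tid in detected_ids:
            age, _ = self.tracks.get(tid, (0, 0))
            self.tracks[tid] = (age + 1, 0)
        for tid in list(self.tracks):
            if tid not in detected_ids:
                age, missed = self.tracks[tid]
                if missed + 1 > self.max_missed:
                    del self.tracks[tid]        # transient: most likely a wave crest
                else:
                    self.tracks[tid] = (age, missed + 1)
        return [tid for tid in detected_ids if self.tracks[tid][0] >= self.min_age]
```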

Speaker 4:

One of the other challenges was to try to estimate roughly where the object is relative to the camera. So we've got a classical computer vision algorithm using homography. We know the characteristics of the lens and the characteristics of the system. We know the height off the water and we can assume that the water is relatively flat. We can do horizon detection and to accelerate that we use the IMU on board to determine the pitch or the roll of the ship and then bring in horizon detection when it's not a gray day and from all of that information you can estimate where the object is relative to the camera. And you know where the camera is because of the GPS and so we get a higher resolution estimate of the mapping, the physical location of the object at that moment in time. Of course it moves over time in the ocean, but in that moment in time, we know roughly where it is in the water.
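The flat-water geometry outlined above can be sketched like this: with the camera's height above the waterline, its focal length, and the horizon row (from horizon detection, or predicted from the IMU pitch on grey days), the image row of a detection gives a depression angle and therefore a ground range. This is an illustrative simplification of the homography mentioned, not the actual production algorithm:

```python
import math

def ground_range_from_pixel(row, horizon_row, cam_height_m, focal_px, pitch_rad=0.0):
    """Estimate the distance to a point on an assumed-flat water surface.

    row          - image row of the detection (pixels from the top of the frame)
    horizon_row  - row where the horizon sits in the same frame
    cam_height_m - camera height above the waterline, entered at installation
    focal_px     - focal length in pixels, from camera calibration
    pitch_rad    - residual pitch correction from the onboard IMU
    """
    depression = math.atan2(row - horizon_row, focal_px) + pitch_rad
    if depression <= 0:
        return float("inf")    # at or above the horizon: no intersection with the water
    return cam_height_m / math.tan(depression)

# example: detection 180 rows below the horizon, camera 18 m up, ~1400 px focal length
print(round(ground_range_from_pixel(980, 800, 18.0, 1400.0), 1))   # roughly 140 m
```

Combined with the ship's GPS position and heading, that range and the detection's bearing give the rough latitude/longitude that ends up in the map.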

Speaker 4:

So, a number of different ships that we were on, gathering data and testing, through April and into October of 2024. These were different missions, and these are the ships. So, everything from small tall ships, on the bottom right, to the Hyundai Glovis sailing out of South Korea, which is probably the biggest. The Ocean Cleanup has a working relationship, or a partnership, with Hyundai Glovis, and Hyundai Glovis is the shipping partner to the Hyundai car manufacturing company, and they have about a dozen ships, and so we're on board some of their ships. We'll see some of that data in a minute. These two ships here, they're called anchor ships, so they're work ships. These are the ones that sail out of Victoria. These are the ones that are pulling the nets behind right now to gather the data in the Great Pacific Garbage Patch, and I've got a couple of videos on that in a minute.

Speaker 4:

So, the outcomes of all of this work: there's fundamental marine research that these folks are doing, and they're sharing the data that they're recovering from these cameras so that other people can continue to do that work, or they can collaborate on that work. This was a particular study they did for the Norwegian Environment Agency on mapping debris around Norway. That was done off of the ship you see in the top right-hand side. These are just videos, or sorry, this is a map of the Great Pacific Garbage Patch. It's a bit hard to see because of the angle, but this is Alaska up here.

Speaker 4:

This is the Port of Victoria in Canada, and the Great Pacific Garbage Patch is roughly halfway between San Francisco and Hawaii, down here, and so there's a higher density of plastic in this area, and that's where these recovery ships were going out to collect the data, or sorry, to collect the plastic debris, and we were using them as a platform to collect the data at the same time as we developed and improved the model.

Speaker 4:

Hyundai Glovis sails out of South Korea, and this is one of its routes. The reason why this is a dashed line is because, when it's dark, the camera can't see the plastic debris and so we get no sightings, and then during daylight hours we get more sightings. So that's the whole route that it took on this particular mission or voyage, and that's why there's gaps in the data. That's, on the left-hand side, one of the fellows that goes out and helps do the install. This is on the Glovis, and I think the person on the right was one of the ship hands. They were installing the camera on the railing there, you see.

Speaker 4:

There's different ways to represent the data as it's collected. So across the bottom here you can see the density of the data over different periods of time; that's time across the bottom, and then a number of different ships that it was on, starboard and port side, and so on. So, the recovery missions: these are the two ships I was talking about, the anchor ships, and that's the net in the back. It's rather large, and there's a quick video here on how it works. They call it System 3.

Speaker 5:

We are here in the Great Pacific Garbage Patch with our latest system, System 3, also known as Josh, and Josh is basically a 2.2-kilometer-long floating barrier. The biggest part of this system is made up of two wings, four meters deep, that skim the ocean surface for floating plastics, and those plastics float along the wing to a central area which we call the retention zone. And the retention zone is basically a big garbage bag where all the plastics are collected. And this system is being towed through the ocean by two vessels, like the one I'm on now, and they tow this system at a very low speed, 1.5 knots, or a little bit less than 3 kilometers an hour, so it's slower than you would walk. So we always see a lot of fish swimming into the system and swimming back out, and in this retention zone we also have a lot of cameras so we can see what's going on.

Speaker 5:

We always try to find the spots here in the Great Pacific Garbage Patch with the highest plastic density, and we call those hotspots. We have computational modelers; they look at waves and currents and they are getting better and better at predicting where in this vast ocean the plastic density will be the highest. And after we've been out here for three or four days and have been collecting plastics, our garbage bag is getting full, so then we are going to do an extraction. And whenever we do an extraction, one of the two vessels towing the system takes both wings on board so that the other vessel is free to go to the rear, and it will then tie off the garbage bag and pull it on deck. We open it up and we dump all the plastics that we found on the deck.

Speaker 5:

After the extraction, we are sorting the plastics for different types of waste according to their different recycle streams, and we put the system back in the water and we continue towing for plastics.

Speaker 4:

So what's next? Scaling up deployments. These are the cameras in production in our facility here in Calgary. So we're doing a build right now to deploy over the next few months on ships, and finding creative ways to use the recovered plastic. And this is one of my favourites: you guys may know the band Coldplay. They've got a collaboration going with The Ocean Cleanup as well. Thank you. That's it. That's the presentation. So, happy to take some questions.

Speaker 2:

How do I get one of those records? Warner Brothers Canada? I ordered one. I figured yeah, but I'm still waiting for it.

Speaker 4:

I'm actually a little bit torqued about it.

Speaker 1:

It's really amazing. I was going to ask about, in terms of deployment... I mean, this is crowdsourcing, like you said, sort of detection of these plastic objects. Is there any... I guess, like, cruise ships and stuff. I guess you could deploy it on cruise ships as well too?

Speaker 4:

Absolutely. That one, the Norwegian study, was on a cruise ship, and so merchant vessels, cruise ships... I think probably merchant vessels are better.

Speaker 1:

There's fewer people to fiddle with it. That's true. Yeah, people on cruise ships get bored and you're like, hey, what's this thing? Yeah, what's that camera doing there? And using the pitch and the roll in your calculations of detection, that's pretty cool.

Speaker 4:

So I think someone's geeking out on the calculus there. For sure, and to do it at frame rate, with variable data. So the variable data is the horizon, because on a gray day the ocean looks exactly like the sky at a distance and you can't actually distinguish the horizon. And so there's another model there to try to find the horizon.

Speaker 1:

So this is like a good... I mean, Davis is saying blueprints are all about patterns, and so this is a pattern, this is the pattern to detect patterns. Basically, you're doing anomaly detection at scale using a kind of crowdsourcing of cameras, and in this case it's plastics.

Speaker 1:

But, you know, certainly think about other ways of using it. And this is also a great example of edge AI where, obviously, yeah, you know, you need to be doing the calculation and the counting in the device itself and getting the metadata up at some opportune moment, like when there's connectivity, right? Yep. But yeah, it's interesting.

Speaker 4:

It's a great edge AI example, because you know there is no cloud connectivity in the middle of the Pacific or it's expensive, let's say at best.

Speaker 1:

Yes, exactly. Yeah. I mean, it's not practical, but yeah. So you could imagine, like, how do we think about crowdsourcing? Someone had a comment here, I'll show it, but I think the question was around using LEO satellites. I don't know if that makes a difference in terms of using LEO to do image detection.

Speaker 4:

You know, is there more accuracy there, or something that would work better than the kind of traditional satellites? It could be. I can't really comment. The folks that did the work, they did it with the assistance of the European Space Agency and some of the satellite data that they had from their satellites, and I actually don't know what orbit they're in. Yeah, yeah, but I love this. And they were hyperspectral cameras, so they were trying to determine plastic type and trying to get a better view of submerged plastics as well.

Speaker 4:

Yeah, but it really came down to the resolution of the imaging system at that distance that they're operating at. And they could see large groupings: sometimes the wind will drift lots of plastic together, and they can see that grouping of plastic, but they couldn't see individual pieces, right?

Speaker 1:

Well, I would assume that, though it's like if you see one big piece of plastic, there's probably a bunch of other plastic around there. I don't know. Pause, yeah.

Speaker 2:

Those pictures were pretty stunning, or not stunning, but stirring, shocking, when I saw them, yeah. A question, a bit of a hardware-related one. So the i.MX 8M Plus, I believe, is the NXP SoC you're using. Do you guys leverage the NPU, the neural processing unit, on that device for any inferencing?

Speaker 2:

A hundred percent, heavily, yeah. Okay, okay, that thing's pinned. Good, good. Yeah, I mean, it's often a case of, you have some powerful multi-core CPU systems and for ease of prototyping you can leverage that. But I was looking at some of the calculations and I was thinking about the use case; you probably are putting that thing into overdrive.

Speaker 4:

It's maxed, for sure, and we're running a couple of different models to do different things, and high-resolution models. That's probably the most challenging part: it's high-resolution images, and then running these proprietary models on those images.

Speaker 2:

That leads me to, I think, my personal last question. But what's the exportability or portability of this pipeline you've built? You've built some cool stuff, like the MLOps, the modeling, training your own models, you just mentioned. Pete was alluding to it: there's plastics, there's other stuff. What does that look like? Or is that on the Au-Zone roadmap, to take what you've built here? Maybe you're doing it in other domains already, but what are your thoughts on the transferability, let's say, of this stack to other use cases?

Speaker 4:

It's highly transferable. We've run it on a number of different platforms, SoCs, and so the inference engine that we run on, in the case of the i.MX 8M Plus NPU, it's the one that we developed going way back to when we were beginning this project years and years ago with the IP vendor. VeriSilicon is where we started, and that's the IP that's in that particular SoC. But we've run it on other NPUs, and typically we use the other SoC vendor's tool chain in that instance. And then, looking forward...

Speaker 4:

We've also developed this fusion engine where we're taking the spatial information, or the radar, the low-level radar data cube, from the radar modules and fusing that with the RGB data from the imager. And so for safety and autonomy for off-road equipment that's operating in dusty environments, and for the same reason you put ADAS on a car, these other off-road vehicles want to take advantage of that technology, and so we're running those models also on the 8M Plus NPU. Nice, very cool, interesting.

Speaker 1:

We got a little shout-out here from Robert just saying he likes this, so I just want to give him credit there. Cool, nice, excellent. Well, Brad, really appreciate the talk. I think, you know, this is a pretty profound problem, and it's a really interesting engineering problem too. Like, you guys are really thinking about, like I was saying, my term, crowdsourcing, sort of this AI vision and detection at scale. I mean, there's no bigger scale than the ocean, you know? It's that vast.

Speaker 4:

It's amazing.

Speaker 1:

So it's like, oh, we can see this one little thing. So I think it's a really cool example of kind of all this technology coming together for obviously a very important purpose, and what you guys have built over a decade at this point is pretty impressive. So, really appreciate it.

Speaker 4:

It's AI for good. We get all these stories about AI taking over the world, so I think it's time to save the world. Some good, some good stories.

Speaker 1:

Yeah, definitely, definitely. Well, super, super appreciate it. Cool. Well, I think, with that, Davis, any last questions or comments? Or Brad?

Speaker 2:

and stay warm in Calgary. Yeah, stay warm.

Speaker 4:

No, yeah, that's it. Thanks, guys, appreciate it. And for everybody else that joined, any questions, shoot me a note, or check out edgefirst.ai and theoceancleanup.com for more info on what they're doing. And have a good time in Austin. All right, yeah, thanks. Thanks for joining the show. Thanks.

Speaker 1:

Thanks. Talk soon. Bye-bye. All right, bye-bye.