What's Up with Tech?

Revolutionizing Imaging: Ubicept's Innovations with Single-Photon Technology Transforming Machine Vision and Beyond

Evan Kirstel

Interested in being a guest? Email us at admin@evankirstel.com

What if your camera could capture images with the precision of counting individual photons? Sebastian from Ubicept joins us to reveal how his groundbreaking work with single-photon avalanche diodes is setting a new standard in imaging technology. We promise you'll gain insight into how this leap beyond traditional CMOS sensors tackles critical issues like motion blur and low-light visibility, paving the way for revolutionary changes in machine vision and automotive applications. Sebastian introduces us to FLARE, their cutting-edge camera development kit, developed with hardware partners to stretch the limits of what's possible in the world of imaging.

In this episode, we explore the transformative potential of Ubicept's latest innovations and their implications across diverse fields. From a one-megapixel color sensor that vastly improves on previous models to its promising applications in automated runway inspections, this technology is poised to redefine industry standards. Join us as we discuss the potential partnerships that will marry AI, software, and hardware, enhancing the capabilities of these sensors even further. Tune in to discover how these advancements are set to reshape both human and machine interfaces in profound ways.

Support the show

More at https://linktr.ee/EvanKirstel

Speaker 1:

Hey everyone on the heels of CES, fascinating guest talking about camera development and reinventing imaging with Ubicept. Sebastian, how are you?

Speaker 2:

Good, how are you?

Speaker 1:

I'm doing great. I'm a little tired from CES. There was a lot of walking, but it was a fascinating event. Before that, maybe introduce yourself and the journey at Ubicept and what's the big idea behind the company.

Speaker 2:

Yes, so Ubicept. I was a postdoc at the University of Wisconsin, and we did a lot of research into processing individual photons into extremely high-resolution, high-quality image data that can see in all environments, and this was mostly on the processing side. And we learned that the hardware we were using was actually ready to be introduced into consumer devices like smartphones. That was the final nudge, I would say, that we needed about three and a half years ago to spin out the company, because we saw that nice intersection between being able to process individual photons and also having the hardware available.

Speaker 2:

And one thing that gets confirmed over and over is that users of camera hardware have a ton of problems. In an environment like this, the camera works okay, but once I start moving my hand too fast there's motion blur, and seeing in low light combined with motion is a huge problem; that's really the customer pain point we are solving. There's also seeing in bright and dark environments at the same time, which is called high dynamic range. So now we see this, I would say, threefold intersection: a customer pain point, our algorithms getting ready, and the hardware becoming available at cheap price points.

Speaker 1:

Fantastic. So you have a developer environment, a set of tools as well. How is your approach different from the traditional camera development kits out there, the existing status quo in terms of technology?

Speaker 2:

Right, so that's an excellent point. Existing cameras, mostly CMOS sensors, work on one principle: they capture light over a given amount of time, called the exposure time, and they sum up all the light. The problem really starts when things move in the meantime; that can lead to motion blur if the motion is too fast. And what also happens is that these sensors conventionally add electronic noise to the frames every time they read out, and that's a huge problem. So what we are doing is working with a new hardware technology.
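As a rough illustration of that summation model (a toy sketch, not any real sensor's behavior; the scene, pixel values, and noise level are made up for the example), summing light over the exposure window smears anything that moves, and readout then adds noise on top:

```python
import numpy as np

rng = np.random.default_rng(0)

def cmos_exposure(scene_over_time, read_noise_sigma=2.0):
    """Toy CMOS model: sum all light arriving during the exposure
    window, then add electronic read noise once at readout."""
    summed = scene_over_time.sum(axis=0).astype(float)
    return summed + rng.normal(0.0, read_noise_sigma, summed.shape)

# A bright dot moving one pixel per time step while the shutter is open:
scene = np.zeros((5, 1, 8))
for t in range(5):
    scene[t, 0, t] = 100.0

image = cmos_exposure(scene)
# All 5 pixels along the dot's path light up: summing over the exposure
# turns motion into a streak (motion blur), plus noise on every pixel.
```

The single read at the end is why long exposures blur and short exposures drown in read noise, the trade-off described above.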

Speaker 2:

These are single-photon avalanche diodes.

Speaker 2:

I think they have been around since the 60s, so many decades, but these were previously single-pixel devices, the quantum efficiency wasn't very high, and there were other problems. Only in the last couple of years have semiconductor makers found ways to make these individual pixels smaller and arrange single-photon-sensitive pixels, SPADs, into high-resolution arrays. What these arrays do differently is time-tag each photon with an extremely high time resolution that can go down to nanoseconds. Just to give you an idea, that's a billionth of a second; in a nanosecond, light travels only about 30 centimeters. So these sensors time-tag all the photons, which means all the information the light contains is available. That's great, of course, because it enables us to see in all environments, but the downside is that you end up with a plethora of data, gigabytes to terabytes of data volume every second, and not all of that is usable. This is where Ubicept processing technology comes in, essentially turning each individual photon detection into high-quality video in all environments.
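To make the time-tagging idea concrete, here is a minimal sketch (illustrative only, not Ubicept's pipeline; the random photon stream, 32x32 resolution, and frame rates are invented for the example) of binning time-tagged photon detections into frames after capture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical photon stream: each detection is (x, y, t), t in nanoseconds.
n_photons = 100_000
xs = rng.integers(0, 32, n_photons)
ys = rng.integers(0, 32, n_photons)
ts = rng.uniform(0, 1e9, n_photons)  # one second of capture

def photons_to_frames(xs, ys, ts, shape, frame_ns):
    """Bin time-tagged photon detections into frames after capture.
    Because every photon keeps its timestamp, the frame rate (and the
    exposure boundaries) can be chosen in post-processing."""
    n_frames = int(np.ceil(ts.max() / frame_ns))
    frames = np.zeros((n_frames, *shape), dtype=np.int64)
    f = (ts // frame_ns).astype(int)
    np.add.at(frames, (f, ys, xs), 1)  # count photons per pixel per frame
    return frames

frames_30fps = photons_to_frames(xs, ys, ts, (32, 32), frame_ns=1e9 / 30)
# The same photon stream re-binned at 1000 fps, with no re-capture:
frames_1kfps = photons_to_frames(xs, ys, ts, (32, 32), frame_ns=1e6)
```

The point of the sketch is the flexibility: no photon is ever summed away at capture time, so exposure becomes a software decision, at the cost of the raw data volume mentioned above.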

Speaker 1:

Brilliant. Well, I was a double-E student, so I think I got most of that, but probably some of the audience is lost. But tell us about use cases and applications, because there are cameras in everything now. We think of smartphones and physical cameras, but they're in robots, they're in our cars; I have multiple cameras in my glasses now. So your audience of developers is enormous.

Speaker 2:

Yes, and thank you for that question. It's really amazing because, I think I've heard the number, there are 45 billion cameras in the world, which is a huge number, as you said. But the thing is that most of them are not actually designed for humans to look at the images; there are machines connected to them instead. Think of robotics, think of advanced driver assistance systems, and so on. So the person, if you will, looking at these images is not really a physical person but a computer that is expected to make certain decisions off of that. And the interesting thing is that poor input quality can really lead to bad perception, bad camera perception.

Speaker 2:

About the sensor technology itself: we don't make the sensors ourselves, that's important to highlight, but we think this technology will replace conventional sensors in the future. To name a few applications, machine vision is a big, important example that we're looking into. We had a lot of people from the automotive industry stop by our CES booth, because seeing reliably in low-light environments is really a big pain point. For example, as far as I know, there are new regulations coming about detecting pedestrians in the dark, and obviously, if you don't have reliable sensory perception, then it's really difficult, sometimes almost impossible, to detect each of these pedestrians. People then resort to really complex technology like LiDAR, for example, or maybe thermal cameras, but that adds additional complexity in terms of cost, sensor integration and so on. So really, we want to enable machines to make reliable decisions in all environments.

Speaker 1:

Fantastic mission. And you introduced a new camera development kit called FLARE, which is a five-letter acronym; I won't even try to tell the audience what it means, but performance and innovation, I think, are the big drivers there. Maybe talk about FLARE and the big idea behind it.

Speaker 2:

Yes, it stands for Flexible Light Acquisition and Representation Engine. We partnered with a small hardware partner, which is really awesome to have by our side, and we built our own camera around that.

Speaker 2:

So, we don't want to be in the business of making cameras at scale; we just want to demonstrate what this technology can do. It is one megapixel in color, so it can capture a lot of data. The goal with this camera is to capture all these single photons, then replay them and show people what the sensor, and this technology as a whole, can actually do for them. Future implementations will be much, much smaller, much cheaper, and integration-ready.

Speaker 1:

Brilliant, so it's really intriguing. Do you care to share any customer examples or anecdotes or use cases? I know I have cameras in my glasses right here. I don't know what you can talk about in terms of where you're designed in, but anything you could share might be interesting.

Speaker 2:

All right. So this is something we're just getting started with, and this one-megapixel color sensor is a huge stepping stone forward; previously, these sensors were black and white and low resolution. We're not fully in that space yet, because we just rolled out this camera at CES, but we have a lot of interest. As I said: the automotive industry, machine vision, inspection, that kind of thing, that is really getting to the core of it, and specialized imaging applications where it's really about image quality in all environments, think of automated runway inspection and so on. These companies can afford a higher price point and therefore allow us to mature the technology.

Speaker 1:

Amazing. So, beyond the kit itself, how do you see your technology evolving over this year and next?

Speaker 2:

In terms of...?

Speaker 1:

AI integration or the software or other hardware partnerships. What are you excited about?

Speaker 2:

I think we will see a couple of exciting partnerships announced in the next 12 months. What I'm also really excited about is that big sensor manufacturers have already started working on this technology. The iPhone Pro models, for example, have a single-photon avalanche diode built in as part of the LiDAR solution. It's only a low-resolution sensor, and it's already a couple of years old, but we see the trend in research papers from these big companies that the pixel size is getting smaller.

Speaker 2:

One thing that is also important to highlight is that Canon already has a 3.2-megapixel color camera on the market, and I assume we will hear more about this technology in the next couple of years. And that, I think, speaks to the original intent when we started Ubicept, because processing these individual photons is really challenging. But now we see this hardware becoming available, and over time we think single-photon sensors plus Ubicept processing will replace conventional CMOS sensors.

Speaker 1:

I'm a drone enthusiast so I have a bunch of different drones and the camera technology is evolving massively, even on the consumer hobbyist side. We're seeing LiDAR now in consumer drones, so I imagine lots of use cases there. What is your go-to-market like? Do you license the technology as well from the IP side? Do you have patents that you license, or is it all sort of OEM kind of customers?

Speaker 2:

Right, yes, excellent question. So I think eventually, yes, this will be some licensing model, and we're using this FLARE camera to get people excited about the technology and let them test it. It's somewhat big as a form factor, but the eventual product implementations will get much smaller. So eventually, licensing.

Speaker 2:

One thing that is also happening, and might be worth mentioning, is that our processing technology actually works with conventional CMOS sensors as well. I mentioned before that conventional sensors still add read noise every time they read out one frame: one image is taken, and then the sensor itself adds noise. But that read noise is getting smaller, and reading out CMOS sensors at high frame rates is also something we can work with; we can crunch that, for example, into extremely high-quality 30-frames-per-second output.
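Here is a toy sketch of that idea (illustrative only; the frame rates, noise level, and static scene are assumptions, and a real pipeline also needs motion alignment): averaging an N-frame high-speed burst of a static scene cuts the read noise by roughly the square root of N:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed numbers: a static 16x16 scene read out 32 times per output
# frame (e.g. ~960 fps raw merged into 30 fps), each read adding noise.
scene = np.full((16, 16), 50.0)
read_noise_sigma = 4.0
burst = scene + rng.normal(0.0, read_noise_sigma, (32, 16, 16))

merged = burst.mean(axis=0)  # one output frame from 32 raw reads

noise_single = (burst[0] - scene).std()
noise_merged = (merged - scene).std()
# Averaging 32 independent reads shrinks the read noise by about
# sqrt(32) ~ 5.7x relative to a single raw frame.
```

This is why shrinking per-read noise matters: the cheaper each readout is in noise terms, the more aggressively a processing stack can trade raw frame rate for output quality.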

Speaker 2:

So that is something that we see as well. I think single-photon sensors will still eventually be the future, because they operate in a much, much better way, but as a stepping stone, in the meantime we can also work with existing sensors, and we see a lot of interest there as well. People see the awesome videos on our website and think, well, this looks cool, how does it work, how can we get our hands on this? Then they learn that these are mostly single-photon avalanche diodes, and they ask: okay, does your processing maybe work with existing sensors? That has happened a good number of times, so we're deeply engaged with a couple of companies to get our processing onto, first, a small number of all these existing cameras out there. But in the future, we are pretty confident that these will be single-photon sensors with Ubicept processing.

Speaker 1:

Brilliant. Well, show and tell. It's always about showing, not telling. So I love that you have a real video that you can see the value and the performance, and I'll share the screen here so people can have a look. But what is your team up to this year? What are you excited about in terms of development? Other events I mean CES was the big one, I imagine, for you but what's next in your pipeline?

Speaker 2:

Definitely shipping our FLARE camera to the first couple of people, to get the technology into their hands and incorporate their feedback. Also, it's worth mentioning that this one-megapixel sensor is available for purchase, so people can build their own cameras around it as well.

Speaker 2:

We will definitely have more trade show appearances; this is something we are looking into. I really love the video you're sharing, it really shows this high image quality. And the point is essentially that, with this image quality, people who build perception systems on top of camera input data can take their perception stack to a whole new level.

Speaker 1:

Fantastic and where is the team based? And you have a lot of technologists and some PhDs, I imagine. Where is everyone and what are they doing day to day?

Speaker 2:

Right, yeah, excellent question. So we're actually almost all PhDs. We are based mostly in Madison, Wisconsin, and the Boston area, because of our faculty co-founders, and we are developing these algorithms and making them better, but also working on the implementation side of things. If somebody reaches out to us and says, hey, your processing looks amazing, does it work with conventional cameras? then we are now really at the point where we can say we can run on embedded compute hardware, which is a really important step forward. Snapdragons, for example, and also NVIDIA Jetsons: we're actually in the ballpark of the compute budgets people can afford, and then we are able to deliver extremely high-quality imagery in all environments. So, really super excited about that.

Speaker 1:

Yeah, amazing work, so much great opportunity. And congratulations onwards and upwards and wishing you all success. Thank you so much. All right, thank you for joining. Thanks everyone for listening and watching. Take care.