Vision Vitals

How Autonomous Mobile Robots “See” in 3D Using Time of Flight Technology

e-con Systems Season 1 Episode 9

How do Autonomous Mobile Robots (AMRs) perceive the world around them?
In this episode of Vision Vitals - e-con Systems Podcast, we dive deep into how Time of Flight (ToF) technology enables AMRs to “see” in 3D — helping them navigate complex environments safely and intelligently.

🎙️ In this episode:

  • What Time of Flight cameras are and how they work
  • Why ToF depth sensing is critical for navigation and obstacle avoidance
  • Key advantages over traditional stereo or RGB cameras
  • Real-world use cases where e-con Systems’ DepthVista ToF cameras power warehouse and industrial robots

You’ll also hear about how ToF data integrates with NVIDIA Jetson platforms, supports SLAM, and enhances real-time safety through accurate 3D mapping.

Whether you’re a robotics developer, automation engineer, or AI enthusiast — this episode will change how you think about robotic vision.

🎧 Listen now and explore the depth of ToF technology in action.

#AutonomousMobileRobots #TimeofFlight #ToFCamera #VisionVitals #RoboticsVision #DepthSensing #econSystems #MachineVision #EmbeddedVision

Host:

Welcome to Vision Vitals, e-con Systems’ original podcast.

We’re back with Episode 9!

Today, we go deep into how Autonomous Mobile Robots perceive their surroundings using one of the most powerful sensing methods available – Time of Flight technology.

To help us clearly understand what makes Time of Flight cameras so effective for AMRs, we have an expert from e-con Systems who has worked closely on several robotics vision projects.

Hi, good to have you here again.

Speaker:

Glad to be back. Time of Flight technology is fascinating because it forms the foundation for how many AMRs measure distance and avoid obstacles with accuracy.

Host:

Let’s start simple. What does Time of Flight technology actually do for an AMR?

Speaker:

Time of Flight, or ToF, calculates distance by measuring how long light takes to travel to an object and return to the sensor. The camera emits modulated infrared light, and when that light reflects back, it measures the phase difference to determine depth.

In AMRs, it enables real-time 3D mapping. The robot can estimate how far each object is, detect shapes, and create a depth map for navigation. The data helps the robot avoid collisions and understand its path in three-dimensional space.
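The phase-measurement idea described above can be sketched in a few lines. This is a minimal illustration of the standard continuous-wave ToF math, not the camera's actual firmware; the 20 MHz modulation frequency in the example is an assumed, typical value.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Depth from the measured phase shift of the modulated IR light.

    The light travels out and back, covering 2*d, so the phase shift is
        phase = 2*pi * mod_freq * (2*d / C)
    which rearranges to the expression below.
    """
    return (C * phase_rad) / (4.0 * math.pi * mod_freq_hz)

def max_unambiguous_range(mod_freq_hz: float) -> float:
    """Beyond this distance the phase wraps past 2*pi and aliases."""
    return C / (2.0 * mod_freq_hz)
```

For example, a 20 MHz modulation frequency yields an unambiguous range of roughly 7.5 m, which is why ToF cameras often use multiple modulation frequencies to extend range without ambiguity.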

Host:

So, the ToF camera isn’t capturing color but depth information?

Speaker:

Exactly. Unlike RGB cameras that record color intensity, ToF cameras generate per-pixel distance data. Every pixel represents how far away a surface is. That’s how the robot knows whether something is close, far, or directly in its trajectory.

The advantage is that ToF technology offers consistent accuracy under varying light conditions. It performs well in both bright and dim environments, which is vital for AMRs that operate indoors, outdoors, or across mixed lighting zones.
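To make "per-pixel distance data" concrete: each depth frame can be back-projected into 3D points with the standard pinhole camera model. This is a generic sketch; the intrinsic parameters (fx, fy, cx, cy) would come from the camera's calibration.

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (in meters) to 3D points
    using the pinhole model: X = (u-cx)*Z/fx, Y = (v-cy)*Z/fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)
```

The resulting point cloud is what the robot's navigation stack actually consumes when deciding whether a surface is close, far, or in its trajectory.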

Host:

What are the major benefits of using ToF cameras over other depth-sensing methods?

Speaker:

There are several key advantages. ToF cameras provide high frame rates with low latency, so the AMR can react quickly to obstacles. They also work reliably in environments where stereo cameras may struggle due to lack of texture or poor contrast.

In addition, ToF systems are compact and integrate smoothly into the AMR’s architecture. e-con Systems’ DepthVista series is a good example. It offers 1.2 MP resolution at 60 fps and VGA at 100 fps, ensuring detailed and responsive depth imaging.

With such speed and accuracy, they are ideal for real-time motion control, localization, and dynamic obstacle avoidance.
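A quick back-of-the-envelope calculation shows why frame rate matters for reaction time. The robot speed here is an assumed, representative value.

```python
def travel_per_frame_cm(speed_m_s: float, fps: float) -> float:
    """How far the robot moves between consecutive depth frames."""
    return 100.0 * speed_m_s / fps

# At 1.5 m/s, a 60 fps camera leaves only 2.5 cm of travel per frame,
# versus 7.5 cm at 20 fps -- a meaningful difference for tight aisles.
```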

Host:

How does the ToF camera support navigation compared to regular cameras?

Speaker:

Regular cameras rely on 2D vision, so algorithms must infer distance indirectly. ToF cameras capture actual depth, so the robot can identify walls, pallets, and people with direct distance measurements.

That allows AMRs to perform SLAM, or simultaneous localization and mapping, with greater stability. It builds a 3D perception of the environment and uses it to navigate around static and moving elements.

Because ToF sensors carry their own infrared illumination, they keep working even where RGB cameras lose clarity in shadows or glare.

Host:

Lighting seems like a big challenge for robots. How does ToF handle variable lighting or reflective surfaces?

Speaker:

Lighting is indeed a critical factor. ToF cameras are designed to minimize errors caused by ambient light. The modulated infrared pulses are strong enough to separate the signal from background illumination.

However, reflective or transparent materials can still affect accuracy. To address that, e-con Systems uses optimized algorithms and high-power VCSEL emitters to improve depth estimation in such cases.

Combined with calibration and filtering, this gives the AMR consistent performance in real-world environments filled with glass, metal, or glossy objects.

Host:

You mentioned filtering—can you explain what kind of post-processing helps improve ToF accuracy?

Speaker:

Absolutely. One common challenge in ToF imaging is what we call "flying pixels"—erroneous depth measurements that occur at object boundaries or edges. e-con Systems has developed a Depth Discontinuity Filter specifically to address this. It analyzes depth variation between neighboring pixels and removes those with abnormal distance values. This post-processing technique significantly improves depth map accuracy, ensuring AMRs receive valid, reliable data for navigation decisions.
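The idea behind such a filter can be sketched as follows. This is an illustrative neighbor-comparison approach, not the actual DepthVista implementation; the 4-neighbor window and the jump threshold are assumptions.

```python
import numpy as np

def discontinuity_filter(depth, max_jump=0.15):
    """Invalidate pixels whose depth differs from any 4-neighbor by more
    than max_jump meters -- a simple flying-pixel removal sketch.
    (Threshold and neighborhood are illustrative assumptions.)"""
    d = depth.astype(float)
    pad = np.pad(d, 1, mode="edge")
    jumps = np.stack([
        np.abs(d - pad[:-2, 1:-1]),  # vs pixel above
        np.abs(d - pad[2:, 1:-1]),   # vs pixel below
        np.abs(d - pad[1:-1, :-2]),  # vs pixel to the left
        np.abs(d - pad[1:-1, 2:]),   # vs pixel to the right
    ])
    out = d.copy()
    out[jumps.max(axis=0) > max_jump] = np.nan  # mark as invalid
    return out
```

Downstream code then treats NaN pixels as "no measurement" rather than feeding a phantom obstacle (or a phantom gap) into path planning.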

Host:

Can you explain how ToF data contributes to obstacle avoidance?

Speaker:

ToF cameras continuously generate depth maps in real time. The frames show distances to all surrounding points. The onboard processor then converts that data into a 3D model.

From that model, the AMR can detect objects at different heights such as low pallets, human legs, or hanging cables before planning its route accordingly. It’s not just about detecting presence but understanding shape, size, and position.

That kind of awareness helps prevent collisions even when an obstacle moves suddenly, which is crucial in dynamic environments like warehouses.
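A minimal version of this decision step: collapse each depth frame into per-column nearest distances and flag anything inside a stop zone. The stop distance is an assumed value; real planners work on the full 3D model described above.

```python
import numpy as np

def nearest_obstacle_by_column(depth_m, stop_dist=0.5):
    """Reduce a depth frame to the closest hit in each image column and
    flag columns intruding inside the stop distance.
    NaN pixels (invalid measurements) are ignored."""
    nearest = np.nanmin(depth_m, axis=0)   # closest surface per column
    blocked = nearest < stop_dist          # columns requiring avoidance
    return nearest, blocked
```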

Host:

Are ToF cameras used alone or along with other sensors?

Speaker:

They usually work as part of a sensor fusion setup. AMRs often combine ToF cameras with LiDAR, ultrasonic sensors, or RGB cameras. Each sensor contributes a layer of perception – LiDAR for long-range mapping, ToF for mid-range depth, and RGB for contextual understanding.

This delivers redundancy and better spatial awareness. In many designs, ToF cameras fill the gap between 2D vision and high-cost LiDAR units, providing depth accuracy at shorter ranges where precise navigation matters most.

Host:

What about integration? How complex is it to add a ToF camera to an AMR system?

Speaker:

Integration is straightforward when using platforms validated for robotics. e-con Systems offers SDKs for NVIDIA Jetson AGX Orin and x86-based systems. Developers can access APIs to capture depth data, apply filters, and combine it with AI-based perception models.

The DepthVista GMSL IRD camera, for instance, connects through GMSL2, supporting long-distance transmission with low latency. That makes it ideal for distributed AMR architectures where the camera and compute unit are placed far apart.

Host:

What are the most common applications where ToF technology excels in AMRs?

Speaker:

It’s used in autonomous navigation, path planning, and real-time safety monitoring. For instance, a warehouse robot can detect and avoid boxes or people, while an inspection AMR can analyze distances to structural components.

ToF also improves positioning accuracy when docking for charging or payload pickup. By measuring distances with millimeter-level precision, it ensures consistent alignment during repeated operations.
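Docking alignment from depth data can be sketched with two samples taken across the dock face. The function name, baseline, and tolerance here are illustrative assumptions, not part of any e-con SDK.

```python
import math

def dock_alignment(d_left: float, d_right: float, baseline_m: float):
    """Estimate yaw misalignment toward a flat dock face from two depth
    samples spaced baseline_m apart across the sensor's field of view.
    A positive angle means the right side of the dock is farther away."""
    yaw = math.atan2(d_right - d_left, baseline_m)
    centered = abs(d_right - d_left) < 0.002  # ~2 mm tolerance (assumed)
    return yaw, centered
```

The robot iterates: measure, rotate to zero the yaw, advance, and repeat until the depth difference falls inside tolerance.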

Host:

That’s a great overview, thank you!

So, this brings us to the end of the episode.

We explored how Time of Flight technology empowers Autonomous Mobile Robots to sense depth, detect objects, and navigate environments safely and efficiently.

To learn more about e-con Systems’ range of ToF cameras and other AMR vision solutions, visit www.e-consystems.com.

Stay tuned for the next episode, where we continue to explore how vision is redefining the future.