How eye imaging technology could help robots and cars see better

Light Detection and Ranging, or LiDAR for short, is one of the imaging technologies that many robotics companies are incorporating into their sensor packages. The approach, which is currently attracting a lot of interest and funding from self-driving car companies, works similarly to radar, but instead of sending out broad radio waves and looking for reflections, it uses short laser pulses.
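The ranging principle behind those pulses is simple enough to sketch in a few lines: measure how long a pulse takes to bounce back, then convert that round-trip time into a distance. The code below is an illustrative sketch only, not any vendor's implementation; the 667 ns round-trip time is an assumed example value.

```python
# Sketch of the time-of-flight principle behind pulsed LiDAR.
# The 667 ns round-trip time below is an assumed example value.

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance implied by the round-trip time of a laser pulse."""
    # The pulse travels to the target and back, so divide by two.
    return C * round_trip_time_s / 2.0

# A pulse returning after ~667 nanoseconds implies a target ~100 m away.
print(tof_distance(667e-9))
```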


Traditional time-of-flight LiDAR, however, has a number of drawbacks that make it unsuitable for many 3D vision applications. Because it must detect very weak reflected light signals, other LiDAR systems or even ambient sunlight can easily overwhelm its detector. It also has limited depth resolution, and densely scanning a large area such as a highway or factory floor can take an inordinately long time.


“FMCW LiDAR shares the same working principle as OCT, which the biomedical engineering field has been developing since the early 1990s,” said Ruobing Qian, a PhD student working in the laboratory of Joseph Izatt, the Michael J. Fitzpatrick Distinguished Professor of Biomedical Engineering at Duke. “But 30 years ago, nobody knew autonomous cars or robots would be a thing, so the technology focused on tissue imaging. Now, to make it useful for these other emerging fields, we need to trade in its extremely high resolution capabilities for more distance and speed.”


In a paper appearing March 29 in the journal Nature Communications, the Duke team demonstrates how a few tricks learned from their OCT research can improve the data throughput of previous FMCW LiDAR approaches 25-fold while still achieving submillimeter depth accuracy.


OCT is the optical analogue of ultrasound, which works by sending sound waves into objects and measuring how long they take to return. To time the light waves' return, OCT devices instead measure how much their phase has shifted compared to identical light waves that have travelled the same distance but have not interacted with another object.
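The phase comparison at the heart of this timing trick can be reduced to a line of arithmetic: a measured phase shift, together with the wavelength, implies an extra path length travelled. The sketch below is a simplified illustration with assumed values; real systems sweep many wavelengths, in part because a single phase measurement wraps around every 2π.

```python
import math

# Illustrative sketch of the phase-to-path-length idea. The 1300 nm
# wavelength is an assumed value, and real instruments must handle
# the fact that phase wraps around every 2*pi.

def path_difference(phase_shift_rad: float, wavelength_m: float) -> float:
    """Extra optical path length implied by a measured phase shift
    relative to a reference wave of the same wavelength."""
    return phase_shift_rad / (2.0 * math.pi) * wavelength_m

# A full 2*pi phase shift at a 1300 nm wavelength corresponds to one
# extra wavelength of travel:
print(path_difference(2 * math.pi, 1300e-9))
```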


With a few changes, FMCW LiDAR takes a similar approach. The system sends out a laser beam whose frequency shifts continuously in a regular pattern. When the detector gathers light to measure its reflection time, it can distinguish that specific frequency pattern from any other light source, allowing it to operate in a wide range of lighting conditions at high speed.
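One standard way swept-frequency systems recover distance (not spelled out in the article itself) is from the "beat" between outgoing and returning light: the reflection is delayed by the round trip, so by the time it returns the laser has moved on in frequency, and the size of that frequency gap grows linearly with distance. A minimal sketch, with assumed sweep parameters:

```python
# Hedged sketch of how an FMCW system could map a measured beat frequency
# to distance; the sweep bandwidth and period are illustrative assumptions,
# not the paper's values.

C = 299_792_458.0          # speed of light, m/s
SWEEP_BANDWIDTH_HZ = 10e9  # how far the laser frequency sweeps (assumed)
SWEEP_PERIOD_S = 10e-6     # duration of one sweep (assumed)

def fmcw_distance(beat_frequency_hz: float) -> float:
    """Distance from the beat between outgoing and reflected light.

    The reflected light is delayed by the round trip, so its frequency
    lags the laser's current frequency; that gap (the 'beat') is the
    sweep rate times the round-trip time.
    """
    sweep_rate = SWEEP_BANDWIDTH_HZ / SWEEP_PERIOD_S  # Hz per second
    round_trip_time = beat_frequency_hz / sweep_rate
    return C * round_trip_time / 2.0
```

With these assumed parameters, a target ~10 m away produces a beat of roughly 67 MHz, which is easy to measure electronically.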

“It has been very exciting to see how the biological cell-scale imaging technology we have been working on for decades is directly translatable for large-scale, real-time 3D vision,” Izatt said. “These are exactly the capabilities needed for robots to see and interact with humans safely or even to replace avatars with live 3D video in augmented reality.”

Most previous work using LiDAR has relied on rotating mirrors to scan the laser over the landscape. While this approach works well, it is fundamentally limited by the speed of the mechanical mirror, no matter how powerful the laser it’s using.

The Duke researchers instead use a diffraction grating that works like a prism, breaking the laser into a rainbow of frequencies that spread out as they travel away from the source. Because the original laser is still quickly sweeping through a range of frequencies, this translates into sweeping the LiDAR beam much faster than a mechanical mirror can rotate. This allows the system to quickly cover a wide area without losing much depth or location accuracy.
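The grating's behavior follows the standard grating equation, which maps each wavelength to its own exit angle, so sweeping the laser's frequency steers the beam with no moving parts. A small sketch, using an assumed groove density rather than the paper's actual optics:

```python
import math

# Sketch of how a diffraction grating turns a wavelength sweep into an
# angular sweep. The groove density and wavelengths are assumed
# illustrative values, not the paper's.

GROOVES_PER_MM = 600
PITCH_M = 1e-3 / GROOVES_PER_MM  # spacing between grooves, in meters

def diffraction_angle_deg(wavelength_m: float, order: int = 1) -> float:
    """Diffraction angle at normal incidence, from the grating
    equation: pitch * sin(theta) = order * wavelength."""
    return math.degrees(math.asin(order * wavelength_m / PITCH_M))

# Sweeping the laser across ~100 nm of wavelength steers the beam by
# several degrees with no mechanical motion:
for wavelength in (1260e-9, 1310e-9, 1360e-9):
    print(f"{wavelength * 1e9:.0f} nm -> "
          f"{diffraction_angle_deg(wavelength):.1f} deg")
```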

Whereas OCT devices profile microscopic structures up to several millimeters deep within an object, robotic 3D vision systems only need to locate the surfaces of human-scale objects. To do this, the researchers narrowed the range of frequencies used by OCT and looked only for the peak signal generated by object surfaces. This costs the system a little resolution, but gives it a much greater imaging range and speed than traditional LiDAR.
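The "search only for the peak" step can be illustrated with a toy signal: the detected interference oscillates at a beat frequency set by the surface depth, so finding the strongest spectral peak locates the surface. All parameters below are assumptions for illustration, and the single-bin DFT is hand-rolled only to keep the sketch dependency-free:

```python
import cmath
import math
import random

# Toy sketch of surface peak-finding: the interference signal from one
# sweep oscillates at a beat frequency set by the surface depth, so the
# strongest non-DC spectral peak marks the surface. All signal
# parameters are illustrative assumptions, not the paper's values.

N = 1024            # samples in one sweep (assumed)
FS = 100e6          # sample rate in Hz (assumed)
BEAT_HZ = 5e6       # beat frequency produced by the surface (assumed)

rng = random.Random(0)
signal = [math.cos(2 * math.pi * BEAT_HZ * n / FS)
          + 0.1 * (rng.random() - 0.5)        # a little detector noise
          for n in range(N)]

def bin_magnitude(samples, k):
    """Magnitude of the k-th DFT bin (hand-rolled single-bin DFT)."""
    n_total = len(samples)
    return abs(sum(x * cmath.exp(-2j * math.pi * k * n / n_total)
                   for n, x in enumerate(samples)))

# Keep only the strongest non-DC peak -> the object surface.
peak_bin = max(range(1, N // 2), key=lambda k: bin_magnitude(signal, k))
surface_freq = peak_bin * FS / N
print(f"strongest reflection at about {surface_freq / 1e6:.2f} MHz")
```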

As a result, the FMCW LiDAR system achieves submillimeter localization accuracy while delivering 25 times the data throughput of earlier demonstrations. The findings show that the approach is fast and precise enough to capture the details of moving human body parts, such as a nodding head or a clenching hand, in real time.

