Even with sophisticated cameras and laser scanners that can see all the way around a vehicle, autonomous vehicles have just as much trouble driving in thick fog as humans do. They could soon have an advantage, however, as researchers have created a new imaging system that can see through obstructions like fog, or at least reconstruct what’s on the other side.
Light can still pass through fog, but as photons make their way through the thick clouds of moisture they’re deflected and scattered, and they don’t emerge on the other side with any coherent structure the human eye can resolve into recognizable objects. Other materials, such as foam, scatter light in a similar way as it passes through, which is why researchers at Stanford University used foam as a stand-in for thick fog when developing their radical new imaging device.
The full details of their creation were shared in a paper, “Three-dimensional imaging through scattering media based on confocal diffuse tomography,” published in Nature Communications last month. The system works in a similar fashion to the laser-powered LIDAR scanners that autonomous cars use to image the world around them in three dimensions. A powerful laser scans back and forth across an obstruction—in this case a one-inch-thick wall of foam—while a highly sensitive photon detector registers any photons that manage to pass through the foam, hit the object hidden on the other side, and then bounce back through the barrier a second time.
As you can imagine, very few photons make it back to the detector compared to the number the laser emits every second. Assisting the hardware is a custom algorithm the researchers developed that takes into account not only when the photons hit the detector, but also where they struck the sensor—including scattered photons whose journey there and back was not a direct path.
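To get a feel for the underlying principle, here is a loose sketch of the basic time-of-flight idea behind this kind of scanner: at each laser position, the detector builds a histogram of photon arrival times, and the peak of that histogram gives a round-trip time that converts to depth. This is an illustrative simplification, not the paper’s confocal diffuse tomography algorithm (which additionally models the extra delay photons pick up scattering through the foam); the bin width and toy scan data below are made up for the example.

```python
# Illustrative time-of-flight depth estimation, NOT the paper's
# confocal diffuse tomography algorithm. Assumed values: a 50 ps
# timing-bin width and a toy 2x2 scan of photon-count histograms.

C = 299_792_458.0   # speed of light, m/s
BIN_WIDTH = 50e-12  # width of one timing bin, seconds (assumed)

def depth_from_histogram(histogram):
    """Estimate depth in meters from a photon arrival-time histogram.

    The bin with the most photon counts is taken as the round-trip
    travel time; halving it gives the one-way distance to the object.
    """
    peak_bin = max(range(len(histogram)), key=histogram.__getitem__)
    round_trip_time = peak_bin * BIN_WIDTH
    return C * round_trip_time / 2.0  # light travels out and back

# Toy scan: each laser position maps to photon counts per timing bin.
scan = {
    (0, 0): [0, 1, 9, 2, 0],  # peak in bin 2 -> nearer surface
    (0, 1): [0, 0, 1, 8, 1],  # peak in bin 3 -> slightly farther
    (1, 0): [1, 7, 2, 0, 0],
    (1, 1): [0, 0, 2, 3, 9],
}
depth_map = {pos: depth_from_histogram(h) for pos, h in scan.items()}
```

In the real system the histograms are far noisier—only a handful of photons return per position—which is why the reconstruction algorithm also exploits where on the sensor scattered photons land, rather than relying on clean timing peaks alone.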
Despite the algorithm having very little information to work from, especially compared to the heaps of data a LIDAR system in an autonomous car processes every second, it’s still able to create a 3D representation of the object hidden behind the obstruction, and one with a surprising degree of accuracy given that the human eye can’t make out anything in the same situation.
Is the technology ready to be implemented in autonomous vehicles that are already roaming public roads? Not quite. In their testing, while the custom algorithm could crunch the data and generate a 3D representation of the hidden object in real time, the scanning process itself took anywhere from a minute to an hour, depending on how reflective the hidden object was. The setup they tested also covered only a fraction of the field of view an autonomous car would need to visualize to safely navigate foggy conditions.
Improvements will be needed to make this a viable real-time solution before an autonomous car could safely drive down the road on a foggy day, even at modest speeds. But it’s not like humans are getting any better at the task either. There are more immediate applications for the technology, however, including detailed and accurate medical imaging that doesn’t require doctors to resort to invasive exploratory surgery. Farther in the future, space-faring probes could carry imaging devices that rely on this technology to see through the clouds and other particulates in a distant planet’s atmosphere without having to actually land on the surface.