A laser-based imaging system developed at Stanford University can produce images of objects hidden around corners. The researchers are targeting their technology at the autonomous vehicle market, but other uses could include seeing through foliage from aerial vehicles or giving rescue teams the ability to find people blocked from view by walls and rubble.
Although this isn’t the first time researchers have achieved this, the Stanford group's algorithm can reconstruct 3D images in less than a second and run on a regular laptop; the laser scan itself, however, can currently take up to an hour.
'A substantial challenge in non-line-of-sight imaging is figuring out an efficient way to recover the 3D structure of the hidden object from the noisy measurements,' said David Lindell, graduate student in the Stanford Computational Imaging Lab and co-author of the paper. 'I think the big impact of this method is how computationally efficient it is.'
As described in a paper published on 5 March in Nature, the researchers set a laser next to a highly sensitive photon detector, which can record even a single particle of light. They shoot pulses of laser light at a wall; the pulses bounce off objects around the corner and return via the wall to the detector. Currently, this scan can take from two minutes to an hour, depending on conditions such as lighting and the reflectivity of the hidden object.
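The principle behind the scan is standard time-of-flight ranging: a photon's arrival time, multiplied by the speed of light, gives the total extra path it travelled beyond the direct wall reflection. A minimal sketch (the numbers here are illustrative, not from the paper):

```python
C = 3e8  # speed of light in metres per second

def extra_path_m(arrival_time_s):
    """Extra path length implied by a photon's time of flight,
    measured relative to the pulse's direct return from the wall."""
    return C * arrival_time_s

# A photon arriving 4 nanoseconds late travelled about 1.2 m further,
# i.e. roughly 0.6 m out to a hidden surface and 0.6 m back.
print(extra_path_m(4e-9))
```

Halving this round-trip figure gives the one-way distance from the wall to the hidden surface, which is the raw quantity the reconstruction works from.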
Once the scan is finished, the algorithm untangles the paths of the captured photons, and the blurry blob of raw measurements takes on a much sharper form. It does all this in less than a second and is so efficient it can run on a regular laptop. Based on how well the algorithm currently works, the researchers think they could speed it up so that it is nearly instantaneous once the scan is complete.
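To give a flavour of what "untangling the paths" means, the sketch below backprojects simulated photon arrival times onto a voxel grid and picks the voxel that best explains them. This is a naive brute-force illustration of the general idea, not the paper's fast algorithm, and the scene (a single hidden point behind a flat wall) is a hypothetical toy setup:

```python
import math

C = 3e8  # speed of light, m/s

# Toy scene: laser/detector spots sampled on a wall in the z=0 plane,
# and one hidden point somewhere behind the corner.
wall_points = [(x * 0.1, y * 0.1, 0.0) for x in range(-5, 6) for y in range(-5, 6)]
hidden = (0.2, -0.1, 0.6)  # unknown to the algorithm; used only to simulate data

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# Simulated measurements: round-trip time wall point -> object -> wall point.
times = [2.0 * dist(w, hidden) / C for w in wall_points]

def backproject(times, wall_points, step=0.1):
    """Score every candidate voxel by how well it explains the
    measured arrival times; return the best-matching voxel."""
    best, best_err = None, float("inf")
    for ix in range(-5, 6):
        for iy in range(-5, 6):
            for iz in range(1, 11):
                v = (ix * step, iy * step, iz * step)
                err = sum(abs(t - 2.0 * dist(w, v) / C)
                          for t, w in zip(times, wall_points))
                if err < best_err:
                    best, best_err = v, err
    return best

print(backproject(times, wall_points))  # recovers the hidden point's position
```

Real systems face the harder version of this problem: photon counts are extremely noisy and the object is an extended surface rather than a point, which is why an efficient, noise-tolerant reconstruction is the paper's key contribution.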
Working with lidar
The team is further developing the system so that it can better handle real-world variability and complete the scan more quickly. For example, the distance to the object and the amount of ambient light can make it difficult for the technology to detect the light particles it needs to resolve out-of-sight objects. The technique also depends on analysing scattered light particles that lidar systems intentionally ignore.
'We believe the computation algorithm is already ready for lidar systems,' said Matthew O’Toole, a postdoctoral scholar in the Stanford Computational Imaging Lab and co-lead author of the paper. 'The key question is if the current hardware of lidar systems supports this type of imaging.'
Before this system is road ready, it will also have to work better in daylight and with objects in motion, like a bouncing ball or a running child. The researchers tested their technique successfully outside, but only with indirect light. Their technology performed particularly well at picking out retroreflective objects, such as safety apparel or traffic signs. The researchers say that if the technology were placed on a car today, that car could easily detect things like road signs, safety vests or road markers, although it might struggle with a person wearing non-reflective clothing.
'This is a big step forward for our field that will hopefully benefit all of us,' said Gordon Wetzstein, assistant professor of electrical engineering and senior author of the paper. 'In the future, we want to make it even more practical in the ‘wild.’'