Using NanoMap, a drone can build a picture of its surroundings by stitching together a series of depth-sensor measurements. The drone can plan not only around obstacles it currently sees, but also around areas it cannot see yet, based on what it has seen before.

“It’s like saving all of the images you’ve seen of the world as a big tape in your head,” explains Florence. “For the drone to plan its motions, it essentially goes back in time to think individually of all the different places that it was in.”

NanoMap operates under an assumption that humans are familiar with: if you know roughly where something is and how large it is, you don’t need much more detail if your only aim is to avoid crashing into it.
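The idea of keeping a "tape" of past depth measurements and searching back through it can be sketched in a few lines. The structure below is purely illustrative, assuming a hypothetical `SensorHistory` class rather than the actual NanoMap implementation: each entry stores a small point cloud in its own sensor frame plus the transform relating that frame to the drone's current pose, and an obstacle query transforms the query point into each past frame in turn.

```python
from collections import deque
import numpy as np

class SensorHistory:
    """Illustrative sketch of a NanoMap-style history of depth measurements.

    Each entry holds a point cloud expressed in its own sensor frame, plus
    the 4x4 transform mapping that past frame into the current body frame.
    Names and structure are assumptions for illustration, not the real API.
    """

    def __init__(self, max_frames=10):
        # Most recent frame first; old frames fall off the end of the tape.
        self.frames = deque(maxlen=max_frames)

    def add_frame(self, points, T_current_from_frame):
        """points: (N, 3) array in the sensor frame at capture time."""
        self.frames.appendleft((points, T_current_from_frame))

    def min_distance(self, query_point):
        """Go 'back in time': map the query (current frame) into each past
        sensor frame and return the smallest distance to any measurement."""
        best = float("inf")
        q = np.append(query_point, 1.0)  # homogeneous coordinates
        for points, T in self.frames:
            q_past = np.linalg.inv(T) @ q
            best = min(best, np.min(np.linalg.norm(points - q_past[:3], axis=1)))
        return best

# Usage: one obstacle 2 m ahead, captured in a frame coinciding with
# the current pose (identity transform).
hist = SensorHistory()
hist.add_frame(np.array([[0.0, 0.0, 2.0]]), np.eye(4))
print(hist.min_distance(np.array([0.0, 0.0, 0.0])))  # → 2.0
```

Because raw frames are kept rather than fused into one global map, an error in the pose estimate degrades only the transforms, not the stored measurements themselves, which is in the spirit of NanoMap's uncertainty-aware approach.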

Developing drones that can build a picture of the world around them and react to shifting environments is a challenge. This is particularly true because onboard computational power tends to be limited by weight.

Simultaneous localisation and mapping (SLAM) technology is a common way for drones to build a detailed picture of their location from raw data. However, this technique is unreliable at high speeds, making it unsuitable for tight spaces or dynamic environments where objects move or the layout changes.
