Learning Deep Convolutional Frontends for Visual SLAM | Daniel DeTone, Magic Leap
Video Statistics and Information
Channel: AR MR XR
Views: 3,300
Keywords: Augmented Reality, Mixed Reality, Extended Reality, Virtual Reality, AR, MR, XR, VR, Magic Leap
Id: kjaRRGLw4RA
Length: 36min 51sec (2211 seconds)
Published: Mon Dec 24 2018
White paper here:
https://arxiv.org/abs/1812.03245
Localisation via visual means is a key technology for self-driving cars.
Google owns a decent chunk of Magic Leap; I'd be curious whether they are working with Waymo at all.
This is cool.
Does anyone know if this has been tried in an autonomous vehicle (to clarify: an automotive/passenger autonomous vehicle)?
Or are there any plans to?
I'd like to see some performance metrics if it has.
Very interesting tech. This has a lot of applications far beyond autonomous vehicles. Much of modern video compression relies on detecting and matching points between frames, for example. There are also plenty of video and photo applications, from removing unwanted objects to stitching photos together.
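For illustration, here is a minimal sketch of the classical detect-and-match pipeline that stitching (and similar applications) is built on, using OpenCV's ORB detector rather than SuperPoint; the filenames and parameters are placeholders:

```python
import cv2
import numpy as np

img1 = cv2.imread('left.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('right.jpg', cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors in both images.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors and keep the best correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Estimate the homography that aligns img1 onto img2's frame (RANSAC
# rejects bad matches), then warp and paste to form the panorama.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
canvas = cv2.warpPerspective(img1, H, (img1.shape[1] + img2.shape[1], img2.shape[0]))
canvas[:img2.shape[0], :img2.shape[1]] = img2  # naive overlay; real stitchers blend seams
cv2.imwrite('stitched.jpg', canvas)
```

A learned frontend like SuperPoint would slot into the detect/describe step; the matching and homography estimation stay the same.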
I'd like to see how accurate a dead-reckoning rig you could build with a single laser depth sensor, a humidity/temperature sensor, and an RGB camera. Point them all at the ground, start moving, and see how far you can go before you are more than 10 cm off your starting point over various terrain.
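That experiment could be scored with a simple loop-closure drift check: integrate the per-step motion estimates and measure how far the integrated pose lands from the starting point. Below is a hypothetical simulation of that metric in 2D, with synthetic noise standing in for real sensor error; it is not data from any actual rig:

```python
import numpy as np

rng = np.random.default_rng(0)

def se2(theta, dx, dy):
    """Build a 2D rigid transform as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, dx], [s, c, dy], [0, 0, 1]])

# True motion: a 2 m x 2 m square loop in 1 cm steps (returns to start).
true_steps = []
for leg in range(4):
    true_steps += [se2(0.0, 0.01, 0.0)] * 200    # 200 steps of 1 cm forward
    true_steps.append(se2(np.pi / 2, 0.0, 0.0))  # 90-degree turn

# Dead reckoning: integrate each step with small independent noise,
# standing in for visual/depth odometry error.
pose = np.eye(3)
for T in true_steps:
    noise = se2(rng.normal(0, 0.002), rng.normal(0, 0.0005), rng.normal(0, 0.0005))
    pose = pose @ (noise @ T)

drift_cm = np.linalg.norm(pose[:2, 2]) * 100
print(f"drift after closing the loop: {drift_cm:.1f} cm")  # >10 cm fails the test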
The "Chen et al. (ECCV 2018) Estimating Depth from RGB and Sparse Sensing" paper seems even more interesting. Looking at the error map in the video, it seems you could dramatically increase Radar and Lidar resolution with this technique with no much error.
The code is available here: https://github.com/MagicLeapResearch/SuperPointPretrainedNetwork
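As a quick start, the repo's demo_superpoint.py exposes a SuperPointFrontend class; below is a minimal sketch of using it. The constructor arguments and the expected input format follow the demo script at the time of writing and may change, so check the repo:

```python
import cv2
import numpy as np
from demo_superpoint import SuperPointFrontend  # from the repo above

# Pretrained weights (superpoint_v1.pth) ship with the repo.
fe = SuperPointFrontend(weights_path='superpoint_v1.pth',
                        nms_dist=4, conf_thresh=0.015,
                        nn_thresh=0.7, cuda=False)

# The demo expects a float32 grayscale image scaled to [0, 1].
img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

pts, desc, heatmap = fe.run(img)  # 3xN keypoints (x, y, conf), 256xN descriptors
print(pts.shape, desc.shape)
```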