ENB339 lecture 10: perceiving depth & 3D reconstruction

Captions
So in this lecture what I want to do is talk about 3D vision. In the last two lectures I've talked a bit about geometry, about how we crush the three-dimensional world down into a two-dimensional projection. Now I want to talk about how we can use information from multiple views of the world to try and reconstruct its three-dimensional structure. With one image we crush 3D down into 2D, but if we've got two two-dimensional images of the same scene, if I take a picture of a room from here and a picture of the room from here, then suddenly I've got a whole lot more information, and in theory I can reconstruct the three-dimensional structure of the room. So we're going to talk a little bit about that, and about how 3D TVs work, and so on.

Let me go back to this ambiguity of scale; I used this picture in an earlier lecture. At face value, the first time you look at it, it's a bit surprising because the scale is not quite right: the three-dimensional world is playing a trick on us, and clearly it's because the smaller lady is further away from us that she appears to be smaller.

Here's a picture I took of a red stone. Does anyone know how big that stone is? Really, you've got nothing to judge it by. OK, so there's a red stone and it's about that big, and there's another red stone. Ah, right: they appear the same size. So what's going on here? It's the distance that's important. The one at the top is a massive stone that's a very long way away, and it appears the same size as a very small stone close up (which I meant to bring today; it was my one prop for the lecture). Because the small stone is close up, it appears to be the same size: it subtends the same angle at your eye, or at the camera, and it's the same number of pixels wide.

Again, in an earlier slide I used this image where we project the outside world through a pinhole camera onto a wall, and all of those differently sized shapes outside cause exactly the same image on the wall. What this says is that there is no unique inverse: I can take a three-dimensional scene and construct what the photograph would look like, but given the photograph, I can't reconstruct what the 3D world looks like. Something is missing: the depth. We lost a dimension when we made the photograph, and we can't just get it back; we need to use some sort of assumptions in order to get it back.

So we'll look at this little movie, lifted off YouTube. Has anyone been in a room like this? I think this one is in New Zealand, actually; there's quite a bit of footage of it on YouTube. It's a cheap trick: when the two people swap sides, you can see the effect reverse. This thing is called an Ames room, and it's an illusion. You look into the Ames room through a hole in the wall, at the bottom of the diagram on the right-hand side, and all the lines on the floor have been constructed based on the fact that you're looking at the room through that one hole. If you looked at it from a different point of view, the tiles on the floor would not look square and the illusion would be broken. When a person goes to the left-hand side of the room they appear to be much smaller, and that's because they actually are further away, and if they're further away you would expect them to look smaller.
But the geometry of the room has been constructed so that you think they are the same distance away: there's the same number of floor tiles between the observer and the person who's far away as between the observer and the person who's close, because the floor tiles are actually not square. That's how the trick works: your eyes are, in effect, counting the number of tiles, so you think the distance is the same, but it's not.

Human beings use an awful lot of different ways to try and figure out how far away things are, and it's not just the fact that you've got two eyes and look at the world from two different viewpoints. There's quite an interesting paper that says there are maybe nine different ways we unconsciously work out how far things are away from us. These different techniques are often called visual cues, and they depend on the distance. There's a graph on the side which shows the distance range over which these different effects work, and I'll go through and illustrate them: there are some techniques you use for things that are close and some that you use for things that are very far away.

The very simplest way of working out how far away things are is called occlusion; it's relative ordering. Here, what's closer to you, the tiger or the tree? The tree, because it obscures the tiger. We use this simple ordering all the time: I can see in the audience who's further away, because they're obscured by people who are closer. Very, very simple, but in our brain that's one of the factors we're using to build up a three-dimensional model of the world.

The next cues are height in the visual field and relative size. We tend to think that if things are big they're closer, and if they're small they're further away. We also know something about relative size: we know that an elephant is big and a person is small, and we compensate for that in some way. If we see an elephant and a person, we expect the elephant to be bigger than the person; if the elephant appears the same size as the person, then we're probably going to infer that the elephant is further away. The Ames room messes with this because it messes with our sense of relative size: we expect these two people to be roughly the same height, and they're not, so we perceive the difference in terms of distance, but there are other cues about distance that contradict that cue. So these two effects, which normally work quite well for us, are subverted in something like the Ames room.

Another one is texture density. If I look at this gravel path, close to me the gravel is quite coarse, the individual stones are quite large, but further away the stones appear to be smaller. If we think of it in terms of texture, up close it's a very coarse texture and further away it's a much finer texture. We unconsciously evaluate the texture of things: we assume that the texture of the material is constant, and the change in its apparent texture tells us something about its distance from us.

Aerial perspective is the effect where, as things get further away from us, they tend to get fuzzier, and they also tend to change color a little bit, becoming a little more blue. This is something that works at very large distances, over many tens of kilometers, rather than within your sort of personal sphere.

Another cue is binocular disparity.
I'm going to talk quite a bit about this in the rest of the lecture. It's the fact that we view the world from two slightly different viewpoints, our two eyes, and from that we can get some quite powerful information about three-dimensional structure.

Another one is accommodation. Accommodation is the feedback we get from the muscles in our eyes. When we focus our eyes, as in this little animation, there are muscles squeezing the lens in the eye: when we're trying to focus on something that's up close, we have to distort the shape of our lens to bring it into focus, and if you try to bring something in really close to your eye it's almost painful; your eye is struggling to focus. So there are these muscles that work the eye. Sadly, as you get older they don't work as well: the lens needs to be squeezed in order to focus close, and as you get older it doesn't squeeze as well, so you can't focus as close. It gets to the point where you start holding things further and further away from your eyes, and when your arms are no longer long enough, you go and get spectacles. For most people this occurs in their forties; it's going to happen to all of you. So that's accommodation: feedback from the muscles in your eye that tells you something about how far away something is.

Another one is convergence. When you're focusing on something close, your eyes point inwards; for something that's further away, your eyes point more nearly straight ahead. So when you're focusing on something, not only is each individual eye adjusting its focus on the object, but the angles at which your eyes are gazing give you some indication of how far away things are. There are really high-performance muscles in your eyes, and you get feedback from them into your brain, so that's yet another cue to distance.

OK, so now we're going to talk about binocular disparity: how to use information from two images to figure out how far away things are. Here are two pictures of the same scene taken from slightly different viewpoints. You used to be able to get stereo viewers: you buy these little stereo photographs, you put them in, you look at them through the eyepieces, and you get a vivid three-dimensional perception. Some people can cross their eyes and see the three-dimensional effect; I'm rubbish at doing that, I've only managed it once or twice.

The pictures are almost the same, but there are some very, very subtle differences, and I'll point some of them out to you. You'd sort of expect that: if I'm looking at you lot and I move my head to the right, everything in the room appears to move to the left. That's a general phenomenon: as you move your viewpoint in one direction, the world appears to move in the opposite direction. And that's what's been exploited here: everything has shifted a little to the side from one image to the other, so in the right-hand image everything has moved a little to the left. I've drawn an arrow here of a constant length: in the left-hand picture its length is the distance from the edge to the lamppost, and you can see in the other image that the distance from the lamppost to the edge is actually somewhat reduced.
This is a slightly busier example: there are actually two pictures of a pile of rocks here, with no very obvious dividing line between them. What I've done is drawn an arrow to a little spot, a blemish, on one of those rocks: that's the yellow arrow you see on the left. Then I've drawn the same-length arrow in the other image. What can you say about the spot on the rock? Yes, it's moved over to the left; the pink arrow shows the difference in the position of that blemish from one image to the other. Now I'll pick another feature in this scene: here is an arrow that points from the left-hand edge of the image to the edge of a rock, there's the same arrow on the other image, and there's the shift that the point undergoes between the left image and the right image.

Yes, everything has moved to the left, but the shift is not constant. The feature at the bottom has shifted to the left quite a lot; the one at the top has shifted to the left less, and that's because it's further away. If I move my head a bit to the right, you're going to move a lot in my field of view because you're close to me, but a person at the back of the room moves proportionately less. Something that's an infinite distance away isn't going to move at all: if I go out at night and move my head, the stars don't move at all, but something close to me moves a lot. So what we see is a shift, and the distance by which something shifts when you move the camera is inversely proportional to how far away it is: things that are close move a lot, things that are far away move very little. This is the basis of stereo photography and stereo vision.

Here is a blast from the past, a camera from Kodak (now defunct) that took two pictures on film from two slightly different viewpoints; the separation of the two lenses roughly mimics the separation of our eyes. We have b, which is called the baseline: the distance between the two cameras. We know the focal length f of the lens; we talked a bit about focal length before. The shift, which we call the disparity d, is proportional to the focal length and the baseline and inversely proportional to the distance Z: d = f·b/Z. So if I can measure the disparity, the shift of something between the left image and the right image, and I know the focal length and the baseline, then I can work out how far away it is. That's from the film camera days.
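As a back-of-the-envelope sketch of that relationship (illustrative only; the focal length, baseline and disparity values below are invented, not taken from the lecture):

```python
# Depth from disparity: d = f*b/Z, so Z = f*b/d.
# A minimal sketch; all the numbers are illustrative assumptions.

def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    """Range Z in meters, given focal length f (pixels),
    baseline b (meters) and measured disparity d (pixels)."""
    return f_pixels * baseline_m / disparity_pixels

# e.g. an 800-pixel focal length, a 0.1 m baseline (roughly eye spacing)
# and a measured shift of 16 pixels:
print(depth_from_disparity(800, 0.1, 16))  # -> 5.0 meters
```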
This is a fairly typical stereo camera that we might put on a robot today to figure out the three-dimensional structure of the world the robot is moving through. Now, when I look at the world I've got my two eyes, and they're clearly at different points: I'm getting a view from here and a view from here, and my brain is putting all this together and helping me figure out how far away things are. If I take just two images, one from a camera here and one from a camera here, then in order for me to see the three-dimensionality of the scene I need to get the image from this camera into this eye, and the image from that camera into this eye. If I can replicate that, my brain is going to perceive the three-dimensional structure of the world.

So we go back to this little postcard example from before, and there's me. We have two images, a left image and a right image, and I need to put them into the right eyes: the left image into the left eye and the right image into the right eye. If you cross them over you get a really bizarre effect; your brain just can't compute it, and you don't get any sense of depth perception at all. So how do I do that? Well, an old stereo viewer was something like this: you put the little postcard in the slot in the top, you look through the two eyepieces, and you get quite a vivid perception of distance. I've got a couple of props here; can someone pass this around for me? Here's an example of one I bought on a trip; this one's from Prague. You look through that and you get actually quite a nice three-dimensional view of the bridge. Prague matters here because that's where Karel Čapek lived, the guy who coined the term "robot", so it seemed highly apt when I was there.

There are other ways of doing this; a viewer like that is kind of primitive. You can use a head-mounted display: something you clip onto your head with two little LCD screens in it, one projecting into the left eye and one into the right eye. That will give you a very vivid three-dimensional view of the world, but the images that go into the left eye and the right eye have to come from two cameras somewhere else in the world, projected back into the relevant eyes.

There's another way of doing it, another quite cheap trick: I color-code the two images. I color one image red and the other blue, and I use a pair of cheap and nasty red-and-blue glasses, so the left image comes through the red filter into this eye only, and the blue image comes through the blue filter into this eye. If you put the glasses on upside down, you're putting the images into the wrong eyes and the 3D effect collapses. This effect has been known for quite a long time: people have dabbled with 3D movies from probably the earliest days of cinema, people have always been fascinated by 3D, and the technology has improved in recent times. Probably the last 3D film I saw was Prometheus; the plot line wasn't much good, but the 3D was okay. Avatar before that.

What I have here is an anaglyph image. Would you like to come forward again? I've got some glasses; if you could just sprinkle them around the room. And there's a really posh pair, you can have the really posh pair, but I want those back. Get up and walk around if you want; I've got about three or four anaglyph images here, and they're kind of cool. Does it work? Try putting the glasses on upside down; this is worth a picture. If you haven't got any, share them around, and when you're comfortable I'll move on to the next one. So what happens if you look at it upside down? You get no effect at all, or does it make you want to throw up?
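A minimal sketch of that color-coding trick in Python with numpy and Pillow (the file names are placeholders, and the two images are assumed to be the same size and taken from eye-like viewpoints):

```python
import numpy as np
from PIL import Image

# load the stereo pair as grayscale; "left.png"/"right.png" are placeholders
left = np.asarray(Image.open("left.png").convert("L"))
right = np.asarray(Image.open("right.png").convert("L"))

# color-code the pair: left view into the red channel, right view into the
# blue channel, so the red filter passes one view and the blue filter the other
anaglyph = np.zeros(left.shape + (3,), dtype=np.uint8)
anaglyph[:, :, 0] = left    # red  <- left image
anaglyph[:, :, 2] = right   # blue <- right image

Image.fromarray(anaglyph).save("anaglyph.png")
```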
Here's one from the last series of Mars rovers. NASA put a lot of images from the rovers up on the web in this format, for the Spirit and Opportunity rovers that went up five years ago or so. This is a picture of the deflated airbags after landing: as the rover came off the platform it looked back, and it's got stereo cameras on board. These are called anaglyph images, and there are already plenty of them up there, so go to the website and have a look. This one's quite striking; it gives quite good depth perception. If you look at the image without the glasses on, you can see the overlap between the red and the blue, and you can see how it changes as a function of distance. In this one the shift actually gets bigger with distance, but that's just because of the way this particular image was taken. Has everyone had enough fun with the red and blue glasses? Very good; best lecture of the year.

Another, more sophisticated way to do it is with shutter glasses. Has anyone seen a 3D TV, with the glasses? What they're doing is this: the shutter glasses have two LCDs which can be either opaque or clear, so when the TV is displaying the left image, this shutter is open and this one is black, and then it flips over. The images are being displayed at maybe 50 or 100 hertz, left, right, left, right, and the glasses are in synchronization, so they open and shut at the right time. Generally there's an infrared transmitter on the telly sending a signal to the glasses to do the right thing; if they weren't synchronized, the effect would be lost.

The problem with all of these display technologies is that you've got to wear something, which is very unnatural: cheap colored glasses, or shutter glasses, or you've got to look into a gadget or put a gadget on your head. People have been trying for a long time to figure out ways around this, and there are a few technologies starting to come out now that go some of the way. This is a fairly recent one, and it's a bit of a busy diagram, but basically there's an LCD panel at the front which displays the picture, and behind that is an arrangement of prisms. You illuminate a light on one side, and the light comes off those prisms and through the LCD screen in a direction that hits mostly your left eye; then you switch to the other light, and the rays come out at a different angle and hit mostly your right eye. You have to move your head backwards and forwards to get into the zone, but once you're there, you get a really strong 3D perception without having to wear anything on your head. This is a technology that could potentially come to phones one day soon, which would be pretty exciting. What's really nice about it is that it's quite compact: it's the existing LCD display, and instead of the normal backlight it just has a slightly more complicated backlight. So I can imagine this could well be in phones in the next few years.
So that's how you can take two images from cameras a little distance apart (typically the cameras need to be about the same distance apart as your eyes, otherwise the depth scale becomes all wrong), present them back to the two eyes, and get a sense of three-dimensionality. But what would a robot do? A robot is going to take images from two different cameras and do some computation on them to work out the 3D structure, to build a three-dimensional model of the world.

I'll give a little example here (it would be easier if I could wander over and point, but I can't). I've taken two pictures of the Eiffel Tower: I took one picture, walked over there, and took another, and the amount that something shifts between the images depends on how far away it is from me. In computational stereo, what we do is take a little template from the left-hand image (here I've just chopped out the top of the Eiffel Tower), and in the second image we search for that template at a number of different horizontal locations and find the location at which it gets the best fit. This is a simple image-matching technique: for the pixel at the middle of the top of the tower, I take the surrounding pixels, call them a template, look for that template along the horizontal direction, and when I find it, I say that's the horizontal shift at that particular pixel. So I can compute the disparity of every pixel. Generally I do this over a range of disparities. It can't shift the other way; that's not possible: if I move to the right, everything in the image has to move to the left, so there's no way the template can be further to the right in the other image. That reduces the amount of computation I need to do. I also know something about the maximum distance I need to search, which depends on how close things can be to me. So it's a very simple computational approach: an awful lot of computation, but the technique itself is very simple-minded.

If I take two pictures of this rock pile with something like the robot stereo camera and grind away at them, I can produce an image that looks like this. It's shown in grayscale, but this is what we call a depth image: the brightness is related to depth, so things that are bright are close to me and things that are far away are darker. This is an image where the value of a pixel is not a color; it's how far away that point is from me, which is really powerful. These images look kind of ghostly because the original color is gone, the texture is gone; all we have is the distance of each pixel in the scene. In the MATLAB toolbox you've been using there's a function called stereo: you give it a left image and a right image, and it produces a depth map just like this. Once I've got that, I can do a 3D reconstruction, which looks pretty crappy here, but if you rotate it around in MATLAB it looks much more three-dimensional than it does as a static image; it's a rough three-dimensional rendering of the structure of those piles of rocks. And if I have these two images, the left image and the right image, there's also a function in the toolbox that turns them into an anaglyph. Whack on the glasses.
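For the curious, here is a simple-minded sketch of that template search in Python; this is not the toolbox's stereo function, and the window size and disparity range are assumptions. It uses sum-of-absolute-differences matching on a rectified grayscale pair:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_stereo(left, right, max_disparity=32, window=7):
    """Naive block-matching stereo. left/right are rectified grayscale
    images as 2-D float arrays; returns the per-pixel horizontal shift
    (disparity) of the left image relative to the right."""
    best_cost = np.full(left.shape, np.inf)
    disparity = np.zeros(left.shape)
    for d in range(max_disparity):       # candidate shifts, one way only
        # align left pixel (v, u) with right pixel (v, u - d)
        shifted = np.roll(right, d, axis=1)
        # sum of absolute differences over a window-by-window template
        cost = uniform_filter(np.abs(left - shifted), size=window)
        better = cost < best_cost        # keep the best-matching shift so far
        disparity[better] = d
        best_cost[better] = cost[better]
    return disparity   # bright (large d) means close, dark means far
```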
So this is a really powerful technique for a robot. Cameras are pretty cheap, so I can just take two pictures, do a whole bunch of image processing, and in a small fraction of a second I've got the three-dimensional structure of the world. Three-dimensional structure is really powerful: it tells me how far away an object is, and if I'm a robot, that tells me whether I'm going to collide with it in the next tenth of a second or the next ten seconds. From three-dimensional structure I can perhaps figure out what type of object it is, and I can use that to plan the path or the task of my robot.

Another thing I want to talk about is what you can do if you just take two pictures of the same scene. What we have at the top there is a picture of the Eiffel Tower taken by my daughter Lucy, and down the bottom is a picture that I took of Lucy taking her picture of the Eiffel Tower: that's Lucy in the foreground, holding her camera up like this. We took the pictures at almost exactly the same time. It's trouble when you've got a professor as a father, I guess.

It's often really important, when you're processing images of the same scene taken from different cameras, to figure out: for a particular point in the world, where is that point in my two images? Where's the top of the Eiffel Tower in this image, and where is it in that image? What are its pixel coordinates in the two images? Once I know that, I know all the geometry of image formation and I can do some cool stuff.

There's a whole family of techniques called feature detectors. In the overlay on this image, each circle represents what we call a feature: something that's distinctive, a pattern of pixels that's quite unique, so that I've got a good chance of finding it again in any other picture I take of the Eiffel Tower. The center of the circle is the feature's position, and the size of the circle says something about how big the feature is. You can see that on the ground there tend to be small features, tight patterns of pixels, but the detector also finds some things that are very big: one feature might be the whole Eiffel Tower, while another might be a dead leaf on the ground. So each feature has a position, a description that enables me to compare it with features from other images, and some sort of scale or size.

Once I've got these features for one image and for another image, I can look for what's called correspondence. Here I've taken my picture and Lucy's picture, taken with two quite different cameras, different focal lengths, different numbers of megapixels, and those green lines join points that are the same point in the world found in the two different images. The pixels at the two ends of a green line represent the same thing in the world, seen from different locations. Once I've got that, that information is gold; I can do some cool tricks.
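The lecture doesn't name the particular feature detector used, so as an illustrative sketch, here is the same idea with OpenCV's ORB detector: find distinctive, scale-tagged features in two images, then match their descriptors to get correspondences (the file names are placeholders):

```python
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder names
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# each keypoint has a position and a scale; each descriptor summarizes
# the pattern of pixels around it so it can be found in another image
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# brute-force matching: each match claims the same world point in both views
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# draw the "green lines" joining corresponding points
out = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)
cv2.imwrite("correspondences.jpg", out)
```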
One trick: there's the picture Lucy took at the top, here's my picture of Lucy taking that picture, and all those correspondence lines converge on the point where her camera is. So just from two images and this trick, I can work out where the camera that took the other picture was. This is my picture, and I can work out where the other picture was taken from, with some simple geometry, which is pretty powerful.

This technique of finding the same point in the world in multiple pictures is what's used when you stitch together panoramas. You take a bunch of overlapping pictures, you find points that appear in this picture and in that picture, you find enough of them, and then you stretch and shift the images until they overlap exactly; that's how you build panoramas. Here's an example of what's called image stitching or image mosaicing: I've got maybe half a dozen aerial photos, and I've overlaid them one on top of the other by finding points that are the same in multiple pictures and shifting the pictures around until they line up. You can see that the images are quite nicely registered, making a panorama. I'm not going to go into the details of how we do the feature extraction and the matching; I just want you to get the general concept.
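One common way to do that stretch-and-shift step is to fit a homography to the matched points with RANSAC and then warp one image into the other's frame. A rough sketch continuing from the matching example above (the RANSAC threshold, canvas size and simple paste-over compositing are all assumptions):

```python
import numpy as np
import cv2

# matched point coordinates, from kp1/kp2/matches in the previous sketch
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# robustly estimate the transform that maps image 1 onto image 2
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# warp image 1 into image 2's frame on a wider canvas, then overlay image 2
h, w = img2.shape[:2]
pano = cv2.warpPerspective(img1, H, (2 * w, h))
pano[0:h, 0:w] = img2
cv2.imwrite("panorama.jpg", pano)
```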
The last couple of slides I'm going to run are some pretty cool work that people have done on very, very large scale reconstruction. I'm just going to run these two videos and we can talk a little bit about them; I'll try to crank the volume up.

[Photosynth demo video] You can think about what Photosynth does as linking images together: whenever images are taken in a common environment, it's as if you form a hyperlink between them. And if you think about the emergent network of hyperlinks between images that can be built by a crawler going out and searching the whole web, it's a very powerful idea. Here's a shot of St Peter's Basilica, where we can navigate through hundreds of photos. The fun thing happens when we arrange all of these into a common three-dimensional environment. Here's a point cloud model that's been reconstructed; these are just pictures they got from Flickr, lots of photos sitting in their own planes inside that model. Let's dive in like this, just moving from side to side. These white boxes now appearing on the screen show where photos were taken, so for example if you want a close-up over here, you click on that, and you see that everything is registered perfectly with the three-dimensional model. So you can imagine a technology like this, with many people's photos being registered simultaneously, becoming like a three-dimensional map of our universe. We have a three-dimensional reconstruction of the environment, and we can of course also look at the photos individually, and from there navigate around the space either via photos or via the entire environment; this is all of them turned on simultaneously. If you want to look at other images similar to the one we're looking at right now, we can do this trick: we move it close to the center of the screen and see all the images that share a lot of content with the image we were just looking at; these are nearly identical shots. Here, for example, is a close-up of this clock; looking at similar shots, we see that the clock also occurs in a number of other photos, like this one. So this gives you a way of grouping and navigating between images using the image content, without any kind of tagging having taken place beforehand, no hand intervention. This shows how we can zoom in on different parts of the image, and as we zoom, only the data necessary for that particular part comes in. This is all of the images that have the same content anywhere in them: here's another image of the same museum, another image, and you can see the registration happen in real time as we go back and forth between those images. Here we're moving back and forth among neighboring images, images that share some content, which gives you a kind of tour, a rapid way of navigating around inside that space. If you had an image like this one somewhere on the web and you wanted to know what's in one of those murals, another photo would be discoverable just like that; that photo could come from somewhere else entirely. It certainly gives you a way of looking at other perspectives on something, or close-ups, or what's around the corner, based on a starting image. Say this close-up is on a web page that talks about this particular seat: you can dive in and then dive back out at that web page, so it gives you a way of looking contextually across the different places on the web where the image content actually lives. The long-standing dream of augmented reality, where the computer tells you about the real world you're immersed in, will finally be delivered with this kind of technology. We're going to see a collision of the real world and the virtual world that will create an incredible experience, where people can go and visit and really get a sense of what it's like to see things they've never seen before.

So you get the general idea of what they're doing there: they're mining photographs off the internet, and just from the detail that's in those images they can group them without any kind of tagging, work out where the photographs were taken from, and basically reconstruct the views from lots and lots of different photographs. It's a pretty awesome way of navigating.

The last one is a similar idea. [Rome in a Day video] Who says Rome wasn't built in a day? Using about 500 computers and 150,000 images, Steve Seitz and his colleagues at the University of Washington's Seattle campus reconstructed many of Rome's famous landmarks in just 21 hours. The idea behind Rome in a Day is that we wanted to see how big a city model we can build from photos on the internet. With support from the National Science Foundation, they're rebuilding Rome pixel by pixel instead of brick by brick; calculations that once took months now take hours. This is the largest 3D reconstruction that anyone has ever tried. It's completely organic: it works from any image; someone uploads the photos and they suddenly appear in our models. It starts with a trip to the photo-sharing site Flickr to search for images of the real thing. Once pictures are identified, the computer starts the process of making 3D objects from 2D stills: if there are three photographs of me, we find three points in those three photographs which all correspond to my nose, and from that we know there are three points in these three images which correspond to a single point in the 3D world. Computers map huge clusters of these points into 3D space, creating ghost-like images called point clouds; those squares represent the positions of the source photos.
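The nose example in that narration is triangulation: matched pixel coordinates in several views, plus known camera poses, pin down a single point in the 3D world. A small sketch (the camera matrices and pixel values are invented, chosen to be consistent with the earlier 800-pixel/0.1 m disparity example):

```python
import numpy as np
import cv2

K = np.array([[800., 0., 320.],     # assumed camera intrinsics
              [0., 800., 240.],
              [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                # camera 1 at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.], [0.]])])  # 0.1 m to the right

x1 = np.array([[320.], [240.]])   # the point's pixel coordinates in view 1
x2 = np.array([[304.], [240.]])   # the same world point, shifted left in view 2

Xh = cv2.triangulatePoints(P1, P2, x1, x2)   # homogeneous 4-vector
print((Xh[:3] / Xh[3]).ravel())              # -> approximately [0, 0, 5] meters
```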
For buildings, I think we can get accuracy to within a few centimeters, and for individual objects that are photographed closer up we could potentially do a lot better, millimeter-level accuracy. Finally, color and texture are added, and what you get is a virtual 3D tool, like this fly-through of Dubrovnik, Croatia. What excites me is the ability to capture the real world, to be able to reconstruct the experience of being somewhere without actually being there. Look for this next-generation technology to show up in online mapping sites, video games, and a whole lot more. For Science Nation, I'm Miles O'Brien.

So this is one of the many things Google is doing with its Street View cars: not just letting you click on a map and see what the view looks like from the road, but now actually creating whole 3D models of the buildings that are there. We have the technology to do this now. The mathematical techniques are quite complicated, and it's enormously expensive in terms of computing power, but companies like Google have so much computing power available to them that it's very feasible. So these are some of the cool things you can do with images and geometry: we know how to project the three-dimensional world into a two-dimensional scene, and from lots of two-dimensional scenes we can recreate the three-dimensional world. We can go backwards and forwards.

A quick summary of what we covered. We use a number of techniques to figure out how far away things are from us, from simple measures like occlusion through to aerial perspective, the muscles in our eyes, and the directions our eyes are pointing. We can present pairs of images to our eyes, and as long as we get the right image to the correct eye, we get a very, very vivid sense of depth. And if we can find the same points of the world in multiple images, we can reconstruct the scene's three-dimensional structure.

A last slide here with a few announcements, some of which I mentioned at the beginning: field trip next Thursday, the bus leaves at 12:10 pm sharp from William Street, across the other side of Alice Street.
Info
Channel: Peter Corke
Views: 7,393
Rating: 4.8571429 out of 5
Keywords: stereo, depth, binocular vision, depth cues, perspective, computational stereo, depth estimation, image feature, 3d reconstruction
Id: C38o9P3SNW4
Length: 40min 36sec (2436 seconds)
Published: Thu Oct 11 2012