Last week I made my very first 3D scan using Polycam. It uses a technology called photogrammetry to generate a 3D model from a series of photos taken at multiple angles. This 3D model can be used in augmented reality or virtual reality applications, which is why it's so interesting to me. Recently a new technology appeared called NeRF (Neural Radiance Fields), and it made a ton of headlines. It's similar to photogrammetry because it's also a way to visualize a 3D scene or object using images as input, but it differs from photogrammetry a lot. But before I tell you how, please subscribe and hit that bell so you don't miss any new insights from our channel.

The main difference between these two technologies is that photogrammetry generates a 3D model with meshes and textures, stored in a way traditional 3D tools can use, so we can use it in 3D animation, games, or VR and AR applications. NeRF generates a radiance field instead of a traditional 3D model, so the way a NeRF stores a 3D scene is very different. NeRF uses machine learning to create this radiance field, and with it you can render new viewpoints of an object from totally new angles. So when you move the model around, it appears 3D to your eyes. The radiance field has learned, and can guess, what an object would look like from any angle, and renders the image you see on your screen.

To give you an example: we used to use a series of images with a slider to make an object on a website appear three-dimensional. I remember this super cool slider on the Apple website where you could see the iPod Touch from all kinds of angles and whirl around it. It almost seems like a 3D model, but if I want to see the iPod from an angle that was not captured by any of the pictures, I'm out of luck. With Neural Radiance Fields, we can train the machine learning algorithm, and then with the radiance field that is created, we can generate images to see the iPod from totally new perspectives too.
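To make the "radiance field" idea a bit more concrete, here is a minimal sketch of how such a field can be rendered, assuming the classic NeRF setup: a function that maps a 3D point and a viewing direction to a colour and a density, and a camera ray that marches through space accumulating colour. In a real NeRF that function is a trained neural network; the `radiance_field` function below is a hypothetical hand-written stand-in (a small "glowing sphere") just so the rendering step has something to show.

```python
import numpy as np

# Toy stand-in for the trained NeRF network: maps a 3D point and a view
# direction to a colour (RGB) and a density (sigma). A real NeRF learns
# this mapping from photos; here we fake a dense orange sphere of radius 0.5.
def radiance_field(point, view_dir):
    dist = np.linalg.norm(point)
    sigma = 10.0 if dist < 0.5 else 0.0          # dense inside the sphere
    rgb = np.array([1.0, 0.4, 0.1])              # constant orange colour
    return rgb, sigma

# Classic volume rendering: march along a camera ray, accumulating colour
# weighted by how much light still survives to each sample (transmittance).
def render_ray(origin, direction, near=0.0, far=2.0, n_samples=64):
    ts = np.linspace(near, far, n_samples)
    delta = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        rgb, sigma = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-sigma * delta)     # opacity of this segment
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

# A ray through the sphere picks up its colour; a ray that misses stays black.
hit = render_ray(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
miss = render_ray(np.array([5.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
```

Because any camera position and direction can be fed into `render_ray`, this kind of representation can produce viewpoints that were never photographed, which is exactly what makes NeRF different from a slider of fixed images.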
I recreated the iPod Touch slider at home, used the images in Luma AI, and this was the result. Unfortunately, the original iPod slider images did not work because there's no background. The nice thing about NeRF is that reflections and light effects can be captured very accurately. Water, glass, and shiny surfaces usually don't work well with photogrammetry and the traditional 3D models it creates. The downside of NeRF is that it's not easily applied in AR or VR applications yet. However, I'm sure that will improve over time, with better exporting tools and special viewing applications that will probably be developed. For my own experimentation, I used Luma AI to experiment with NeRF and Polycam to experiment with photogrammetry. Follow me and subscribe for more insights about augmented, mixed, and virtual reality.