IKEA's Journey Through 3D Visualization and Spatial Computing - by Martin Enthed

Captions
[Applause] Thank you so much, and it's really nice to be invited to talk here at AWE. I will take you on a 3D visualization and spatial computing journey at IKEA. But first, I hope everybody knows what IKEA is. Yeah, I guess so. The IKEA vision is to create a better everyday life for the many people. There is actually nothing in that vision saying that we should sell furniture, but that is what we do, through well-designed, functional home furnishing products, made so that as many people as possible can afford them. To give you some sense of scale: we had a turnover last year of about 48 billion euros, we sell about 25% online and 75% in the stores, we have about 220,000 employees around the world, and last year we opened 71 new physical locations, and we are still doing that.

But you are probably not that interested in IKEA's vision, and maybe not in the company profile either; you are interested in the 3D journey. And the 3D journey really started with this equation. This is the render equation, and we have been using it to create these images. That is the whole idea; that is really what it has been all about.
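The render equation referred to here is presumably Kajiya's rendering equation (1986); the slide is not reproduced in the transcript, but its standard form is:

    L_o(x, \omega_o) = L_e(x, \omega_o)
        + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

In words: the light L_o leaving a surface point x in direction omega_o is what the surface emits itself, plus all incoming light L_i over the hemisphere Omega, weighted by the surface's reflectance f_r (the BRDF) and the angle of incidence. Physically based renderers approximate this integral, which is why one product model can be lit convincingly in any scene.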
It started back in 2006, when we produced this little product there; it is BERTIL, the chair. And that was the only thing for a year, and then we continued. In 2006 that was it; in 2009 we did the first room; in 2016 we started doing animations and film and mixing it all; and in 2015 we ourselves were talking about large-scale production. So it took us some time. And just to make a note now: everything I show on screen from here on is computer-made, just so you know.

So when we talk about large-scale production, it is these kinds of images. And the thing that made this hard for us is that we didn't stop doing photography; we actually do these in combination with photography. What you also see in these kinds of pictures is that we have to have a lot of products together with things that we don't sell, and we have to make those in 3D too. That means we also have to do food, and we need to fill out drawers with things that we don't sell. This is to prove and show the function of things, so food on plates needs to be there too. But this is just to give you an idea of what we did.

To be able to do this, we started a big project internally in 2009, which was to build a 3D model bank, and at the same time we were cross-training photographers and 3D artists. This model bank contains a lot of stuff besides the bread you see on screen. We have all our products in here, about 60,000 high-resolution models, and we store all the materials, all the surface appearances, separately. The reason we do that is that we don't want to put them on the actual objects themselves: Oak number one is maybe on 300 to 400 products, and updating all of those takes a lot of time. When we have it like this, we can also search for which products have a certain material, and find them backwards. We also have the textures and so on separated from the actual materials, again for reuse. And our textures are really, really large; 30K is a normal one, 30K by 10K at 16 bits. So we are doing this at very high scale and very high resolution.
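Here is a minimal sketch in Python of that decoupling: surface appearances defined once, products holding only references, plus a reverse index for the "search backwards" step. All names (MaterialBank, oak_01, and so on) are hypothetical illustrations, not IKEA's actual system.

    from collections import defaultdict

    class MaterialBank:
        """Toy model bank: surface appearances stored separately from
        geometry, so a material is defined once and referenced by many
        products (hypothetical sketch, for illustration only)."""

        def __init__(self):
            self._materials = {}                            # material id -> definition
            self._products_by_material = defaultdict(set)   # reverse index

        def define_material(self, mat_id, definition):
            self._materials[mat_id] = definition    # one update, applies everywhere

        def assign(self, product_id, mat_id):
            self._products_by_material[mat_id].add(product_id)

        def products_using(self, mat_id):
            """'Search backwards': which products reference this material?"""
            return sorted(self._products_by_material[mat_id])

    bank = MaterialBank()
    bank.define_material("oak_01", {"base_color": (0.55, 0.42, 0.28), "roughness": 0.6})
    bank.assign("BILLY_bookcase", "oak_01")
    bank.assign("POANG_chair", "oak_01")
    print(bank.products_using("oak_01"))    # ['BILLY_bookcase', 'POANG_chair']

    # Re-defining "oak_01" once now propagates to every referencing product,
    # instead of editing 300-400 product files one by one.
    bank.define_material("oak_01", {"base_color": (0.57, 0.44, 0.30), "roughness": 0.55})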
We started doing this in 2009, and we were ready in 2012, but we skipped a lot of products. We skipped these; we only started doing them five or six years ago: hairy things, textiles, the kinds of stuff that are hard. And unfortunately for me, who works with 3D graphics, IKEA sells a lot of different things. We even sell these, and we make them in 3D too; of course we have to. That means we have to figure out how to fold towels, how to grow plants, and how to groom hair on small toys, to be able to build all the assets we need for all the products. And we do them in really high resolution so they can handle a long-term life; I will come back to that a little bit later.

That also means we have to render these things, and rendering them takes a lot of time and effort. So we have a render farm of about 3,500 CPU cores. We can't really use GPUs, because our textures are too large; the GPUs have been too small. Then we give even more power to our artists, which means that during lunch or in the evenings we have a render capacity of about 7,000, almost 8,000, cores. This works nicely for us; we render almost everything with this. We also do all the real-time 3D model conversion on the same pipeline, so it is the same pipeline running both of these things. Then we also have a safety net, and that safety net is called the cloud: we have on-demand render capacity in the cloud that we can turn on and off. We don't use it that often, but to have it up and running we keep about 6 million assets synced online.
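Some back-of-the-envelope arithmetic makes the "GPUs are too small" point concrete. The talk only gives the resolution and bit depth, so the four-channel (RGBA) assumption below is mine:

    # Rough memory footprint of one uncompressed 30K x 10K, 16-bit texture.
    width, height = 30_000, 10_000
    channels = 4              # assumption: RGBA; the talk doesn't say
    bytes_per_channel = 2     # 16-bit
    size_gib = width * height * channels * bytes_per_channel / 1024**3
    print(f"{size_gib:.1f} GiB per texture")    # ~2.2 GiB

A scene with dozens of such textures quickly exceeds the on-board memory of GPUs of that era, while CPU render nodes can page them through ordinary system RAM.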
So this is what we are using it all for: we are making still pictures, and we are making animations like this. And the real reason is that our products have a function, and that function we need to show. The same product can have different functions, which means we have to show it in different places, move it around in spaces, and show it with a lot of different things in it, because that is what makes sense in this world. So that is the JPEG and animation production, and that took us about 10 years.

At the same time, we were adding the augmented reality stuff. In 2012 we used the catalog as a marker: we printed about 200 million of them and gave them out, so why not use it as a marker for AR? We did that for five years, from 2012 until 2017, and then in 2017 we skipped the catalog and used ARCore and ARKit instead. And in 2016 we actually did our first XR, or VR, thing and put it on Steam. I don't know if anyone in here has played the meatball-kitchen kind of thing that we did back in the days? Yeah, a few of you. It was one of the earliest things, and it was my team making it; that was really fun.

Things have been progressing since then. In the latest years we have been taking all our planning and 3D editor tools that are online on the website and putting them all in 3D; the kitchen planning tool has always been three-dimensional, it has been there since 2001. And if you look at this, you can say that the use of 3D, internal use and external use, has been steadily going up all the way since 2005 and that first BERTIL chair in the catalog that wasn't bigger than a stamp. The latest addition is that we have added it on our product information page, and it looks like this: a small little button there, and then you can of course spin it, you can see the files and everything, and of course you can go to an AR version natively on the phone, if you have a phone. For this audience this is probably very, very basic, but it is there, it is web-based, and it is for all the products.

But the thing that is interesting about working for a company like IKEA, especially for me, is that we have to do everything at scale. That means we have to have everything from product development onwards as a fully connected pipeline. And if you look at the green arrow, that should be easy. I see some people laughing: it is not easy. Even if it is coming from CAD, even if it is 3D, even if it is done that way, it is still hard to make this work. Then you have the lower path there, where you have plants and toys and textile stuff; that is even harder. About 50/50 of our products have hard-surface parts, the kind you can get from a solid modeling tool, versus the ones in the lower path, and sometimes they are mixed together in the same actual product. But we try to go to one really physically correct 3D asset, and then go from that one down to the different use cases: sometimes using it as it is, but very often taking it down and compressing it into glTF, USD or whatever, for different render engines to use in different solutions.

Since 2009 we have made three different levels of detail of our products. The highest one is down to 0.2 mm, that is 5,000 definitions per meter if that helps, and they go down to 1 cm accuracy. This is to be able to use the same kind of asset for different use cases. In the last five years we have actually gone up to four times that, so we are at 20,000 definitions per meter, or 0.05 mm accuracy. And this is to be able to handle the maximum of the human eye, the retina display, or the 60 pixels per degree that your eye can handle. I will come back to that a little bit later too.
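The accuracy figures and the 60-pixels-per-degree limit fit together arithmetically; here is a quick check, with the viewing distances as my own assumed parameters:

    import math

    # 0.2 mm accuracy  -> 1 / 0.0002 m  =  5,000 definitions per meter
    # 0.05 mm accuracy -> 1 / 0.00005 m = 20,000 definitions per meter

    ppd = 60                                # "retina" limit: pixels per degree
    pixel_angle = math.radians(1 / ppd)     # angle subtended by one pixel

    def finest_visible_detail_mm(distance_m):
        """Smallest feature one pixel covers at this viewing distance."""
        return math.tan(pixel_angle) * distance_m * 1000

    print(finest_visible_detail_mm(1.0))    # ~0.29 mm at 1 m away
    print(finest_visible_detail_mm(0.17))   # ~0.05 mm at ~17 cm, so the new
                                            # assets hold up even when a viewer
                                            # leans in very close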
The thing that we have to think about, fortunately or unfortunately, I don't know, is that we have a lot of products that have a really long life, even some that people know by name, like BILLY. But this is a POÄNG chair, and it was designed in 1978. We built the first 3D model of it somewhere in 2011, and we can still load that and use it in our production. That is over 10 years of technical development, and it still works. So how do you define something today that you will load on the other side of 2035? It is almost impossible, of course, if you don't have a crystal ball, if you can't look into the future. But you can try to follow technology, and we are trying to do that as much as we can; we are also looking at the human need, and then we are trying to figure out how to do this as physically accurately as we can.

One other way of doing it is to join standards organizations. It is not as boring as it sounds, it actually isn't, because there you are actually talking to people about how you are doing these things and how you can standardize things, so you can make once and use many times. We have been part of Khronos since 2017, mainly because of 3D Commerce and the glTF file format, and we have been part of starting the Metaverse Standards Forum and the Alliance for OpenUSD; I will come back to those. For a retailer not selling silicon or software, joining Khronos, I can tell you, it was scary sitting in meetings there with all these tech guys. But we joined, and we saw that we could do good there. We also joined as a promoter member together with these guys; we are sitting on the board, and as you can see from the logos there, we really stick out. But it is quite fun, to be honest, and I think we are doing some good.

During this time we were also part of starting, or promoting the start of, the Metaverse Standards Forum. I just want to note, about the yellow-marked thing here: it does not create any standards. People are mixing this up. It is a place where leading standards organizations and companies cooperate; it is a thing I have never seen anywhere else. So if you don't have time, or don't feel that you would like to be in on standards, join this and listen in; you will learn so much about where the industry is going.

We fell in love with USD already in 2016, definitely me personally, and we have been lobbying for many years that this needs to become a standard, that it needs a standardization body. So when they approached us to start this last summer, we joined immediately; we are one of the members here. We really, really like USD, and we hope it can be a non-proprietary (I almost can't say that word) standard that we can save our high-resolution original objects in, so we don't have to have them in something that somebody else owns. IKEA is really interested in surface appearance, how that looks, and that is where we are spending a lot of our time. So we have been playing with MaterialX for a few years now, and we are hoping that the Alliance for OpenUSD will use it as a base for material definition; that would be nice, we would love that. This is taken from the IKEA "strolla" editor: we made our own editor for MaterialX, to be able to see it fully rendered in the same editor. And here I think we can help the industry, because we are really picky about our products looking the way they do in real life, and if we can manage to make this open source, we would love to.

Now I would like to take you on a little journey; it is a personal turning point. It happened on the 4th of December 2014, at 1:36 p.m. Pacific time. I was stuck in a room in Seattle, at Valve, and this was the room. They pointed at the floor and told me: please put this on. I was a little bit scared, but that was the first time I could walk around on the other side of the screen for two hours without getting motion sick, and I get motion sick very easily; that is not a good trait when you are working in this industry. And it was awesome. It was really, really cool. I never thought I would feel the feelings I had when I was walking around those two hours. The graphics weren't great, but they were really happy to lend us hardware already in January 2015, and if you know your history, this didn't become public until 2016. We were using it to look at chairs, a lot. We looked at them in real life and in this, and we couldn't see a big difference; that was really, really cool. We also started to see that this would probably work, size-wise, color-wise, everything, and we also started to feel that there is something going on here that fools the brain.

That got me super interested in learning more about the human brain, the neurons, the way we interpret the images we get into our eyes, and how that really works in there. If you look at this, this is the way I was taught in school: that the eye is like a camera, and that's it. But that is not true at all. Not true at all. I would like to show you two examples, and you might have seen these in the past. If you take the left one, you have the A and the B squares. As normal people, you will probably see these two squares as having different color intensities. But in the back of your brain you have actually changed those color intensities to make sense of the image. This is based on your data, the training data for your neurons, to use current language, that has been burnt in: that this must happen if there is a shadow there. Because they are actually the same color; it is you seeing a shadow that makes this happen. The same thing goes for the right one: you have an edge there in the middle, and that edge looks like the top part is in the sun and the lower part is in the shadow. That means your brain is changing the input to make the lower part lighter than the top. But it is not; your brain is changing what you see to make sense of it, to get a three-dimensional thing in your brain.

Another thing we also figured out very fast is that you are only looking out into the world at about 2 degrees with high resolution. It drops off super fast, down to something like six pixels per degree; it is about 60 pixels per degree only in those highest two degrees. If you stretch your arms straight out like this and put your thumbs together, your thumbnails are about 2 degrees; that is where you have high resolution. The rest is actually filled in while you are looking around: you get updates from this small little spot, and everything else is filled in by your brain. You actually have a blind spot in each eye, about 15 degrees out for this eye and 15 degrees out for that eye, and that is filled in automatically, like a healing-brush tool, by your brain; otherwise you would have felt like you have two black spots, and you don't. So you are looking out into the world with high resolution like this, and the rest is filled in by your brain. It actually becomes like a cursor, so you can see where people are looking, and you can use this; of course the newer headsets are using that.
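The thumbnail rule of thumb checks out numerically, and the same numbers show why the eye-tracked (foveated) rendering in newer headsets pays off. The arm length and headset field of view below are my assumptions:

    import math

    arm_length_m = 0.7          # assumption: thumbs held at arm's length
    fovea_deg = 2.0             # the high-resolution window from the talk
    ppd_fovea, ppd_periphery = 60, 6

    # Width of the 2-degree foveal window at arm's length:
    width_cm = 2 * arm_length_m * math.tan(math.radians(fovea_deg / 2)) * 100
    print(f"{width_cm:.1f} cm")     # ~2.4 cm, about two thumbnails side by side

    # Pixel budget along one axis of an assumed 100-degree headset FOV:
    uniform = 100 * ppd_fovea                                   # 60 ppd everywhere
    foveated = fovea_deg * ppd_fovea + (100 - fovea_deg) * ppd_periphery
    print(uniform, foveated)        # 6000 vs. 708: rendering full detail only
                                    # where the eye looks is a huge saving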
But that means that the only place where there is a three-dimensional thing is in your brain. Everything else is two-dimensional, all these small portions of data that you get; the only place you have a 3D thing is in your brain. That means your brain is a 3D reconstruction engine. And up to now, a company like us, and every company that sells products, has been able to show things by showing pictures, and you, deducing from the shadows you see and the lighting you see, are building a three-dimensional representation in your brain and figuring out whether you are going to buy it or not. But what if we can give you the 3D thing directly? What we think this will do is give you something you can understand with a lower cognitive load, and this is what we are preparing for, a.k.a. the future of spatial computing.

So the history we stand on is this: we have done this for many years, and we know what 3D is. I am not going to make any big predictions about the future, more than that it will be big and fast, bigger and faster. What we are talking about is really the next step after these computers. Going from stationary to mobile was a big thing, definitely a big thing, but the input and output of those smartphones was approximately the same as what you had on the stationary computers. Yes, I know the newer smartphones know a little bit about the space if you want them to. But I think going to the next step, where the devices we have understand the three-dimensional space and can give data back into that space, is a really interesting thing. Because if you have that spatial device, we can add a virtual product and put it in a space, your home space, where we know where the doors and windows and power outlets and all those things are. We could add things that are not only the visual-cortex information. We could add another human there, either in the same space with another device or at another geographical place, and get them into that space. We would be able to interact with that thing, lift it up, look at it together, maybe even touch it, like the speaker before me was talking about, and go into microscopic scale, maybe do cut-throughs, maybe even go to the forest where the wood for that leg was growing, if we want to. Things that we have been wanting to do for many, many years. And this is what we called spatial computing when we started this in 2017.

So we started what we call the Spatial Computing Lab, and to divide up the problem for us at IKEA, we set up these nine areas. Rooms: how do you define a room? Products: how do you define products, and how do they interact with rooms and with each other? Simulation of products: how do they behave in light, or how do they fold over each other? Room understanding: how do you understand a space, and recreate or redesign that space? Sound. Experience as someone else: experiencing the same space but as a person who is taller or shorter, or visually impaired, or in a wheelchair, or something like that. How do you share that experience with people? And then touch, and even smell. If you look at these, going from the left to the right, you see that they get harder and harder to do.

I would love to show you the quests we are doing; we call our projects quests, and we have done over 350 of them in the last seven years. But I can't show you a lot of those, because they are secret. I can show you two of them, though. On the upper left you have a small little test quest where we tried to move a virtual sofa through a corridor and interact with it, to see if you actually could move the sofa from one space to the other. It is really hard; it is hard in real life too. On the lower right you see the first attempts at several different people working together with the same AR planning. These are more than four years old, but they are what I am allowed to show you; I can't show you more than that.

Going back to these areas, I would like to talk about something that I think we can contribute to and maybe even help out with, and that is the things around room understanding. Because companies like us are sitting on a gold mine: we are sitting on the possibility to make synthetic training data, and we know it, so it is not a secret. What we can do is go from these advertising images to deep-learning training data. The only thing needed is adding annotations, and as we are doing it in 3D, we can do permutations of things pretty fast, and as I showed before, we are pretty good at rendering, so we know how to do that. So we can take something like this and make a movement in it, walk around in it; we already have it in 3D, so we can do those things. And if you have been working with training data, you know that training data needs annotation, and that is a pain if you are taking it from the real world. But if you are taking it from 3D, you actually get it for free, down to the pixels, so we know what is under each pixel at each frame, if we want that. Then of course different training situations need different kinds of input, but if you just tell us what you would like to have, then we can do that.
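As a small illustration of annotations coming "for free": if the renderer writes an object-ID image alongside the final frame (a common trick; the specific setup below is my sketch, not IKEA's pipeline), per-pixel masks and bounding boxes fall out directly:

    import numpy as np

    # Pretend this came from the renderer: each pixel holds the ID of the
    # product visible there (0 = background). A tiny 6x8 toy frame:
    id_map = np.zeros((6, 8), dtype=np.int32)
    id_map[1:4, 1:4] = 17       # e.g. a sofa
    id_map[2:5, 5:7] = 42       # e.g. a lamp

    for obj_id in np.unique(id_map):
        if obj_id == 0:
            continue
        mask = id_map == obj_id                  # per-pixel segmentation mask
        ys, xs = np.nonzero(mask)
        bbox = tuple(int(v) for v in (xs.min(), ys.min(), xs.max(), ys.max()))
        print(obj_id, bbox, int(mask.sum()), "pixels")

Repeating this for every frame of a rendered walkthrough yields labeled data at each frame, which is exactly the annotation work that is so painful to do on real photographs.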
So why am I here talking to you? We don't sell hardware or software or services. We do employ some people sometimes, so if you want to work with us, come and work with us. But my main reason to be here is to put our problems into your brilliant heads; that is my real reason to be here, because we want to connect and collaborate with any of you enabling these new technologies. Because in the end, we want to be where the customers are, in their homes. We want to use technology to transfer information with as low a cognitive load as possible; that is the main goal. It is not really about computer graphics; the main goal is to transfer information with as low a cognitive load as possible, and we want to show our offer of well-designed, functional home furnishing products in your living situation. The IKEA vision is to create a better everyday life for the many people, but we will not build the base technology and the hardware enabling all of this. The only thing we want to be is the best at using it at scale, as smart as possible. That is what we want to do, so reach out to us. And I just want to end with what our founder always said: "Det mesta är ännu ogjort", and that is Swedish for "most things still remain to be done; a glorious future". Thank you so much for having me here. [Applause] [Music]
Info
Channel: AWE
Views: 11,485
Keywords: Augmented Reality, Virtual Reality, AR Events, VR Events, AWE Nite meetup
Id: Kup0d4Te3n0
Length: 27min 47sec (1667 seconds)
Published: Wed Jun 19 2024