Unity at GDC - ARKit 101: Learn how to build unique AR apps for Apple devices

Captions
All right, I'm going to go ahead and get started, so thank you for showing up, I know it's super early. I see a couple of familiar faces, so I'm glad you're here for the other side: I was doing some ARCore workshops earlier, but today we're going to be talking about ARKit. Out of curiosity, how many people have tried out ARKit before? A few people, that's good.

My name is Dan Miller. I work for Unity Technologies as an XR evangelist, which means I focus a lot on augmented and virtual reality: building sample applications, giving workshops, traveling around doing community outreach. If you have any questions about what I'm talking about here, general Unity stuff, AR, or VR, feel free to reach out to me at that email, and I have some business cards as well if anyone is interested afterward.

So what I'm going to do today is give a high-level overview of how ARKit works, because understanding how it works will help you develop better apps for it. Next I'll talk about some of the core features, and then some of the 1.5 beta features, which are out now; there's no announced release date for 1.5, but if you're in the Apple Developer Program you can get the beta and try them out. Then I'll go over some tips and considerations for development, both things we provide that you can take advantage of and things you can build yourself, give a brief overview of face tracking, which is the part of the ARKit SDK that requires an iPhone X, and then I'm going to go through some demos I've created, show some examples of what other people have created, and break those demos down, walking through the technical implementation of what's happening so you get a better understanding. I'm happy to share anything afterward; if you want the slides or anything like that, just email me.

So, how does it work? ARKit uses something called visual-inertial odometry. At a high level, it combines computer vision on the device's camera with IMU data from the device, meaning sensors like the accelerometer and the gyroscope. A lot of people ask me how ARKit came out so quickly after it was announced, while ARCore was more delayed, and one of the main reasons is that Apple has had total control over the manufacturing process of its devices for a very long time. IMU sensors like accelerometers and gyroscopes are twitchy, and the data you get from them can be very noisy, but because Apple calibrates those sensors in its manufacturing process, it's easier for them to reach a broad range of devices. On the Android side, phone manufacturing happens all over the world at a much larger scope, so it has taken longer to get those devices calibrated at the factory.

The core features, at a high level, are plane finding, which in the current release is horizontal-only, with vertical planes in the beta; positional tracking, which means that once you've found a plane and you move around in space, the device knows your rotation and position; and light estimation, which is the ability to understand the lighting of what you're looking at in the frame and shade or light your models based on it.
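Before digging into each of these, here's roughly how detected planes surface on the Unity side. This is a minimal sketch; the event names are from my memory of the Unity ARKit plugin's native interface, so verify them against the plugin version you're using.

```csharp
using UnityEngine;
using UnityEngine.XR.iOS; // Unity ARKit plugin namespace

// Sketch: listen for planes as ARKit detects them. Event and type names
// here are approximate; check the plugin source for the exact spelling.
public class PlaneListener : MonoBehaviour
{
    void OnEnable()
    {
        UnityARSessionNativeInterface.ARAnchorAddedEvent += OnPlaneAdded;
    }

    void OnDisable()
    {
        UnityARSessionNativeInterface.ARAnchorAddedEvent -= OnPlaneAdded;
    }

    void OnPlaneAdded(ARPlaneAnchor anchor)
    {
        // The anchor describes the plane ARKit found (center, extent, pose).
        Debug.Log("Found plane " + anchor.identifier + " extent " + anchor.extent);
    }
}
```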
For plane finding, ARKit uses feature points, which are interesting points that the computer vision algorithm recognizes, and once it finds three or four of them it creates a plane. If planes overlap, or ARKit understands them to be the same surface, they merge together into a larger plane. In the current release, planes can only be square or rectangular.

Next, there's hit testing. In ARKit, when you want to place objects in the world, you need a special raycast from the device into the real world, and ARKit provides this for casting against things like feature points, which is much quicker than waiting to find a plane. It's essentially a built-in raycast function, if you're familiar with those inside Unity, but a special one that casts into the world and can pinpoint those key feature points.

Next we have light estimation. Here are some pieces of pizza I've placed in the office, and you can see that when I move the device over here, where the frame is darker, it's actually adjusting the lighting of the scene. The way this works is that there's a script attached to a real-time directional light that modifies its intensity value, so when the ambient value gets darker, it turns down the intensity. ARKit also now reports an ambient color temperature, so if you were out in, say, the desert, with lots of orange around, you could use that color value to modify the directional light as well and light your objects a little more like the real world. I'll show a small sketch of that kind of script in a moment.

Now we can go into some beta features. The beta does have vertical planes, which has been a hot feature that everyone really wants. One thing to note is that, just like horizontal planes, you still need interesting feature points: if I tried to get vertical planes on this white wall right here, I would have a really tough time, whereas over here, where there's more going on on that wall, it might work. In general you have to be a little careful, because plane finding is entirely driven by feature points. This example is a painting in my apartment, and ARKit was able to find it based on the visual noise inside the painting.

Next are irregular planes. These actually build up a mesh based on what you're looking at, so you can handle things like circular tables; here I'm just doing it on top of my bed, but you can see the plane isn't exactly square. There are a couple of parameters you can modify, one for tiling the texture, which here happens to be the Unity cube, and one for the outline, so it gives you an outline and builds up planes from there. You can use these just like the other planes, but now you can detect and track differently shaped surfaces.
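Going back to light estimation for a second, here's a minimal sketch of that kind of directional-light script. This is not the plugin's actual script: I'm assuming you forward the ambient intensity and color temperature from wherever you read ARKit's per-frame light estimate, and the names here are illustrative.

```csharp
using UnityEngine;

// Minimal sketch of light estimation: drive a real-time directional light
// from ARKit's ambient estimate. OnLightEstimate is assumed to be called
// from wherever you receive the per-frame light data; the wiring is
// illustrative, not the plugin's actual implementation.
[RequireComponent(typeof(Light))]
public class EstimatedLightDriver : MonoBehaviour
{
    Light sun;

    void Awake()
    {
        sun = GetComponent<Light>();
    }

    // ambientIntensity is in lumens; ARKit treats ~1000 as a well-lit scene.
    // colorTemperatureKelvin is the ambient color temperature estimate.
    public void OnLightEstimate(float ambientIntensity, float colorTemperatureKelvin)
    {
        // Normalize around the 1000-lumen baseline, so a well-lit room gives
        // intensity 1 and darker frames dim the light.
        sun.intensity = ambientIntensity / 1000f;

        // Tint toward the estimated color temperature (e.g. warm desert light).
        sun.color = Mathf.CorrelatedColorTemperatureToRGB(colorTemperatureKelvin);
    }
}
```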
Last are image anchors, and this is a big one as well. If you're doing any sort of calibration, or aligning yourself to something, you can supply your own custom image anchors. The images do need to be built into the app; I know some people are trying to load them at runtime, and I've been having difficulties with that. Basically, you can take almost any picture, some will be easier to detect than others, and get an anchor from it, like a known position. If you move the image around it can refind it, but it won't track it as smoothly as some other AR SDKs you might be familiar with, like Vuforia. Still, this is super helpful.

There are a couple more beta features I'm not highlighting here. A big one is being able to redeclare your world origin. The way ARKit works inside Unity is that when you start the application you're positioned at (0, 0, 0), the origin of the Unity scene, and as you move around and navigate the world, you move relative to that origin. So if you built something that tracked your motion path right now, it would start at the ground, and you'd have to lift your device up to whatever height you're holding it at, say 1.5 meters. With the ability to reset your world origin, combined with image anchors, you can set your origin to a known spot: put something on the ground, reset the world origin to that position, and extrapolate from there.

All right, some tips and considerations. First, use tracking states. ARKit has a session that you can hook into at any point to get back useful flags about what's happening, things like "insufficient features" if you're looking at something like a white wall, or "excessive motion." With AR being as new as it is, I think it's important to onboard your users, especially if you're producing a consumer app, so they understand what's going on. If they're on, say, the train or the subway, and they open your app and it isn't tracking and they don't know why, it's going to be a bad time.

Related to that, the ideal motion for finding and detecting planes is a circular or back-and-forth scanning motion. So hooking into the session and providing some UI that instructs the user how to calibrate and find planes is always an important thing to consider.

Next: I've talked a little about feature points and described what planes are for. For attaching things to objects quickly, it's often more optimal, and a better user experience, to use feature points instead of planes. If I were making an app that measured the height of this podium, waiting for planes would be difficult: not impossible, but it can take a long time to find a plane on top here, cast an anchor onto it, then find a plane on the ground and measure that distance. Using feature points, I can just find something interesting on top and say "here's the start," go down to the bottom and say "there's the end." Instead of waiting for planes, you cast against feature points, which is how you get objects into the world quickly, and those hits do have a real position in the world, so definitely consider that. Below is a sketch of what that kind of feature-point hit test looks like.
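This is a minimal sketch modeled on the Unity ARKit plugin's hit test interface (the plugin ships a fuller version in its UnityARHitTestExample); the exact type and method names are from memory, so verify them against the plugin you're using.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.iOS; // Unity ARKit plugin namespace

// Sketch: hit test against feature points on tap and drop a marker at the
// hit, e.g. one end of the measuring app's tape.
public class FeaturePointPlacer : MonoBehaviour
{
    public Transform marker;

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        // Convert the touch to viewport space for ARKit.
        Vector3 viewportPoint = Camera.main.ScreenToViewportPoint(Input.GetTouch(0).position);
        ARPoint point = new ARPoint { x = viewportPoint.x, y = viewportPoint.y };

        // Cast against feature points only; no need to wait for a plane.
        List<ARHitTestResult> results = UnityARSessionNativeInterface
            .GetARSessionNativeInterface()
            .HitTest(point, ARHitTestResultType.ARHitTestResultTypeFeaturePoint);

        if (results.Count > 0)
        {
            // The result carries a world transform; drop the marker there.
            marker.position = UnityARMatrixOps.GetPosition(results[0].worldTransform);
        }
    }
}
```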
You can also disable and reset the AR session inside your app: if things aren't working, you can prompt a restart. And you can limit what the session looks for; if you're only using feature points, like in that measuring app, and don't want to find planes, you can restrict that in the session's initialization. Consider doing that: with all the data coming in from the IMU and the computer vision, tracking can be fairly intensive depending on the device, so the more you restrict what it isn't doing, the better.

All right, this is a huge one for me: use shadows. In the real world almost everything casts shadows, and in our ARKit plugin we've included a special shader that allows transparent objects to receive shadows. So you can create a plane, you can see me moving the plane there, it's just a 3D plane primitive, give it that special shader, and have a real-time directional light cast shadows onto it. When you place an object in the world, you just make this plane a child object at the base of your object, and it will catch the shadows.

Something I've done in the past to make, let's say, viral, or at least interesting, ARKit videos is to adjust the rotation of the directional light inside the app to match the shadows of real things. I live here in San Francisco, and there's a Jewish museum around the corner with a massive open courtyard; that's where I shoot all my AR videos, because the floor is interesting and there usually aren't people hanging out there. For a couple of videos I went out, looked at the angle and direction that people's shadows were being cast at, and then went in and modified the rotation of my directional light to match. It might be better to expose that rotation as a control in your app, but it adds a lot to the experience and really grounds objects in the world when the shadows are realistic; you can see with the plane here that we're getting accurate angles on the little checkerboard cube and the circle as well. One more thing to note: this shader happens to be included in the ARKit SDK plugin, but nothing restricts it to the iOS environment, so if you're using other mobile AR SDKs you can go ahead and just grab it from there; it's just a shader.
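Here's a minimal sketch of that shadow-plane setup. The shader name below is a placeholder for whichever transparent shadow-receiving shader you're using (the ARKit plugin ships one); everything else is plain Unity.

```csharp
using UnityEngine;

// Sketch: give a placed object a transparent shadow-catcher plane at its base.
// "Custom/MobileARShadow" is a placeholder name; substitute the transparent
// shadow-receiving shader from the plugin (or your own).
public static class ShadowCatcher
{
    public static void AttachTo(GameObject placedObject, float size = 1.5f)
    {
        var plane = GameObject.CreatePrimitive(PrimitiveType.Plane);
        plane.name = "ShadowPlane";

        // Child it to the object, sitting at the object's base.
        plane.transform.SetParent(placedObject.transform, false);
        plane.transform.localPosition = Vector3.zero;
        plane.transform.localScale = Vector3.one * (size * 0.1f); // default plane is 10x10 units

        // The shadow shader renders nothing but received shadows, so the
        // camera feed shows through everywhere the shadow isn't.
        var shader = Shader.Find("Custom/MobileARShadow"); // placeholder name
        plane.GetComponent<Renderer>().material = new Material(shader);
    }
}
```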
Next: use real-world size. The way Unity works for virtual and augmented reality, the size you author things at is the size they'll be in the real world. Unity's cube primitive is a 1 meter by 1 meter cube, so if I create one inside Unity and place it here in the world, it's going to be about a meter high. When you're creating things like digital characters or environments, keep in mind that the scale you author at in Unity is going to be relative to real-world scale.

Next, think about object interaction. A lot of the time in ARKit you're placing objects on a plane, and you might want to move them around on that plane, but most of the time you only want to move them in the X and Z directions. If you start to move things down in Y, they'll clip through the world, because the way rendering works in AR is that the digital objects get rendered last, on top of the camera feed. So when you're placing objects, keep them as close to the floor, or on the floor, as possible; if they start to clip through, you get weird perspective and rendering issues that are hard to read. There's a small sketch of this X/Z constraint below.

That was a lot of information; what's next? ARKit Remote. I'm going to be using this a little with the face tracking. What we've done is provide a one-way stream to Unity: you hook up an iOS device and use it to collect AR data, debug, and build on top of what the device sees. I definitely recommend checking it out, and you'll see how to use it for some more interesting features in a moment when I go through my demos. The general idea is that there's a special scene inside Unity; you build that to your device, launch it, and then when you run the editor with the device plugged in, you connect through the console and start streaming the data.

All right, face tracking. As you might be aware, this is an iPhone X-only feature; it uses the front-facing camera, which has depth sensors. The high-level features: first, a face anchor, which sits roughly at the center of your head and is tracked. One thing to note is that the anchor is inside your face, so when you line up objects, if you put something right at the origin, I'd end up with a mustache coming out back here or something odd like that, so keep that in mind. I'll say that ARKit Remote is super helpful for iterating on face tracking, because you can work out scale, line things up in the editor, and then copy those component transforms and apply them back.

Next is facial mesh geometry: ARKit actually scans your face and meshes it at runtime. One thing you can do with that is apply another shader; I talked about the shadow shader, and there's also an occlusion shader, which occludes any geometry that's behind your face. I have an app where I'm wearing a hat, tipped back a little because that's how people normally wear hats; with the way rendering works, without the occlusion shader you'd see the back of the hat through, or rather on top of, my head. With the occlusion shader on the face mask, it clips properly and the hat looks like it actually fits around my head. So that's something to keep in mind: we're getting real-world mesh geometry of your face at runtime.
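Going back to the object-interaction point: here's a minimal sketch of constraining a drag to the plane, assuming something else (e.g. a hit test) supplies a world-space point the user is dragging toward.

```csharp
using UnityEngine;

// Sketch: move a placed object around on its plane in X and Z only, so it
// never dips below the surface and clips through the camera feed.
public class PlaneConstrainedDrag : MonoBehaviour
{
    float planeY; // the Y of the plane the object was placed on

    void Start()
    {
        planeY = transform.position.y; // remember the placement height
    }

    // Call with a world-space drag target, e.g. from a hit test.
    public void DragTo(Vector3 worldHitPoint)
    {
        // Take X/Z from the drag target, but lock Y to the plane.
        transform.position = new Vector3(worldHitPoint.x, planeY, worldHitPoint.z);
    }
}
```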
Next is facial blend shapes. There are about 50 coefficients, which you can think of as floating-point values being read off your face, covering everything from the movement of your eyes to how open the left side of your lip is. With those you can create digital characters, and you don't have to use all of the coefficients; I will say, if you find an artist to make facial blend shapes with all 50 of these, that can be a somewhat daunting task, but you can use just a couple: mouth open, left eye blink, and so on. We do include a sample that has all of them if you want to check out how that works, and I'll be showing it in a bit. The other thing you can do is hook into just one of these blend shapes and drive interesting interactions from it, and I'll show a little demo of that too. There's a small sketch of the blend shape wiring after the first demo.

Here's a visualization. This is Jimmy, in case you don't know him; he's the core developer behind the ARKit plugin on our side. You can see the face anchor, and how it sits inside his face, at the origin; over there we have the real-time meshing and the blend shapes. Shout out to Jimmy.

All right, demos. What's the first one? The spaceship, which demonstrates how the motion tracking works. The first thing I'll do is grab this piece of paper and put it on the ground over there so I can find a plane, and I found one right away, perfect. What's happening now, it's a little hard to see, is that I've started a Timeline animation of a spaceship coming down from the heavens. I've spawned the spaceship, there are some basic controls to spin it around, I'll go and find the door, right there, and now I can open the door, and because I've placed this in the world and locked it in, I can just walk inside of it. This is the motion tracking I'm talking about; I can look out the window and see some of the beautiful audience there, and you can see I can just walk around in it. There's some geometry in the center there, but again, I've just placed an object in the world; its size can be whatever you want, and you can see how you can start to interact with and walk through digital or virtual things. Because this renders on top, it's going through the table and so on; in a more open environment it might look a little better. But again, I've established a ground plane, and as I move through the world, the accelerometer and gyroscope let the device track me through it. There's not much code going on here at all: I spawned a really big object that happens to have some geometry in the center, and once you've placed it on a ground plane, you can just walk through it.
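Quick aside before the portal demo: driving a character from those blend shape coefficients boils down to copying values across each frame. A minimal sketch, assuming you receive the coefficient dictionary from the plugin's face-anchor-updated event, and that your mesh's blend shapes are named after ARKit's coefficient names (verify both against the plugin and your rig):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: drive a character's blend shapes from ARKit's face coefficients.
// OnFaceUpdated is assumed to be wired to the plugin's face-anchor-updated
// event, which supplies coefficient names mapped to 0..1 values.
public class BlendShapeDriver : MonoBehaviour
{
    public SkinnedMeshRenderer faceMesh;

    // You only need entries for the shapes your character actually has.
    public string[] coefficientNames = { "jawOpen", "eyeBlinkLeft", "eyeBlinkRight" };

    public void OnFaceUpdated(Dictionary<string, float> coefficients)
    {
        foreach (var name in coefficientNames)
        {
            int index = faceMesh.sharedMesh.GetBlendShapeIndex(name);
            if (index < 0 || !coefficients.ContainsKey(name)) continue;

            // ARKit reports 0..1; Unity blend shape weights run 0..100.
            faceMesh.SetBlendShapeWeight(index, coefficients[name] * 100f);
        }
    }
}
```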
All right, the portal. Let me remember which one it is, here we go, and a little teaser of what's coming next. This one is a portal effect, which is fairly common; I've seen a lot of these, especially when ARKit first came out, showing a portal into a different world. The interesting thing here is that as I move around, I actually get a different perspective into the portal: I want to look at that tree, so I'm just moving around here. The way I'm doing that is with something in Unity called a render texture, and I'm going to walk through how I set it up inside the Unity editor so you could potentially build something like this. Right now it's just a viewing portal, but you can imagine that once I got close, because I know the position of the camera and the position of that object inside Unity, I could trigger a different effect: maybe go into the portal, maybe teleport to it.

So how does this work in Unity? There are all my cool apps; let's go to, not this one, this one. Inside Unity I have my little portal here, let me make this a little bigger so everyone can see. The way it's set up, it's a cylinder I've mashed together, and it's using a shader that's supplied by a render texture. A render texture is a way to take what a camera is rendering and say, "everything you're rendering from that camera, put it in this texture." It's a special asset inside Unity; you can see I have one over here called PortalRT. I've also set up a portal cam, which is just a camera located over here, looking at this little environment. My main camera, hanging out over here, is what's driven by the AR device: it has the UnityARVideo script on it, it renders the real world, it's what's moving around and tracking. So to get that perspective on the portal, I have a little script right here; let me figure out how to make this bigger, there we go, figured it out. The script has a public Transform for the real camera and an offset, which in this case happens to be a hundred units in the X direction, and in my Unity scene I've put this little environment a hundred units over there. So the script says: take the same position and rotation as the real camera, but offset the position by a hundred units. That's why the rotation of my device drives this other camera over there.

Ah, a question. So, potentially, yes, but you can set up the culling mask on the main camera so the portal environment isn't rendered by it. That's a good question: potentially, if I looked over there, or walked a hundred meters, I'd be able to see that environment, but you could also push the offset further, past the far clipping plane. Again, I basically have two cameras: everything shares the rotation, and the position is just offset, and that's what gives you that little perspective and depth on the portal.
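Here's a sketch of that camera-follow script as described in the demo; the 100-unit X offset matches where the demo parks the portal environment.

```csharp
using UnityEngine;

// Sketch of the portal camera script described above: the portal cam renders
// the hidden environment into a RenderTexture, copying the AR camera's
// rotation each frame and mirroring its position at a fixed offset. Moving
// the device is therefore what changes the view through the portal.
public class PortalCameraFollow : MonoBehaviour
{
    public Transform realCamera;                        // the AR-driven main camera
    public Vector3 offset = new Vector3(100f, 0f, 0f);  // where the portal world sits

    void LateUpdate()
    {
        transform.rotation = realCamera.rotation;            // same view direction
        transform.position = realCamera.position + offset;   // shifted into the portal world
    }
}
```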
All right, the animation recorder. This is very hot off the press, as in I made it a few days ago, and the demo has been a little finicky in the past, so you're in for a treat. I want to walk through it at a technical level before I attempt it. This is the sloth, which now ships with the ARKit plugin as Unity's take on Animoji, and it uses the front-facing camera to drive all of its blend shapes. If I look at the sloth in my Inspector, down here you can see it has a bunch of different blend shapes; if I modify one of them you can see his eye moving there, maybe a squint, and you can see it blinking. These are fairly subtle blend shapes that you can manipulate, and they're all driven by that front-facing camera.

What I'm doing in this demo is using ARKit Remote and my iPhone X to record some facial-capture animation, and then inside Unity I'm using an experimental animation API that lets me record data happening in the editor into an animation clip. So this will be me capturing my face doing some stuff, recorded as offline facial capture that we can then play back as an animation inside Unity.

Let's give it a go. I'll unplug this and plug in my camera, there we go, some nice pictures of me and my grandpa. Now, let's see if this works: Daniel's iPhone, there we go, here are my apps. I launch the ARKit Remote app, which I've pre-built to the device, and inside Unity I hit Play and link up the device through the console here; you can see "iPhone" there, and then I start tracking. So that's me in the editor; let me zoom out a little. This is me driving the sloth's face, and in the Game view I can hit the spacebar and say, "Welcome to GDC!" It's a little delayed; I've hit Space again there and stopped recording. You'll notice the head does tilt a bit there, unfortunately, from the rotation coming off the device. On the device itself it's just a camera feed; that's how the remote works once you've allowed it and allowed face tracking.

Back in the editor, in my animation clips, I've now recorded the position, so this is me subtly moving around, and it does rotate a bit there again, but I've also recorded the blend shape values, all of them, with tons of subtleties going on. As I scrub through, it plays those back; I'm looking pretty happy right there, even though it's so early in the morning. Now I can go back, and you can see all the values that changed while I was locked on the Animator there. I've already created a couple of Animator components that reference the clips I just captured, so I'll enable those.
I turn that on in the Inspector, this other one is already on, and now I just hit Play, and there we go: we're seeing the playback of what I was saying in the editor. I've captured myself doing some facial capture, and you could imagine using this on an NPC or a character or something like that. On the technical side, to show you how that works: I hit up the animation team last night and asked why this recorder is still experimental, and they basically said they've been really busy with other stuff, including a lot of the GDC work, and haven't fully tested it, but they expect it to come out of experimental soon.

For these scripts, let me show you real quick; open this one up. That was not the window I expected, but we can roll with it. Let me make this bigger, here we go. I'm using this experimental animations API, that's the namespace here, and the recording only works in the editor, so keep that in mind; but if you can pass information into the editor, through something like the remote, you can go ahead and use it. What you do is create a GameObjectRecorder, a new instance of it, set its root to the object the script is on, and then bind a specific component. If you're familiar with Unity and how the animation system works: when you hit the record button, any value you change on that object gets keyed into the animation clip. Let me just show you this real quick: I have a cube right here, I go to my Animation window and create something quickly, I hit the record button, and when I start manipulating the cube, these values over here turn red, and down here we now have a key. That's how the animation system works, and the recorder is essentially activating that record button from script.

So in the script, you bind a certain component, which can really be any component that's keyable or animatable; for this one I'm using the SkinnedMeshRenderer, because that's where all the blend shapes live, saying this is the object, and I believe the second argument is a flag for whether to include the objects underneath it. Then in the update: if the clip is null, return; if I'm recording, take a snapshot at Time.deltaTime, which is just the rate at which time is passing; and when I stop recording, save it to the clip. In Unity I have an animation clip assigned here; let me go to my sloth head, and you can see this little face clip down here. So basically, you create a public AnimationClip, and all your values get output into it through this GameObjectRecorder. You can use this for multiple things, processing data into animation clips, and once you have the clip you can use it in a state machine: I could record a "welcome to GDC" clip, a "have a good morning" clip, whatever, and drive them from a state machine. Question?
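Here's a sketch of that recorder script. It uses the experimental API as it existed at the time (UnityEditor.Experimental.Animations, editor-only); newer Unity versions moved GameObjectRecorder to UnityEditor.Animations with slightly different calls, so treat the exact signatures as approximate.

```csharp
using UnityEngine;
using UnityEditor.Experimental.Animations; // experimental API; editor-only

// Sketch of the facial-capture recorder described above: bind the
// SkinnedMeshRenderer (where the blend shapes live), snapshot every frame
// while recording, then bake everything into an AnimationClip.
public class FaceCaptureRecorder : MonoBehaviour
{
    public AnimationClip clip; // assign an empty clip in the Inspector
    GameObjectRecorder recorder;
    bool recording;

    void Start()
    {
        recorder = new GameObjectRecorder { root = gameObject };
        // Key every property of the SkinnedMeshRenderer; 'true' also binds
        // the objects underneath this one.
        recorder.BindComponent<SkinnedMeshRenderer>(gameObject, true);
    }

    void LateUpdate()
    {
        if (clip == null) return;

        if (Input.GetKeyDown(KeyCode.Space))
        {
            if (!recording)
            {
                recording = true;              // start capturing
            }
            else
            {
                recording = false;
                recorder.SaveToClip(clip);     // bake the keys into the clip
            }
        }

        if (recording)
            recorder.TakeSnapshot(Time.deltaTime); // key the current values
    }
}
```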
Yes, so this is the dope sheet, and here are the curves. You can see they're fairly flat because we're just setting values here, but you could grab these keys and move them around; they're just keyed properties inside Unity.

All right, next one: face stuff. What do I mean by that? We'll find out; let me get this going. So what I'm doing here is a couple of different things. I'm hooking into a single blend shape, the mouth movement, and I'm also including an occlusion mask on my face, so if you notice, when I hold my mouth down, it looks like the effect is being clipped by my lips, so it really looks like it's coming from my mouth. That's a subtle touch you can add that occludes objects behind your face. And then I'm just reading that one blend shape and turning a particle effect on and off based on its value. There's a sketch of that on/off pattern below.

Question? No, the occlusion shader can be applied to anything. There's an example where we have the occlusion shader on planes, so you could place an object: if I found a plane here, placed the object there, and pulled my camera back over here, the object would be occluded. Yes, you can use it anywhere; it's just a shader, and it occludes any objects behind it. You could potentially even do something where you spawn little occlusion cubes on feature points, but it would probably be a little more optimal to put it on a plane: detect a plane here, put an object over there, and you get that occlusion.

All right, let's try something else. This is Cowboy Dan, and you can see I'm using the occlusion mask to restrict what's happening with my hat, because typically everything just renders on top of the camera feed: I can't put my hand in front of my mustache here, because all digital objects get rendered on top. By using the occlusion mask, I can place objects like this more accurately and make it look like the hat is actually on my head a little bit more.

And this might be the broken version, it might not even work; yeah, here's what I'm talking about. This is not using the occlusion mask, and because all digital objects are rendered last, there's nothing to pull my face out in front; everything just draws on top. This is what you get without the occlusion mask or the facial geometry; these are just child objects of the anchor that's found on my face. All right, that's what I wanted to show there.
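That on/off pattern from the face demo is simple. A minimal sketch, again assuming the coefficient arrives from the plugin's face-anchor-updated event; the threshold and names are mine.

```csharp
using UnityEngine;

// Sketch: switch a particle effect on and off from one blend shape value
// (the mouth/jaw-open coefficient here, in ARKit's 0..1 range).
// The 0.25 threshold is arbitrary; tune to taste.
public class MouthParticleToggle : MonoBehaviour
{
    public ParticleSystem particles;

    // Assumed to be called with the jaw-open coefficient from the
    // plugin's face-anchor-updated event.
    public void OnJawOpen(float jawOpen)
    {
        if (jawOpen > 0.25f && !particles.isPlaying)
            particles.Play();
        else if (jawOpen <= 0.25f && particles.isPlaying)
            particles.Stop();
    }
}
```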
The next thing I want to show is from my coworker John; he's based out of Australia, where he's the evangelist. He did a lot of AR work before joining Unity, and now he's doing a lot of cool experiments. What he's doing in this example, and it's a little blurry, I apologize, is taking the image you're getting from the camera feed and wrapping it into a cubemap that you can then reference for reflections. This is Kyle the robot, who's really shiny, and as he walks by this person wearing red, there's actually a red reflection on Kyle. So these are real-time reflections applied to objects based on what the camera sees. Now, a proper cubemap is six pictures all around you, one in each direction, and here you're only getting one angle, which is why it's a bit fuzzy; but if you wrap the feed into a cubemap and do a little blurring, you can get some pretty cool effects where the real world affects these digital objects. A big part of any augmented experience is how much you understand about the real world: taking advantage of environmental lighting, matching real-world shadows, and so on. The better you understand the environment, the better you can augment objects into the real world and create effects where the real world actually interacts with those objects.

Question about that light on Kyle? Yeah, I think that one actually looks like it might be a real light that just settled there. This is really cool; when he was showing it off, people's minds were pretty blown. And in general, with AR apps, I really like implementing things like screen capture so people can share to social media. When we were doing this at a conference in India, a lot of kids were posing with Kyle, he'd play some looping animations, and then we'd send them the video or the picture. If you're interested in this, he has implemented it for both ARCore and ARKit; there are plans to potentially wrap it into some of our higher-level multi-platform APIs, but it's available on his GitHub at github.com/johnsietsma/ARCameraLighting. He does have to do some slightly different things with how the camera feed is captured and rendered for each mobile SDK, but this one, I believe, was captured with ARKit.
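One note on the Unity side of that reflections idea: once you have a cubemap built from the camera feed (building and updating it is the hard part, and it's what John's project handles per SDK), feeding it to shiny materials can be as simple as setting it as the scene's custom reflection. A sketch using standard Unity APIs:

```csharp
using UnityEngine;

// Sketch: use a cubemap derived from the camera feed as the environment
// reflection for reflective materials in the scene. Populating and updating
// 'cameraFeedCubemap' from the AR camera image is handled elsewhere (e.g.
// per-SDK in the ARCameraLighting project); this only shows the hookup.
public class CameraFeedReflections : MonoBehaviour
{
    public Cubemap cameraFeedCubemap;

    void Start()
    {
        RenderSettings.defaultReflectionMode =
            UnityEngine.Rendering.DefaultReflectionMode.Custom;
        RenderSettings.customReflection = cameraFeedCubemap;
    }
}
```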
All right, that's it; thanks a lot for coming. If you have any more questions I'll be around, I think we have about ten minutes left, and if you have follow-ups, feel free to shoot me an email. That's my Twitter, where I post a lot of AR and random Unity-related things. Thanks a lot for coming.

[Q&A] Yep, so inside Unity, the direction of shadows is controlled by the rotation of the directional light, so potentially you could have a little gizmo in your app that adjusts that rotation. Right now there's no API to detect where real-world lights are coming from; I know it's something a lot of people want, and I imagine they could be working on something like that, because it goes a long way toward matching digital objects to the real world. Cool, any other questions?

Yep, sure. No, what you're going to run into there is things like quality settings on mobile, and shadow distances and things like that, so yes, you could definitely do that. One thing I do a lot, which I didn't really mention: in all the AR SDKs, and ARKit specifically, when you're placing objects on planes, the object is placed from its root. If I had a character right here whose root happened to be at his center, and you placed him on a plane, he'd be halfway in the ground. So always make sure you either have a parent root at his feet, so you place him standing on the plane, or, for a floating object, put the root of the object, along with that shadow plane, at the base, and then put whatever you want any distance above that.

All right, thanks a lot everyone; hope you enjoy the rest of your GDC, and thanks for coming.
Info
Channel: Unity
Views: 14,155
Keywords: Unity3d, Unity, Unity Technologies, Games, Game Development, Game Dev, Game Engine
Id: _Y7pM9TP_CM
Length: 43min 57sec (2637 seconds)
Published: Wed Mar 28 2018