AWS re:Invent 2019: [REPEAT 1] Create augmented reality experiences with AWS (ARV204-R1)

Captions
Can everybody hear me? All right, give me a thumbs up if you're good. I'll give you ten seconds to finish any emails or texts you're working on, then you can put your phones down and we'll get started.

Thank you very much for coming. I know it's the day after the re:Play party and you're probably off to your flights today, so thank you for spending an hour with me to talk about creating augmented reality experiences with AWS. My name is Leo Chan. I'm the worldwide tech lead for AR, VR, and 3D across AWS, and I'm very excited to talk to you about how to get started with AR development on AWS. I walked around the room a little bit and talked to people about their backgrounds, and people are very curious about how to get started in this space, so let's jump right in with why you probably care about AR right now.

If you think about how many AR devices are out there, there are over a billion AR-capable devices, and these are devices that people carry with them every day in their pockets. These devices keep people in reality, as opposed to taking them out of reality as in VR; AR augments their space and enriches the way they interact with reality, and we're very excited about that. If you think of it as a very large display surface, you have a captive audience through these digital devices that people have on them all the time. For context, there are about 30 million VR headsets out there versus over a billion AR devices, so this is a huge opportunity for your company to reach your customers and delight them in new ways using AR.

Now, what type of AR experience might you be interested in developing? This is not a comprehensive list, just something to inspire you and get your juices flowing. If you're a retailer, you might want in-place product previewing. This is a fairly popular application, shown here on the amazon.com site, where you can take a virtual object like a chair, place it in your room, and evaluate it for size and color, or place a blender on your kitchen counter: in-place product previewing in AR.

Another example is virtual demonstrations. This example is a demo you can find an article for on our website. If you're a museum, for example, and you want to show a virtual artifact in people's homes, where they can learn about it in place, you could do something like this. It's also useful for meetings: if you're in a conference and you need to look at a job site, a product, or a construction site — if you're in architecture and engineering, say — you can do a virtual walkthrough within a meeting in AR.

We're also seeing a lot of demand from our customers for guides, manuals, and walkthroughs done in AR. The expo is over now, but Deloitte developed the project shown here in the lower right. There was a model truck — ideally you would do this on a real truck in the real world, but of course we can't bring one of those to the show — and using object detection we detected the truck and its parts. When you pointed the AR manual at it, it would bring up different cues and virtual objects to guide you through maintenance of that truck. The air fairing up at the top, which reduces drag on the truck, is virtual, placed on top of the real object, and based on an analysis of the current wind conditions it tells you the optimal angle for that fairing for the day. This kind of interactive, step-through AR manual gets rid of the 300-page paper manual and helps with retention and understanding for the people using it.

The final one is a heads-up display. If anybody saw the movie Iron Man, you get the heads-up display with all the data coming up over your view. It's a fairly common way to visualize, say, IoT data in place. If you're in a factory and you want to analyze what's happening with a particular machine, you can imagine holding your phone up to the machine, recognizing what it is, and getting the IoT data streamed on top of your view. Or maybe you're looking at a water tank you can't see into, and a heads-up display gives you a sort of x-ray vision of the fluid levels inside the tank using IoT sensors and data. There are a lot of exciting ways to improve and delight your customers using AR on these mobile devices.

The question, then, is how to get started, and how AWS can help you with some of the heavy lifting. To ground that conversation, I'd like you to think about what AR experience you might want to develop for your own business. There will be lots of unique, differentiated features in your app, but what we're hearing from customers is that there are five common experience pillars coming up across all AR apps, and I'll share those with you now.

The first is that these apps need to be smart, and by smart I mean context aware. In the case of putting a sofa on the ground, you want your app to be smart enough to know to put it on the ground and not on a chair, a table, or a counter. The app needs enough context awareness of what it's looking at to do the right thing for your application.

The next one is a little less obvious: the app needs to be voice driven. That's because when you have a mobile AR experience, or later down the road a head-mounted AR experience, there's no keyboard really available to you for typing. In mobile AR you can't afford the screen real estate for a keyboard, and it takes you out of the experience; in head-mounted AR there is no keyboard at all. Anything you would normally rely on a keyboard for, you want to drive with your voice. You want your application to talk to you and for you to talk to it — a chatbot, just like the way you talk to your Echo devices with Alexa. You want chatbots and a voice-driven interface, because you're not typing in AR.

The next one is fairly familiar to a lot of existing AWS customers: your app needs to be globally available and redundant, and provide scalable storage. What that means is that when you publish and deploy your app, you need it to be available for customers all around the planet. It doesn't matter whether they're in South America, Europe, Asia, or North America; you want your app available to all of them, and you want the experience in each region to be very similar — the same low-latency download. That's not easy to implement independently, but AWS can help you with it.

The scalable storage part speaks to what many are calling the mirror world for AR. As we imagine AR augmenting the real world across the planet, eventually we will be building up a database of the world, where each location and each room is known to this database, and we'll be placing virtual objects, geo-located in real-world locations, for you to come back to and share. It turns out the real world is a really, really big place, and as we get to the mirror world, the amount of data you need grows with the needs of your app. You never know how much storage you're going to need in the future as this rolls out, so you need a storage system that scales with you gracefully and elastically, in a way that lets you pay only for what you use. So: globally available, scalable storage.

The next pillar we're seeing from everyone is that these experiences need to be multi-user. We hear from a lot of people who want to do remote collaboration, conferencing, or learning systems, or even just casual entertainment where you're talking to your friends or loved ones in AR. You need multiple experiences across the planet to be synchronized at the same time.

You also need your platform to work in a spatial, immersive 3D environment: you need to be able to author content in an immersive 3D spatial environment and then deploy it to multiple platforms across the planet. Multiple platforms is really another pillar we're seeing as well, because there's iOS and there's Android for AR, and you don't want to be building your application separately for each platform and distributing each build across the planet. And the point I didn't put up here: everyone is also telling us we need to do all this in a way that's fast, easy, and low cost, for development and for the customers. Those are always underlying tenets of what we do at AWS.

I'd like to take you through what those experience pillars look like across the AWS ecosystem, because this is actually a fairly large list, and what we're hearing from a lot of customers, especially startups, is that the initial capital expenditure just to get started — before you even develop your first feature, just fulfilling this list — is prohibitive for a lot of people. So I'm excited to share with you how you can use services to get all of this stood up without being an expert in any one of these areas, and without having to hire development teams to build any of it.

Let's start with the very first pillar, the smart pillar, and talk about context awareness. This is machine learning and computer vision — AI/ML — so your app can understand what's happening in the environment it's situated in. We have a wide AI/ML stack, and I'd like to focus on just three services today.

The first is Amazon Rekognition — that's this icon here. Rekognition takes in an image or video feed and gives you a bounding box in that feed for objects it recognizes. Let's say you're in enterprise manufacturing and you're building an AR app that needs to detect certain machine parts to bring up a menu or IoT data for a particular machine. You would train Rekognition on the particular part you're looking for, and when the phone detects it in the video feed, it gives you a bounding box and tells you it found the object you're trying to detect. You can get going with just a few API calls, and you don't need to hire a team of AI/ML specialists to do it. That's Rekognition.

The second is Amazon Comprehend. We talked a bit about voice-driven input; Comprehend takes text phrases and does an AI/ML analysis on that text to tell you the meaning of what's in the phrase. It can tell you whether the phrase is positive, negative, or neutral — sentiment analysis of what the customer is saying — and it can also give you a breakdown of the parts of speech: what are the nouns, the verbs, the adjectives, the adverbs. Using those parts of speech you can build simple programs that pivot on them to figure out what the customer is looking for and what they're trying to do — all through voice, and again without needing any specialized knowledge of voice analysis or AI/ML.

And finally, if you need something more bespoke and these higher-level AI services don't fit your particular application, we have SageMaker — this one on the right — a fully managed service that lets you train machine learning models on new data sets for new applications.
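To make that Rekognition flow concrete, here is a minimal sketch of parsing a DetectLabels-style response. The `sample` response below is hand-written in the documented shape, not real service output, and `findBoundingBoxes` is a helper invented for this sketch; in a real app the response would come from `rekognition.detectLabels(...)` in the AWS SDK.

```javascript
// Sketch: pull bounding boxes for a target label out of an Amazon Rekognition
// DetectLabels-style response. Only the parsing is shown here.
function findBoundingBoxes(response, targetLabel, minConfidence) {
  const label = response.Labels.find((l) => l.Name === targetLabel);
  if (!label || label.Confidence < minConfidence) return [];
  // Each instance carries a box in relative coordinates (0..1) that you can
  // map onto the camera frame to anchor AR content.
  return label.Instances.map((i) => i.BoundingBox);
}

// Hand-written sample response in the documented DetectLabels shape:
const sample = {
  Labels: [
    {
      Name: 'Truck',
      Confidence: 97.1,
      Instances: [
        { BoundingBox: { Width: 0.4, Height: 0.3, Left: 0.2, Top: 0.5 }, Confidence: 96.3 },
      ],
    },
  ],
};

const boxes = findBoundingBoxes(sample, 'Truck', 90);
```

The relative coordinates are what let you place a menu or IoT overlay on top of the detected part in the camera view.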
If you want something lower-level in the AI/ML stack to do novel, bespoke machine learning, SageMaker lets you do that without worrying about the heavy lift of standing up the training infrastructure behind it, so it saves you a lot of time in building new machine learning models.

Let's move on to the second pillar, voice driven. There's been a lot of progress in voice-driven technology across Amazon that I'm really excited to work with — it's really fun. We have a couple of services that make it really easy to build chatbots. The first is Lex, which, without code, lets you build a chatbot tree to fulfill intents and the data slots in those intents. This is the same technology, by the way, behind Alexa running in your Echo Dot. For example, with Alexa you say "set a timer" — that's your intent — and the data slot you need for that intent is the amount of time. When you say "Alexa, start a timer," it asks "how much time do you want to set that timer for?", you say "60 seconds," and the timer starts. The data slot you need is the time, and the intent is to set a timer. Lex is a code-free way, through the console, to set up these chatbot trees, do the voice synthesis, and handle the chatbot back-and-forth without any code. You can literally set up a chatbot in a couple of minutes in the Lex console, and we're excited about that.

Lex is supported by Polly, our text-to-speech synthesis engine. If you're not in a chatbot tree but you just need text to be spoken, you give Polly a line or a phrase of text, and it gives you back an audio file spoken in the voice you want — different accents, male or female, different regions, and different languages as well. There's been a lot of progress in Polly this year. I don't know if you've noticed, but around the fall, Alexa started sounding a lot more natural.
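The intent-and-slot loop just described can be sketched in JavaScript. The `dialogState` values follow the Lex runtime's PostText response; the response object here is a hand-written sample and `nextAction` is a helper invented for this sketch.

```javascript
// Sketch: drive a chat loop off a Lex PostText-style response. In a real app
// the response would come from lexruntime.postText({ botName, botAlias,
// userId, inputText }); here we only show how dialogState steers the dialog.
function nextAction(lexResponse) {
  switch (lexResponse.dialogState) {
    case 'ElicitIntent':
    case 'ElicitSlot':
    case 'ConfirmIntent':
      // Lex still needs input: speak `message` back (e.g. via Polly) and listen.
      return { speak: lexResponse.message, done: false };
    case 'Fulfilled':
    case 'ReadyForFulfillment':
      // All slots filled: act on the intent, e.g. actually start the timer.
      return { intent: lexResponse.intentName, slots: lexResponse.slots, done: true };
    default:
      return { speak: 'Sorry, something went wrong.', done: true };
  }
}

// Hand-written sample: the bot knows the intent but still needs the duration slot.
const midDialog = nextAction({
  intentName: 'SetTimer',
  slots: { duration: null },
  dialogState: 'ElicitSlot',
  message: 'For how long?',
});
```

This is the timer conversation from above in code form: the bot keeps eliciting slots until the intent can be fulfilled.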
We switched to a neural text-to-speech synthesis engine, which uses machine learning for more natural synthesis. And just a few weeks ago we released two speech styles: one is the newscaster style, so the voice sounds like a newscaster, and one is the conversational style, so it sounds much more casual. We're really happy about that, and again you can get started with just an API call — you don't need to be a speech-synthesis expert.

The final one I want to point out is Amazon Translate, here on the right. It takes text in one language, say English, and translates it to another — Spanish, French, Korean, Japanese. If you're working on, say, a learning management system, you could take your content that's in English, translate it to Japanese or Korean, and then have Polly speak it in that language. You can take your same content and automatically translate it out to multiple languages for a global audience using Amazon Translate.

Our third pillar is global, scalable content, and this speaks to some of our most mature services. S3, the Simple Storage Service, was one of the first services we released. I think most people probably know how this one works: it gives you cloud storage that elastically grows as your needs grow, you only pay for what you use, and when you don't need it you can shrink it back down. You don't have to pre-provision any storage or predict how much you'll need in the future; it just elastically grows with you — very convenient for AR and VR apps, especially as your needs grow while your app develops.

The one beside it is CloudFront, our content distribution network. When you push content to S3, you can use our 66 Availability Zones across 22 Regions, plus CloudFront's points of presence, to make your content available across the planet in seconds, with similarly low latency everywhere.
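The translate-then-speak pipeline just described can be sketched as parameter building for the two SDK calls. The helper functions and the per-language voice choices are illustrative assumptions; in a real app these objects would be passed to the Translate and Polly clients of the AWS JavaScript SDK.

```javascript
// Sketch: request parameters for "translate the content, then have Polly
// speak it." In a real app these are passed to translate.translateText()
// and polly.synthesizeSpeech() respectively.
function buildTranslateParams(text, targetLanguage) {
  return {
    Text: text,
    SourceLanguageCode: 'en',
    TargetLanguageCode: targetLanguage, // e.g. 'ko' for Korean, 'ja' for Japanese
  };
}

function buildPollyParams(translatedText, voiceId) {
  return {
    Text: translatedText,
    VoiceId: voiceId,     // e.g. 'Seoyeon' for Korean, 'Mizuki' for Japanese
    OutputFormat: 'mp3',
    Engine: 'neural',     // the neural engine mentioned above, where the voice supports it
  };
}

const translateReq = buildTranslateParams('Welcome to the course.', 'ko');
const pollyReq = buildPollyParams('(translated text goes here)', 'Seoyeon');
```

Chaining the two calls is how the same English source content fans out to many spoken languages.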
No matter where your customers are, they're guaranteed the same experience you have in the home location you're publishing from.

Pillar number four is multi-user. We have a service called AppSync, a fully managed GraphQL backend that lets you connect multiple mobile devices and synchronize their state. It's completely managed and really simplifies the backend setup of a GraphQL infrastructure.

And the final pillar is spatial, interactive 3D, and for that we have Amazon Sumerian. Sumerian is a service that lets you author 3D content in a browser and then deploy that immersive 3D content to your customers, also through a browser. There are no apps for your customers to download to use your product, and there are no apps for you to download, install, and license to start authoring in AR and VR.

The big takeaway I want you to take from this slide: there are a lot of services listed here, all very useful for AR development, but when you go to the console and look under the AR/VR category, there's only one listed, and that's Sumerian — because it's the one most directly related to authoring 3D content for AR. But we have all these other services that let you stand up an AR experience and make it a voice-driven, smart, globally available application, and they have nothing to do with Sumerian. The way we work at AWS is that we don't lock you into any one service: you don't have to use all of them to use any one of them. What I mean is, you could start with Sumerian, make a voice-driven app, build a pilot for your company, get it to market, go into production — and then maybe after a few years you realize this no longer fits your needs, and you need something other than Sumerian to build your environment.
Well, you could replace it with another package, maybe Unity or Unreal, and still use these other services through the AWS SDK. That's not a problem — we can get your data and your scenes out, and you can swap any one of these services while keeping the rest of the ecosystem together. The same goes for any of the other parts. So as you think about the AR apps you're developing back home, I'd love for you to keep these other pillars in mind: how can I bring voice-driven design into this app, how can I make it smart, and how can I make it globally available so people can use it all over the planet?

While we could talk about any of these services, we're going to focus on Amazon Sumerian for the rest of this talk. How many people in the room have actually used Sumerian before — can I get a show of hands? Okay, about four or five people. Let me give two slides on what Amazon Sumerian is, so the demos I'll show a little later will land and make a lot more sense.

Amazon Sumerian is an environment that runs in your browser — in Chrome or Firefox on desktop — and it does three things: it lets you create an AR experience, integrate it with other AWS services, and then deliver it to your customers.

We'll start from the left, the creation side. It's a simple-to-use, drag-and-drop-style 3D interface that runs in your browser. The first thing you'll notice when you start is that there are no tools to download and no licenses to buy — you don't have to declare your company's revenue to figure out what licensing tier you need. You just go to the console, click on Sumerian, and five seconds later the editor is up in your browser and you're ready to create your first AR scene. Very low friction, very easy to get started, and there are no licensing fees; you can just start creating right away.

Once you've created your AR experience — meaning you've worked in 3D, imported your virtual objects via FBX or OBJ from a content-creation package, and placed them around — you'll want to start integrating other AWS services, such as Lex and Polly, to build the voice-driven interfaces we talked about. For the most common AR applications, such as chatbots, we have built-in components: the dialogue component and the speech component, for Lex and Polly respectively. They hook up directly to the backend AWS services, so it's very easy to get started, with no code involved in that direct integration. For the other AWS services not directly integrated into Sumerian — there are over 175 services across AWS now — any service that supports the AWS JavaScript SDK, and that's almost all of them by the way, is available in Sumerian, because Sumerian has a built-in native JavaScript scripting engine for scripting your scenes. Anything you can do in JavaScript, you can do directly inside Sumerian, because it's running in a browser.

Once you've integrated your services, you're ready to deliver to your customers, and this part is just as convenient. When you hit publish in Sumerian, you don't build an app and you don't get an executable — you get a URL. That URL, like a web page, is what you give to your customers, and in Safari on iOS or Chrome on Android it opens up in the browser. There's no app to download; you get a 3D AR experience running straight from the browser. This is pretty huge, because no app to download also means no app for you to deploy. Anybody who has gone through the deployment process knows how painful it is: you build your app — twice, by the way, because you have to build one for iOS and one for Android.
Then you have to submit one to the App Store and one to the Play Store, wait for them to get approved, and once they're approved, tell your customers to find them in the store and download them; only then can they open your experience. That whole loop is measured in days, if not weeks. Whereas with the URL, you hit publish, you send the URL via QR code or a mass text to your customers and say "here's the URL, it's ready to go" — so the deployment process is measured in seconds. And when you're ready to update your experience, in the app world it's the same thing: you build it twice, you submit to the Play Store and the App Store, they approve it, it goes up, all your customers get an app-update notification, they download it, and then you get your iteration — again measured in days. With the URL it's measured in seconds: just like a web page, you hit republish, it updates the URL, your customers hit refresh in their browser, and on the next refresh they get the new content immediately. We know this works because we're not reinventing the wheel here — this is how the World Wide Web was developed, and how we got the richness of its ecosystem of web pages.

This is a live link here, with a Builders Fair quiz game built by one of our solutions architects in Sumerian. It's okay to take out your phones — I won't be offended. If you point your phone at this QR code, you'll get this quiz experience, which shows you cubes floating around you in place, and if you tap on those cubes you get a quiz game about the different AWS services shown on each cube. Deploying an application is really this easy; you can do it in a presentation like this. Nobody has to go to the App Store and download anything — they get the application running in their browser, very simply. It'll tax the Wi-Fi or the cell network a little bit here, but it runs natively in your browser. We're very excited about that: it's easily shared, and you can republish in seconds, just like the World Wide Web. We really believe that to get to a rich ecosystem of AR and VR content in the world, we need this publish-by-URL ability for 3D experiences. I think we can all imagine that if every web page had to go through an app store, to be published and approved by two companies, the World Wide Web would be a much less rich place than it is today. To get the content we need for a rich AR/VR ecosystem, a publish-by-URL system will enable much more diverse and rich content out in the world.

Here's a look at what it actually looks like to author and publish in Sumerian. On the left is what you see on your desktop — I'll demo this in a second — and it's a very standard layout for 3D editors. On the left you have an entities panel, which shows you what's in your scene. In the middle you have a canvas that shows your 3D environment; the checkered pixels are transparent pixels representing what feeds through from your mobile device's camera. On the bottom is a state machine, which lets you define interactions in your scene without using any code, and on the right is a properties panel that lets you manipulate and change the properties of the entity you have selected. Fairly straightforward, and I'll show you a demo of it in just a second.

On the right here is what your customer sees when you publish the scene. First of all, this is Chrome up here — no plugins, out-of-the-box, straight Chrome — and down here is the same thing: out-of-the-box Chrome running on an Android device in this case. You can see there's a URL here, I've got some authentication to log in, and then I've got the AR experience running, which uses image tracking to find my business card, paste a video onto it, and put some 3D text into the world. This particular tutorial is available on our website — you can get this going in about two hours from scratch, and you don't have to download any software, since the editor just launches in your browser. In fact, this is being run as a workshop right now, around the corner in the Mirage ballroom D, where attendees are building it within a two-hour workshop. Super simple and really easy to get started with, so if you're interested, I highly recommend you go online, find the article, and give it a shot to get started in AR.

I want to spend a few minutes talking about the different AR platforms, if you will — the different AR targets we can support in Sumerian. The first is fully web-based AR. I'll caveat this slide by saying these are very early days for AR; as a worldwide community we're still working out what we can do in browsers for AR. Web-based AR, as the name implies, runs completely in your browser and gives you that deployment via URL. To do web-based AR, we only have access to the sensor data and AR functionality on your phone that the browser has access to. The W3C committee is working on a specification called WebXR to standardize how this AR data is passed to the browser; unfortunately, as I stand up here today, it's not finished. Google Chrome will support it very soon, hopefully by the end of the year. What we've built on before is WebGL, a graphics API that lets us render 3D content, and WebVR, which gives us head-orientation data so we can figure out which way your device is pointed.
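Feature-detecting that in-progress WebXR support looks roughly like this. `navigator.xr` and `isSessionSupported('immersive-ar')` are the real WebXR API surface; the fallback labels are this sketch's own convention.

```javascript
// Sketch: decide whether the browser can do real WebXR AR, or whether we
// should fall back to an orientation-only ("magic window") experience.
async function arSupportLevel(xr) {
  if (!xr) return 'magic-window'; // no WebXR object at all: fall back
  try {
    const ok = await xr.isSessionSupported('immersive-ar');
    return ok ? 'webxr-ar' : 'magic-window';
  } catch (e) {
    // An unfinished or partial implementation may throw; fall back safely.
    return 'magic-window';
  }
}

// In a browser you would call arSupportLevel(navigator.xr).
```

Passing the XR object in as a parameter (rather than touching `navigator` directly) keeps the check testable outside a browser.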
But the more advanced AR functionality — object detection in ARKit 2, or lighting estimation for the lighting in a room — is only available in ARKit and ARCore, the native iOS and Android AR layers. These aren't exposed to browsers through WebXR yet, because the specification isn't finished, but over time the web-based AR layer will become more and more complete and robust.

This will date me, but I'm old enough to remember when the World Wide Web was created, and in the very early days of browsers like Netscape and Mosaic, not every browser supported every image type. I don't know if anybody here is old enough to remember this, but your web page had to detect the browser type to ask: is GIF supported on this browser, or JPEG, or PNG? And you had to have fallbacks on your page depending on the capabilities of the browser. You don't have to worry about that anymore, because HTML5 and the HTML standards are so well established that browsers just account for it, and everyone has uniform support for all the image types, movie types, and codecs. AR today is like the web was 20 years ago: there are different capabilities in each browser. But this is only going to get better, and it won't take as long as it did for the World Wide Web; we're just waiting out that stabilization period while the different browsers come to support AR capabilities.

For the cutting-edge functionality in AR and VR, you need to go down to the ARKit and ARCore level, and we let you do that as well. We have tutorials on how to build an ARKit or ARCore app, create a webview in that app, and pass the AR data from ARKit or ARCore up into the webview, with Sumerian running in that webview via a URL. That quick URL iteration workflow I talked about is still valid here — unfortunately, today you still have to release an app to get to those advanced AR features, but you can still refresh your content, because all of your content is running in Sumerian via a URL. It's kind of like a thin client running on iOS or Android to get access to those particular features.

So the natural question is: which one do I choose — do I need to build an app, or can I use the URL deployment? It really depends on what you want to do in AR today. It is my sincere hope that I'll be up on stage with you next year saying you can do anything you want with just the URL, but for now it depends on how deep the AR functionality you need goes.

There are three different ways you can place virtual objects in space in AR, and I'll talk about each one in turn, going from left to right, easiest to most sophisticated. The easiest is magic window. It's a three-degrees-of-freedom solution that looks only at the orientation of your phone, and based on that orientation it grounds objects in space; the original versions of some popular mobile AR games were done with magic window. The second is using markers: 2D images known a priori, found in space using computer vision, with content anchored based on the location of those images — the business card demo was an example of that image tracking. And the last one, on the right, is simultaneous localization and mapping, sometimes called markerless AR. This is the same idea, but you don't need to put any images or markers into the space to do the anchoring. It works by running feature detection through your camera feed, using machine learning to track feature points in space, and then anchoring and grounding content based on that feature map. Let's go through each one with examples.
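The magic-window idea above — counter-rotating the scene against the phone's orientation so content appears fixed in space — can be sketched like this. The angle conventions follow the browser's `deviceorientation` event (alpha roughly yaw, beta roughly pitch, in degrees), and the two-axis camera-rig shape is a simplification for illustration.

```javascript
// Sketch: the core of a magic-window (3-degrees-of-freedom) renderer. Given
// the phone's orientation, render the scene with the inverse rotation so
// floating objects appear to stay put as the phone moves.
function toRadians(deg) {
  return (deg * Math.PI) / 180;
}

function inverseCameraRotation(orientation) {
  // Apply the opposite of the device's rotation to the virtual camera rig,
  // so world-anchored content counter-rotates against the phone's motion.
  return {
    yaw: -toRadians(orientation.alpha),
    pitch: -toRadians(orientation.beta),
  };
}

// If the phone yaws 90 degrees one way, the scene rotates 90 degrees the other:
const rig = inverseCameraRotation({ alpha: 90, beta: 0 });
```

In a browser this would be fed from a `deviceorientation` event listener each frame, which is why 3DoF content works without any camera-based tracking.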
First, magic window, with the quiz game I mentioned. What we have here is cubes that seem to stay in place as I move the camera around, because the orientation sensors know which direction the phone is pointing and we apply the inverse motion to the boxes, so they appear to stay fixed in space. This works really well for anything floating in the air, because it doesn't have to be grounded to a table or a chair. Why does that matter? Because the orientation tracking is only so accurate, the object you're trying to anchor will tend to drift a little. It's not too bad if you watch it here, but it is drifting slightly. That's fine for something floating in the air, but if you need something anchored in place on the ground or on a table, it won't look right; it will seem to drift in place. Magic window won't do for that, but for anything floating, it does the trick. The good news is that this is fully available as a JavaScript library and works with web-deployed AR, which is why we showed it first.

The next method is image- or marker-based tracking. We have both web-based and iOS/Android implementations for image tracking. What we do here is upload a particular image, in this case my business card, use computer vision to find that image in the real world, and then anchor content onto it: in this case the video, and the 3D text around it. This particular implementation in the tutorial uses a third-party JavaScript library from a company called 8th Wall. It's a paid-for license: it provides the image-tracking library in JavaScript so you can do a web-based, URL-deployed AR experience just like this, but it requires paying for that license.
If you want a version of that tutorial that doesn't require a paid library, you can do it directly in ARKit or ARCore; the difference is that you then unfortunately have to deploy an app, but you get image-based tracking without paying any licensing fees. It depends which direction you want to go.

These image-based tracking methods are quite flexible. If we look at this demo, it's using the exact same technology; we sometimes call these on-box experiences for retail. The Sumerian host dancing here is anchored using the actual Amazon box tape as the known image: we're able to anchor the virtual avatar in space onto the a-priori-known image of the box tape. You get a lot of flexibility as long as you know what image you're searching for. This is also done with the 8th Wall library. This is our tutorial site, and this QR code takes you to the AR tutorial page, with the 8th Wall version in the upper left plus ARKit and ARCore versions for iOS and Android: three different ways to do image tracking with Sumerian.

Not every application can use marker-based tracking, though. You may be in a scenario where you don't completely know your customer's environment, and you aren't able to place markers or known images to do the anchoring. For that, you're going to need what we call SLAM: simultaneous localization and mapping. This gives you true-scale AR, with correctly scaled virtual objects placed into the real world. A common theme here is product placement, like this chair, which, by the way, is a model straight out of the amazon.com data warehouse; it's a real chair on the Amazon site. The phone detects the floor plane and places the chair on the ground at the right size.
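Whether the anchor comes from a tracked marker or a SLAM-detected plane, placing content boils down to composing the anchor's pose with a fixed offset expressed in anchor space. Here's an illustrative ground-plane sketch of that math, not any particular SDK's API:

```javascript
// Place content at an offset expressed in the anchor's local frame.
// The anchor pose is a position (x, y, z in meters) plus a yaw rotation
// about the vertical axis, as a marker or plane detector might report it.
function anchorToWorld(anchor, offset) {
  const c = Math.cos(anchor.yaw);
  const s = Math.sin(anchor.yaw);
  return {
    x: anchor.x + c * offset.x - s * offset.z, // rotate the offset by yaw,
    y: anchor.y + offset.y,                    // height passes straight through,
    z: anchor.z + s * offset.x + c * offset.z, // then translate to the anchor
  };
}
```

Because the anchor pose is reported in meters, content placed this way comes out at true scale, which is what makes the chair look right on the floor.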
Currently, deploying this one does require an iOS or Android app. The 3D scene runs in a webview through Sumerian, but the SLAM part runs through iOS or Android, ARKit or ARCore respectively. In the future, though, we expect WebXR to support full SLAM, so we'll be able to do web-based, URL-deployed AR for SLAM as well.

I'm going to quickly show you a video of a Sumerian experience that uses SLAM with an iOS deployment. It shows a museum artifact, and the interesting thing is that it's the same experience on two different platforms. This is the same 3D Sumerian scene that our solutions architect authored. On a desktop it's just a URL running inside Chrome: the scene detects that you're on a desktop platform and gives you a mouse-spinnable 3D view with information hotspots, and lets you go underneath and see the bottom of the model. The exact same scene has a little bit of JavaScript that asks, am I running on a mobile device? If so, it brings up the AR interface. It's the same scene, but you get the SLAM implementation to put the object on a table, with the same metadata hotspots. One difference: you can't see underneath the object anymore, so that third hotspot disappears. You can try this out for yourself through the bit.ly link on the slide.

Now I'd like to switch over to a live demo to show you the power of publishing by URL and what the Sumerian interface looks like, so this will be exciting. The first thing I want to show you is how to get started with Sumerian. If you have an AWS account, go to the console; if you don't, search for "AWS free tier" and you can sign up for an AWS account for free.
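The platform check in that museum-artifact scene, choose the AR interface on mobile and the orbit view everywhere else, might look something like the following. This is an illustrative user-agent sniff, not Sumerian's actual code:

```javascript
// Pick the UI mode from the browser's user-agent string: mobile devices
// get the AR interface, everything else gets the mouse-orbit 3D view.
function uiModeFor(userAgent) {
  const isMobile = /iPhone|iPad|iPod|Android/i.test(userAgent);
  return isMobile ? "ar-view" : "orbit-view";
}
```

In the page you'd call `uiModeFor(navigator.userAgent)` once at startup and show or hide the AR entry point accordingly, so one URL serves both audiences.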
There is a free tier support level for Sumerian, so you can get started without paying any fees. In the console, about halfway down on the right (I'll increase the font so you can see it better), you'll see the AR and VR section, and there's Amazon Sumerian. Click on it and you get to our editor. Again, this is running in Chrome with no plugins, just plain, out-of-the-box Chrome, so you're in the editor right away and can jump right into your first scenes.

The one I'm going to do with you is an AR scene, and this QR code up on the screen is the live published experience we're going to work on together, so you can follow along. Please take out your phones and point them at the QR code; usually just the camera app will do it. It should prompt you to open a URL in your browser, so go ahead and tap OK. It will ask for a bunch of permissions for your camera and motion sensors; just say yes to them all. By the way, you're only granting this to the web page, not to everything on your phone.

Anybody have it working after the download? Anyone? OK, cool. If you point your phone up to the sky, you should see two drones flying around above you if things are working. Here it is running in Sumerian: when you look up at the sky you should see something like this, two drones circling above your head. Is everybody seeing that, more or less? Yes, I see nods and a few thumbs up, great.
The first thing I want to say is that nobody had to download an app. If anybody does app development at home or at your company, you know how difficult this is: if you want to show an AR experience to one of your co-workers, you have to provision their device before publishing to the App Store or the Play Store. That's actually quite difficult in mobile application development, but everybody in this room got the experience through a simple URL.

Let's do a quick tour of the interface so you know what you're looking at. In the upper left is the entity panel, which shows the contents of the scene. Down here is my asset panel, which gets populated with new objects as I pull them in. The main canvas is a 3D canvas that lets me look around; there are the two drones we're working with, and the transparent pixels we see in the background represent the camera feed that comes through when you run the app on your mobile device. As I select the drones, I can change their properties using this property panel on the right.

To quickly show you how iteration works, I'm going to change these drones into toasters, just so there's a bit of a visual difference. I've set it up so that in place of each drone I put a toaster, like this, and if I hit play and look up, I've got two toasters flying. You won't get it yet, I haven't republished, so those toasters aren't flying above your heads yet, but I saw everybody immediately look up to check for the change! Let me show you how easy it is. In the upper right of the Sumerian editor there's a publish button. I'll go ahead and click republish, and I want you to see how fast this is, just so you can appreciate it: I hit republish, and it's done.
Now, like a web page, you do have to refresh: if everybody hits refresh and then looks up at the sky, touch wood, if everything's working you should see two toasters flying around your head instead of two drones. Anybody seeing that? Anyone? Perfect, awesome. And because this goes through CloudFront, anybody watching anywhere on the planet, whether in Hong Kong, London, or Brazil, gets the same experience when I hit republish. It really is that easy to iterate and update your content. You don't have to go through an approval process; it's just like publishing a web page, and it's really this easy to work directly in the browser.

Another thing I'd like to show you, since we have a couple of minutes left, is how to use our state machine editor to define behaviors in Sumerian without writing any code. I'm going to light these toasters on fire, basically, adding a little fire effect to each of them, so I have this fire state machine that I've put onto each toaster. The state machine editor is the graph editor in the bottom panel. The way the graph editor works is that you have boxes, which are states; each state can have a series of actions in it, and you transition from one state to the next. You can only ever be in one state at a time. I'll go ahead and create a state to start with, set it as the initial state, and call it "wait", because before we light the toasters on fire I want it to wait one second. So I find the wait action, which takes a parameter for the number of seconds to wait, I put in one second, and that gives me an "On Wait" label on the state.
What this says is: run the wait action, wait one second, and at the end of one second transition out of the state via that "On Wait" label. If I click and drag from that label to the next state, it says that after one second we transition (thank you, that's the word I was looking for) into the next state on the right, the fire state, which doesn't have any actions on it yet. So I'll add one: we actually have an action for making fire, so I'll create a fire effect here and add fire. Now when I hit play, it waits one second, then transfers over and lights these pieces of toast on fire, and if I look around we have flaming toasters going in a circle. We were able to define that interaction without writing any code, just through the state machine editor. There are hundreds of built-in actions, so you can build up functionality within the state editor without writing code, and if those don't do the trick, there's a full JavaScript API built into the editor that lets you define your own actions and build up more customized behavior. I'll go ahead and hit republish, and that's it, it's done. So if you hit refresh on your phone again, let it reload, and look up, then if this is working correctly, after one second you should see fire coming out of these toasters as they spin above your head.

I hope that drives home just how easy it is to publish and deploy these AR experiences via a URL, running natively in your browsers. Looking forward, there will be more and more functionality supported at the web level as the WebXR specification moves forward and browsers gain more capabilities. When you're ready to get started with Sumerian, as I showed you, just go to the AWS console.
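To make the graph concrete, here's how the wait-then-fire behavior we just built could be expressed in plain JavaScript. This is a sketch of the concept, states owning actions that resolve with a transition label, not the Sumerian state machine API itself:

```javascript
// Minimal state machine: each state runs an action; the action resolves
// with a label, and the state's transition table ("on") maps labels to
// the next state. Only one state is active at a time, as in the editor.
function createStateMachine(states, initial) {
  let current = initial;
  return {
    get state() { return current; },
    async step() {
      const { action, on = {} } = states[current];
      const label = await action();        // run this state's action
      if (on[label]) current = on[label];  // follow the matching arrow
      return current;
    },
  };
}

// The graph from the demo: wait one second, then enter the fire state.
const toasterFire = createStateMachine({
  wait: {
    action: () => new Promise(res => setTimeout(() => res("onWait"), 1000)),
    on: { onWait: "fire" },
  },
  fire: {
    action: async () => "burning", // the fire particle effect would start here
  },
}, "wait");
```

Calling `toasterFire.step()` once waits a second and moves the machine into the fire state, much like pressing play in the editor.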
You can start for free, then just click on the Amazon Sumerian link, and without any software to download, you can get started. Amazon Sumerian is priced based on storage, and it doesn't cost very much: six cents per gigabyte per month to store the scene, and the drone scene you saw here is less than 10 MB, so we're talking about fractions of a cent, hundredths of a cent, to store it per month. It really is inconsequential, and there's also a free tier: for the first 12 months you get 50 MB per month free with Sumerian if you have an AWS free tier account, so you won't even feel those fractions of a cent. Then we charge per view when your customers are using your scene, and that's 38 cents per gigabyte, per view, per month. Again, because the scene we saw here was only about ten megabytes, it's really fractions of a cent per view, so for example a thousand customers would cost you pennies. Very, very low cost and low friction to get started.

To say thank you for staying to the end, we have a special re:Invent offer: a brand-new book, just published by the Wonderlic group, on learning to create Amazon Sumerian experiences by building four specific apps. If you scan this QR code, you'll get the book in PDF for free; it's part of a re:Invent promotion we're doing, so please download it and get started with Amazon Sumerian by learning through doing. We also have, if you search online for "Sumerian tutorials", a rich set of beginner, intermediate, and advanced tutorials covering AR, VR, and web 3D. This talk focused only on AR, but because you're working in a 3D environment, you can deploy the same experience to VR by putting in a VR camera instead of an AR camera, or to web 3D by just leaving the default camera, so you can have a 3D experience
that can be embedded in a web page, such as an online retail product page, so you have 3D experiences running on a 2D website.

And with that, I'm sure you're hearing a lot of this because it's the end of the conference, but please complete the session survey in your mobile app for this particular talk. I know it's near the end of the conference, but we take the feedback very seriously. It helps us figure out what's working for you, whether we should run similar sessions with similar content next year, and how to tune it. Whether or not you liked what you heard, please go into the survey, give us a rating, and put in comments; they go directly to me as the presenter and help me figure out what I can do better, so that over the years re:Invent gets better and better for you. So please fill out the survey in the mobile app. With that, I just want to say thank you very much. I'll be available for some questions at the side of the stage. Enjoy the rest of your conference and have a safe flight home. Thank you very much for coming. [Applause]
Info
Channel: AWS Events
Views: 726
Rating: 5 out of 5
Keywords: re:Invent 2019, Amazon, AWS re:Invent, ARV204-R1, AR/VR, Amazon CloudFront, Not Applicable, Amazon Sumerian, Amazon S3
Id: pS0lvm6Aky0
Length: 52min 47sec (3167 seconds)
Published: Tue Dec 10 2019