Build Real-Time Application Experiences with Serverless WebSockets, GraphQL & AWS AppSync

Captions
Hello and welcome, thank you for joining us. This is Build Real-Time Application Experiences with Serverless WebSockets, GraphQL, and AWS AppSync's enhanced subscription filtering. My name is Ed Lima and I'm a senior product manager on the AWS AppSync team.

Here's what we're going to talk about today. We start by quickly leveling up on GraphQL as a technology and how AWS AppSync leverages GraphQL to provide a powerful and flexible serverless API solution, with a quick recap of the features we launched recently. After that we'll get to the main topic of this session and talk about real-time application experiences. We then go through our brand-new filtering and invalidation capabilities in detail, following up with use cases and how customers are using AppSync real-time capabilities at scale in production to engage with their own customers and enhance their user experience. Finally, we have a quick demo. So let's get started.

So what is GraphQL? Let's start with traditional APIs, or application programming interfaces. With traditional APIs, multiple endpoints are exposed to your client application, web or mobile, or even to an internal service. These endpoints can access the same data sources with different access patterns and requirements. In this example you use three different databases: imagine one for products, one for orders, and one for inventory details, if we're talking about an e-commerce application. With traditional APIs, loading the data is done sequentially. First I need to access my first endpoint to retrieve some data, so I get data about products, but I only need to know a specific product identifier and its description. The client application has to manually pick and choose the fields in the JSON payload and discard the rest of the data; this is called over-fetching. I then pass the product ID I retrieved from the first endpoint to the second endpoint in order to get some more data about the orders that a given product is part of. Now I have data about orders
and products, so I need to call a third endpoint to check my inventory. With traditional APIs I had to make three API calls because I was either over-fetching or under-fetching data on each call, and I had to make more network calls to get all the data I needed.

So what if I could get all the data with a single API call, without over-fetching or under-fetching? This is what GraphQL can do for us. Let's see how it works. With GraphQL, I have a single API endpoint exposed to all clients and I only need to make a single API call. The client defines the data it needs in a query, and the GraphQL engine itself contacts all the data sources to request the data. The data sources then send their responses to the GraphQL engine, and the engine constructs a customized JSON payload with all the data, including relationships; for instance, an order can have many products. The client then receives all the data it requested, nothing more and nothing less, without over-fetching or under-fetching, in a single network call.

GraphQL is a query language for APIs and a runtime to fulfill those queries. It's completely unrelated to graph databases, as data is not persisted in the GraphQL layer. GraphQL is expressed as a declarative language for requesting data from your application backend. It uses a type system that allows you to understand data requirements and get meaningful error messages, making it easier to use and prototype. There are three types of GraphQL operations: queries to read data, mutations to modify or write data, and subscriptions. Subscriptions are linked to mutations and allow you to send real-time notifications to subscribed clients whenever data is changed by a mutation. In blue here is what we call a selection set. The selection set represents the collection of fields defined to be returned by a GraphQL operation. The ability to define a selection set is a powerful feature of GraphQL: the client defines the data it needs. For instance, I can limit my selection set to specific fields like
an ID, a date, the total of an order, and the shipping estimate in the mutation. Subscribed clients, however, may just need the ID, the order total, and the shipping estimate; they're not interested in the order date. There's no need to pick and choose fields from a static JSON payload in the client code: the response is dynamically generated by the GraphQL backend according to what the client defines.

So what's the best way to use GraphQL on AWS? Well, you could run a GraphQL server, but servers are so 2014, aren't they? With AWS AppSync you can build scalable and robust applications powered by GraphQL, with both real-time and offline capabilities. Best of all, there are no servers for you to worry about. AppSync allows you to power applications with the right data from one or more data sources at global scale. It simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources, which can be databases or other APIs. To get started you need to model and define your data in a typed GraphQL schema. The data model in the schema tells API consumers what data can be exposed to authorized clients, with automatically generated API documentation. As the API is defined with a simple and understandable typed schema, it allows you to better organize and understand your data and the data flow between different downstream components of your API. GraphQL has a built-in compute runtime component where developers can customize their own business logic directly at the API layer. These components are called resolvers, and they provide the logical glue between the data defined in the GraphQL schema and the data in the actual data sources. Using resolvers, you can map an operation, or even a single field of a type defined in the schema, to a specific data source, which allows you to retrieve data for different fields from different data sources with a single API call. They are called resolvers because they resolve the types or fields defined in the GraphQL schema with the data in the data sources.
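As a sketch of what these operations and selection sets look like, here is a hypothetical order query and subscription; the type and field names are illustrative, not taken from the session's actual schema:

```graphql
# Query: the client asks only for the fields it needs
query GetOrder {
  getOrder(id: "123") {
    id
    date
    total
    shippingEstimate
  }
}

# Subscription: another client can request a smaller selection set
subscription OnOrderUpdated {
  onUpdateOrder {
    id
    total
    shippingEstimate   # no "date" field: this client is not interested in it
  }
}
```

Each client receives a payload shaped exactly by its own selection set, even though both are served by the same backend.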
Then we have the data sources in your AWS account, where the data is actually stored. So these are the main components of an AppSync API: the GraphQL schema, the resolvers, and the data sources.

Let's take a look at a specific scenario, an e-commerce system. Imagine there are different services in your application backend, accessible via different technologies. User data is stored in a highly scalable NoSQL table, orders are accessed through a REST API, the current inventory stock is checked through a Lambda function, and price information is in a SQL database. Usually client applications would have to make four or more different calls to each one of those services, exposed through different API endpoints, to get the data they need, increasing the complexity on the client side. To get the data, I'd have to make at least, again, four network calls and most likely over-fetch or under-fetch data, as the different data sources would each send an entire payload I might not need. With GraphQL and AppSync, you can access all the data from a single endpoint with a single network call. AppSync GraphQL APIs combine the data from all those different services into the single payload the client defines, allowing clients to fetch only the data required, defined in the GraphQL selection set and retrieved from multiple data sources automatically. And those data sources can be anything: SQL, NoSQL, HTTP, it doesn't matter. GraphQL is agnostic, so you can use the best tool and technology for the job in your backend. The client just sees a single constructed payload, receiving user profile data from DynamoDB and order details from API Gateway, as well as seamlessly injecting specific fields of inventory availability and price data from Lambda and Aurora, with a single network call and from a single API endpoint, abstracting all the backend complexity. Frontend or API consumer teams can add new features or
applications without waiting for a specific backend team to implement new APIs. Since GraphQL is self-documenting, it's easy for frontend developers to learn what data is available from a single unified GraphQL API endpoint.

So now we are experts in GraphQL and AppSync; let's have a really quick recap of some exciting features we released in the past six months. You can now use custom domain names to access the GraphQL endpoint and the real-time endpoint of an AppSync API. Developers can have better batching when using Lambda resolvers and configure the maximum batch size up to 2,000, instead of the previous fixed default of five, a perfect solution if you have to deal with the N+1 problem. We created a new extension that allows you to keep the managed API caching in AppSync always fresh, so each time data is modified with a GraphQL mutation, just that specific item can be evicted from the cache, which is great for performance in low-latency use cases. You can also define, directly in AppSync, additional custom headers sent to clients in the response of an API call. We recently released some new VTL utilities so you can generate K-Sortable Unique Identifiers (KSUIDs) on AppSync automatically, and you can also send a string or object from the resolver directly to CloudWatch Logs, which is great for troubleshooting and debugging GraphQL resolver code. And finally, we have enhanced filtering for subscriptions, and that's the focus of our talk today; we're going to look at that new feature in more detail.

AppSync customers are building all sorts of applications: enterprise, mobile, web, and IoT apps accessing GraphQL APIs powered by AppSync to interact with different data sources in an AWS account. Unified APIs are used for storing and accessing data from multiple microservices, offline apps provide access to synchronized local data when network connections are unreliable, and real-time applications provide an engaging experience to users who
need to access data as it happens. In this section of the session we will focus on real time.

So what is real time and why is it important for modern applications? With real time, there's someone or something sending or publishing some data; it can be a user, a backend event, or a service, it can be anything. Then there are interested parties, the subscribers, who want to receive updates about something they're interested in and subscribed to: maybe a chat group message, an order status, the latest scores of an NBA game, a table available in a restaurant, and more. Another thing: to be useful, users want to receive these updates as they happen. What's the point if the game I'm watching finished two hours ago and I only receive the results after I already know them? I want the game stats right after my favorite player scores, and these updates should be sent to me automatically. As a user I don't want to keep refreshing a page or looking for updates myself, I have better things to do with my time; no one wants to be like a kid on a car trip asking "are we there yet?" every two minutes. Users receive the data they are looking for, and they don't need to look for it themselves. In order to be flexible, a real-time solution should enable direct one-to-one messages, broadcasting to a select group of users, or even to a wide audience of users, fanning out messages to thousands or millions of subscribers. Which means filtering is a very important capability: if I'm a baseball fan, I want to receive notifications about baseball, not about the cricket world cup or soccer.

Real-time applications are becoming increasingly popular in the new digital economy, an important capability for different industries and use cases: delivering interactive learning experiences like multi-user classrooms; powering engaging virtual events with interactive real-time features; delivering global real-time experiences to keep fans informed, engaged, and entertained; delivering fast, personalized
fintech data in real time to mobile and web applications; powering immersive gaming experiences that are wicked fast and utterly reliable; asset tracking; live transit updates; race-critical diagnostics; and much more.

And why is it so important? We're constantly multitasking and working on different devices: laptop, phone, tablets. There are lots of things going on, and users want to receive only the important and pertinent information they need, without worrying about what they don't need. We're all busy. Unless your application pushes information to users, they might not engage with it and maybe they will forget it eventually. If you want to remain competitive, you must keep your user base engaged. Real time is becoming a hard requirement for modern applications.

But there is a problem: it's not easy to implement the infrastructure to support a proper real-time user experience for customers. In order to do it properly, to support your business, you need fleets of WebSocket and pub/sub servers and a big team of engineers to manage the whole infrastructure. As your business grows, you need to think about implementing a properly distributed architecture with high availability, automatically scaling up or down according to demand, not to mention decent performance to deliver data with low latency. Then your user base grows from tens to hundreds to thousands; the concurrent WebSocket connections from multiple devices need to be managed accordingly, with proper broadcasting and fan-out support so data gets to your users at the same time. All of that while considering reliability, security, consistency, infrastructure management, patching, operations; the list keeps going on and on.

The good news is that AppSync has got you covered: you don't need to worry about this big list. We take care of infrastructure management, scalability, security, performance, and more. Real-time data, connection scalability, fan-out, and broadcasting are all handled by the service, allowing you to focus on your application business use
cases and requirements instead of dealing with the complex infrastructure to manage WebSocket connections at scale. In one of our tests with our WebSockets engine a couple of years ago, when we released the new pure WebSocket engine, we managed to effortlessly reach 10 million active WebSocket connections on AppSync, with plenty of room to scale to much more.

AppSync takes advantage of GraphQL subscriptions to perform real-time operations by pushing data to clients that choose to listen to specific events from the backend. This means that you can easily and effortlessly make any supported data source in AWS AppSync real time, with connection management handled automatically between the client and the service. A backend service can easily broadcast data to connected clients, or clients can send data to other clients, depending on the use case. Whenever a client invokes a GraphQL subscription operation, a secure WebSocket connection is automatically established and managed by AppSync, and it will remain constantly connected to your backend, allowing users to receive real-time data from any supported AppSync data source.

How does it work in practice? After authorized clients make a subscription call linked to a specific mutation, a WebSocket connection is established for up to 24 hours, providing a secure channel between the client and AppSync. Clients can be subscribed to multiple mutations, and the related subscriptions all share the same WebSocket channel. In this example we have a single DynamoDB table, but it would work the same with any supported AppSync data source, or multiple data sources. GraphQL subscriptions are invoked in response to a mutation, or a change in data. In other words, when data is modified via a GraphQL mutation operation, AppSync notifies subscribers of that data change on successful completion of the mutation. In short, a mutation publishes data and clients subscribe to it, following a publish/subscribe, or pub/sub,
pattern. Here we have a client executing a mutation, for instance create order, create message, update comment, or delete. It reaches the single GraphQL endpoint in AppSync. AppSync then performs authorization checks, executes any business logic defined in the GraphQL resolver, and sends a request to DynamoDB to create an item in the table. After the data is saved to the data source, AppSync automatically broadcasts the data to all connected clients. Each client can have a different selection set defined; for instance, a client might just need an ID, while other clients define an ID and a description, which means clients receive just the data they need.

A backend process, for instance an event sent by SNS, SQS, EventBridge, Step Functions, AWS IoT, or any microservice in your architecture, can invoke a Lambda function that executes a mutation against AppSync. Usually backend processes on Lambda can leverage IAM to securely call AppSync internally. Same as before, AppSync then performs authorization checks, executes any business logic defined in the resolver, and sends the request to DynamoDB to create an item in the table. And again, same as before, after the data is saved to the data source, AppSync automatically broadcasts the data to all connected, subscribed clients.

As you noticed, AppSync needs to be aware of a mutation in order to send subscription data. What if there is an out-of-band process updating DynamoDB in the backend without sending a mutation to AppSync? In that case AppSync is not aware of the change and does not send data to subscribed clients. If there's a requirement that all changes to DynamoDB, even out-of-band ones, must be pushed to connected clients, we can leverage DynamoDB Streams, invoking a Lambda function that issues a mutation to AppSync. So that we're not saving data twice or overwriting the data previously saved in DynamoDB, we can use a local resolver for the mutation triggered by Lambda. Local resolvers are internal to AppSync and can work as a pub/sub channel where no data is
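In AppSync, the link between a mutation and its subscription is declared in the schema with the @aws_subscribe directive. A minimal sketch, with illustrative type and field names:

```graphql
type Message {
  id: ID!
  description: String
}

type Mutation {
  createMessage(description: String): Message
}

type Subscription {
  # Fires whenever the createMessage mutation completes successfully;
  # AppSync then broadcasts the result to all subscribed clients
  onCreateMessage: Message
    @aws_subscribe(mutations: ["createMessage"])
}
```

A single subscription field can also list multiple mutations in the directive, so one WebSocket channel can be notified by several write operations.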
persisted. Clients subscribed to the mutation linked to the local resolver are then notified about the out-of-band changes in the DynamoDB table and receive the required data automatically.

AppSync empowers real-time collaboration with simplicity. Real-time data, connection management, scalability, fan-out, and broadcasting are all handled by AppSync, allowing you to focus on your application business use cases and requirements instead of dealing with the complex infrastructure to manage WebSocket connections at scale. You can visualize metrics in CloudWatch on active connections, subscription status, errors, message size, and more.

You can get started with real time in AppSync in minutes, with a simple generic WebSocket pub/sub API use case; you don't even need to know GraphQL. You can get started by simply creating the concept of a channel that has two properties, a name and a data field, with a few lines of code using the AWS CDK. Data is sent to a specific channel name, and clients subscribed to the channel receive the data automatically. In this example it's all TypeScript; the actual GraphQL schema is generated programmatically. After deploying the AppSync API, you can use the Amplify libraries to generate all the necessary GraphQL operations and client code automatically. It's not even necessary to have an Amplify CLI project; you just need to execute a single command, which is called amplify codegen. We're effectively using the Amplify client libraries as a real-time SDK for AppSync. Amplify clients provide simple abstractions to interact with AppSync GraphQL API backends with a few lines of code, including built-in WebSocket capabilities fully compatible with the AppSync WebSocket real-time protocol out of the box. In this implementation data is not persisted anywhere, channels are ephemeral, and AppSync uses local resolvers to publish data to subscribers. In case you just need to implement a real-time feature in an existing application, this generic pub/sub API setup can be easily
integrated into any application or API technology. While there are advantages in using a single API endpoint to securely access, manipulate, and combine data from one or more data sources with GraphQL, there's no need to convert or rebuild an existing REST-based application from scratch in order to take advantage of AppSync's real-time capabilities. For instance, you could keep an existing CRUD (create, read, update, delete) workload on a separate API endpoint, with clients sending and receiving messages or events from the existing application to the generic pub/sub API for real-time purposes only. Best of all, this is completely serverless: it scales automatically with demand, you only pay for what you use, and there's no infrastructure to manage. You can find the actual code in an article going through the entire implementation by following the links here, if you're curious and want to know a little bit more about how the backend is implemented. You can also go to the AppSync console and deploy the exact same setup to get a simple serverless pub/sub WebSocket API even quicker. Similar to what CDK was doing in the previous slide, the console wizard generates and deploys a CloudFormation template you can reuse or modify; then, similar to the amplify codegen command, the console also generates all the configuration and client code for JavaScript, iOS, and Android clients with the click of a button. All you need to do is focus on your client code and real-time business logic.

Also, based on the simple pub/sub API setup I mentioned in the previous slide, there's a small tweak that can be made to turn it into a global multi-region serverless WebSocket API: you just need to add one extra service to the mix, Amazon EventBridge. The whole implementation is both serverless and functionless; you don't need to add any Lambda functions or any backend code. Clients are subscribed to a specific channel, and messages are pushed automatically to clients
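The generic pub/sub API described above boils down to a tiny schema along these lines (a sketch, assuming both operations are backed by a local "NONE" data source; the type and field names are illustrative):

```graphql
type Channel {
  name: String!
  data: AWSJSON!
}

type Mutation {
  # Publishes to a channel; with a local (NONE) data source behind it,
  # nothing is persisted and the payload is simply forwarded to subscribers
  publish(name: String!, data: AWSJSON!): Channel
}

type Subscription {
  subscribe(name: String!): Channel
    @aws_subscribe(mutations: ["publish"])
}

type Query {
  # A GraphQL schema must define at least one query
  getChannel: Channel
}
```

Because the channel name is a subscription argument, clients only receive messages published to the channel they subscribed to.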
listening or subscribed to the channel in both regions. You can check out the details in the GitHub repository linked below.

So now we are all experts in GraphQL, AppSync, and serverless real-time subscriptions; let's talk about our shiny new filtering capabilities. There are specific use cases where we can't have all clients receiving the same data in a broadcast. You might need just groups of clients or users to receive specific data; maybe a subset of users is interested in a specific subject matter while other users have no interest in getting data about it. It's important to have a proper filtering solution so specific groups of users only get updates or notifications that are pertinent to them, depending on a set of conditions, arguments, or filters. In the same sense, this should be possible one-to-one as well, so only one client receives subscription data; think of a chat app where one-to-one conversations are private and only the two users in the chat can receive messages.

There are two options when it comes to real-time data filtering in AppSync: basic filtering and enhanced filtering. Let's start with basic filtering. With basic filtering you don't need to do anything on the backend side: the subscription just needs to be defined in the GraphQL schema and linked to one or more mutations. All you need to do is define argument values on the client side, which means, for instance, AppSync can send data only to clients listening for a particular ID, like a chat ID, a group ID, an order ID, a user ID, or all of them together. There's a limit of five arguments, matching is based on strict equality, and the arguments can be combined with AND logic only, so, for example, events from location X and date Y and description Z, allowing for some flexibility when filtering data to subscribed clients. Basic filtering has been available in AppSync since the service was launched; however, customers told us they want to do more and have complex filtering capabilities in order to
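As a sketch, basic filtering is just argument equality on the subscription call (illustrative field names):

```graphql
# Only receive new messages for one specific chat room:
# AppSync delivers the event only when the published mutation's
# chatId equals "room-42" (strict equality, AND logic across arguments)
subscription OnNewMessage {
  onCreateMessage(chatId: "room-42") {
    id
    content
  }
}
```

No backend configuration is needed; the filter lives entirely in the client's subscription arguments.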
address more complex use cases, so we recently launched enhanced filtering. With the new enhanced filtering capabilities, you have more flexibility to decide which clients, users, or devices receive messages based on specific conditions. There are 12 new logical operators to choose from, enabling you to define fine-grained filters for any use case. Filters are defined directly in the subscription resolver in AppSync, in a simple JSON format, right in the serverless API layer, in a central location for all connected clients. Clients can also send a filter payload, which allows you to have dynamic filters in case there's a requirement for clients to define their own filtering logic. Enhanced filtering in AppSync also enables subscription invalidation capabilities, allowing you to define a filter to specifically close WebSocket connections based on certain conditions. These new capabilities simplify application development with improved authorization logic over data.

Now let's take a look at the filtering syntax. As I mentioned before, filters are all defined with a simple JSON syntax. A filter group defines a list, or group, of filters. Filters can have one or more rules, each one with fields, operators, and values. You can use any field defined in the GraphQL schema that is published by the mutation. In the following example, multiple rules in a filter are evaluated with AND logic, and multiple filters in a filter group with OR logic. Imagine a ticket management API. In this case, important tickets are automatically pushed to subscribed API clients if a ticket is created with either high or medium priority and severity 7 or higher, or is classified as a security ticket assigned to either the admin or the operators group. Lower-priority tickets can still be queried manually; however, newly created tickets will be filtered in the backend and won't be pushed in real time to clients subscribed to the specific subscription that has the filter defined.

With subscription invalidation, AppSync can
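The ticket example just described could be sketched in the subscription resolver's response mapping template like this (VTL; the field names and values are illustrative, while `$extensions.setSubscriptionFilter` and the filterGroup JSON structure are the documented mechanism):

```vtl
## Response mapping template of the onCreateTicket subscription resolver.
## Rules inside a filter are ANDed; filters inside the group are ORed.
$extensions.setSubscriptionFilter({
  "filterGroup": [
    {
      "filters": [
        { "fieldName": "priority", "operator": "in", "value": ["high", "medium"] },
        { "fieldName": "severity", "operator": "ge", "value": 7 }
      ]
    },
    {
      "filters": [
        { "fieldName": "classification", "operator": "eq", "value": "security" },
        { "fieldName": "group", "operator": "in", "value": ["admin", "operators"] }
      ]
    }
  ]
})
$util.toJson($context.result)
```

Because the filter lives in the resolver, it is enforced centrally for every connected client, regardless of what arguments the client sends.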
automatically unsubscribe clients from the server side, based on an event triggered by a GraphQL mutation and a specific invalidation filter, effectively closing a given WebSocket connection. Invalidation provides more control over WebSocket connections from the GraphQL API backend. Pub/sub invalidation in AppSync can be configured in a few steps. In AppSync, we define a new mutation with the specific objective of invalidating and closing WebSocket connections. In the mutation resolver, we define the subscription that needs to be invalidated and a payload. The payload is used as a trigger, or argument, that should be matched against an invalidation filter defined in the next steps; in this case, we're using the email address in the subscription. In the final step, in this case on the onCreateMessage subscription, we configure an invalidation filter. Let's imagine we have an API that's authorized by OpenID Connect or Cognito user pools. Clients send a JWT token to authorize a connection for a given user, and AppSync is aware of the user claims or attributes for that specific connection, as they are provided in the JWT token itself. In this case there's an email claim for that specific connection that can be verified by AppSync. And that's pretty much it, we're all set. A user or a backend service with the appropriate privileges can then issue the mutation to invalidate those connections, in this case a mutation called unsubscribe. You can easily configure any operation in the schema to be associated with a specific authorization mode; in this case we can set up the mutation so it can only be invoked if the user is part of the admin group. So here we want the user with the email jdoe@mymail.com to be unsubscribed; imagine the user left the company, or left a specific group, maybe a chat, and we don't want jdoe to receive any messages anymore. AppSync identifies all authorized connections from the user; maybe he was logged in with his iPad, iPhone, and laptop and had subscriptions pushing
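A sketch of the two sides of invalidation (VTL; the subscription and field names are illustrative, while the `$extensions` helpers are the documented API):

```vtl
## 1) Response template of the "unsubscribe" mutation resolver:
##    trigger invalidation of onCreateMessage subscriptions whose
##    invalidation filter matches this payload.
$extensions.invalidateSubscriptions({
  "subscriptionField": "onCreateMessage",
  "payload": { "email": $context.arguments.email }
})
$util.toJson($context.result)

## 2) Response template of the onCreateMessage subscription resolver:
##    tie each connection's JWT email claim to the invalidation payload,
##    so only that user's connections are closed.
$extensions.setSubscriptionInvalidationFilter({
  "filterGroup": [
    {
      "filters": [
        { "fieldName": "email", "operator": "eq", "value": $context.identity.claims.email }
      ]
    }
  ]
})
$util.toJson($context.result)
```

When the payload from (1) matches a connection's invalidation filter from (2), AppSync closes that WebSocket subscription server-side.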
messages or updates to all three devices. After the invalidation call, based on the unsubscribe mutation, all three devices are invalidated and won't receive new messages with that specific chat group as the destination, for instance. Without this capability, clients needed to be aware of certain backend events, for instance that the user was removed from the group chat, or that a user was unfollowed in a social media app, and then the client would need to respond to that event by closing the WebSocket connection with additional client logic and client code. Now AppSync takes care of all of that for you; you don't need to add that logic to clients anymore.

I recommend that you treat mutations used to invalidate subscription connections as administrative operations in your API and scope permissions accordingly, by limiting their use to an admin user or group, or a backend service, for instance using schema authorization directives so the invalidation mutation can only be authorized with IAM. AppSync supports multiple authorization modes in a single API out of the box, so you can have clients subscribing with API keys and, for instance, backend services using IAM or Lambda to authorize their calls. Before, we discussed the scenario where an admin user invalidates connections; however, we can also have a backend service invoking the mutation to unsubscribe specific connections that match an invalidation filter, maybe a Lambda function listening to an EventBridge event bus, assuming an IAM role to call AppSync. In this case no client can invalidate other clients; just the specific backend service can do that. So all connections are invalidated; imagine, for instance, a sports game where you want to unsubscribe all clients whenever the actual game ends, so there are no more game stats updates flowing into their WebSocket connections after the match, and you can save on costs.

So, we took a look at the new filtering and invalidation capabilities; let's take a
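Scoping the invalidation mutation to administrators, as recommended above, can be sketched with schema authorization directives like these (a sketch; which directives apply depends on the authorization modes configured on the API, and the field names are illustrative):

```graphql
type Mutation {
  # Only IAM-authorized backend services, or Cognito user pool
  # members of the "admin" group, may trigger subscription invalidation
  unsubscribe(email: AWSEmail!): Boolean
    @aws_iam
    @aws_cognito_user_pools(cognito_groups: ["admin"])
}
```

Regular clients authorized with, say, an API key would then be rejected at the API layer before the invalidation logic ever runs.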
quick look at how some of our customers are actually using real time and AppSync at scale in production. Customers like Sky in Italy use AppSync to provide live sports updates to viewers and fans. Sky was undergoing a migration from on premises to the cloud, and one of their main goals was to deliver data faster: when you're following your soccer team, your Grand Prix motorcycle racing, or your Formula One season, you want the latest updates. After deploying the first web components on its new serverless architecture, they noticed AppSync decreased the time needed to propagate data by a factor of 15. It used to take a few minutes to push a full day's update with multiple match scores; the process now takes milliseconds. With AppSync, Sky maintains consistent data performance at peak and off-peak times and pays only for the compute resources it uses, with an estimated 30 percent cost reduction.

ProSiebenSat.1 is a German digital media and entertainment group that offers viewers entertainment content across a variety of media technologies. Their previous viewer engagement solution was an on-premises system that was difficult to scale and required hours of pre-show preparation to support an interactive audience experience during large live television events. By migrating to a serverless solution based on AppSync, they reduced costs by 60 percent compared to the previous on-prem solution, increased scalability during peak times, and saved hours of pre-show preparation time for upcoming large events by completely removing the need for tasks like pre-warming instances. The migration to AppSync took six months, with no downtime or scalability issues. During the finale of a popular singing contest they experienced some of the highest peaks in traffic the company had ever seen: about one minute after the finale began airing, the system needed to scale by a factor of 10 or more. They saw latency values decline to 13 milliseconds during the peak of requests, which is
about 30 to 35 percent less than the typical latency of the previous system.

Hep Health provides intelligent monitoring and staff and asset tracking in healthcare. They offer a cloud-based tracking solution with radar sensors placed in rooms to monitor and update electronic health records with patient movements, such as falls, and the presence of staff and other visitors, without the use of body-worn sensors. They integrated an IoT backend with AppSync to deliver up-to-date data to their application's web interface, which uses real-time location information to provide hospital staff with a live blue-dot experience similar to that of popular mobile navigation apps. Their HIPAA-compliant solution to track assets, caregivers, and patients is 10 times more accurate than legacy services, at a fraction of the cost of competitors, and highly secure and scalable; not to mention it went from idea to production in only seven months, because they did not need to worry about maintaining infrastructure.

Now a retail use case. Neiman Marcus developed Connect, an omnichannel digital selling application, to improve their customer interactions and experience. With Amplify development tools backed by AppSync GraphQL APIs, they were able to launch the application in less than four months, ahead of the initial schedule. Using a serverless backend, they reduced development costs by 90 percent. AppSync helps Neiman Marcus specify which portions of its data should be available in real time with GraphQL subscriptions. Associates can engage with customers and assist them from anywhere, a high-touch selling approach that was not available before, and the application also tracks assorted key performance indicators so associates can view their sales performance in real time.

HyperTrack, another one of our customers, provides live location as a service. AppSync is a core part of streaming hundreds of millions of live location events between apps in real time, allowing them to accelerate time to market and increase development
velocity, saving months of engineering effort and reducing operational costs by 30 percent by relying on a fully managed service.

So, we talked about real time in AppSync and our filtering and invalidation capabilities, and we saw how customers are using AppSync at scale in production, in different industries, for different use cases. Now let's see it in action; it's time for a demo.

I have here an API that I deployed in my account, so let's take a look at it. This is my AppSync console, and I have a very simple API here: a tasks API. I can create tasks and assign specific data; a task has an owner, title, description, priority, and some other fields defined in its type. Again, GraphQL is a strongly typed system. I also have a subscription here where I track high-priority engineering tasks, and we leverage AppSync's built-in authorization controls: I set up specific groups, so, for instance, the Employee and Manager groups can read data about tasks, and everything else, such as updating tasks, needs to be done by the Manager group. It's a very simple schema, and all we have here is basically local resolvers with a DynamoDB data source, so all my data is saved to DynamoDB.

I also have a specific subscription with a filter. I define my subscription and attach a specific filter, because I only want users that match that filter to receive data: subscribed users should receive data only if the task is created with high priority, or its classification is higher than seven, and only for the engineering department. And that's it; that's how simply you can create filters. You can have multiple filters in a filter group, and you can add more operators like in, contains, greater than, less than, and so on. So let's first test our filtering capabilities.
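In AppSync, a filter like this is returned from the subscription resolver (for example via `extensions.setSubscriptionFilter`), and the service evaluates it on the backend: filters inside a group are ANDed together, and the groups in the filter group are ORed. As a rough local model of those semantics only (a sketch for intuition, not AppSync's implementation), with the demo's filter expressed as data:

```python
# Rough model of AppSync enhanced subscription filtering semantics:
# filters within a group are ANDed; groups in the filter group are ORed.
# Local sketch for intuition only, not AppSync's implementation.

OPERATORS = {
    "eq": lambda field, value: field == value,
    "ne": lambda field, value: field != value,
    "gt": lambda field, value: field > value,
    "le": lambda field, value: field <= value,
    "in": lambda field, value: field in value,
    "contains": lambda field, value: value in field,
}

def matches(filter_group, payload):
    """Return True if the mutation payload should be delivered to the subscriber."""
    return any(
        all(
            f["fieldName"] in payload
            and OPERATORS[f["operator"]](payload[f["fieldName"]], f["value"])
            for f in group["filters"]
        )
        for group in filter_group
    )

# The demo's filter: high-priority engineering tasks, OR engineering tasks
# with a classification above 7.
task_filter = [
    {"filters": [
        {"fieldName": "priority", "operator": "eq", "value": "HIGH"},
        {"fieldName": "department", "operator": "eq", "value": "Engineering"},
    ]},
    {"filters": [
        {"fieldName": "classification", "operator": "gt", "value": 7},
        {"fieldName": "department", "operator": "eq", "value": "Engineering"},
    ]},
]

low = {"priority": "LOW", "department": "Engineering", "classification": 3}
high = {"priority": "HIGH", "department": "Engineering", "classification": 8}

print(matches(task_filter, low))   # False: filtered out on the backend
print(matches(task_filter, high))  # True: delivered to subscribers
```

The key point is that this evaluation happens server-side in AppSync, so non-matching payloads never reach the client over the WebSocket at all.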
I need to log in with a user pool user. I have a user called Jane, and she's a manager, so she can actually create tasks; I'm logged in with Jane here. Let me go to another window and log in with Joe. Joe is an employee, so he can receive tasks and he can be subscribed. Here I'm going to subscribe to that subscription I created, and that's very handy in the AppSync console: it pretty much opens the WebSocket connection for you, and you can test your subscriptions out of the box just by creating a subscription query and running it.

Let me test with a new mutation. Again, I'm logged in with Jane, and I'm going to create a task. This is a low-priority task, so it should be caught by my filter; it won't send any data to anyone, because I'm looking for engineering-department tasks that have high priority, or a classification higher than seven, and this is a low-priority task. I send this, the task is created in my DynamoDB table, everything works, but notice that my user Joe didn't get any data. It was filtered successfully on the AppSync backend.

Now imagine I have a high-priority task and I want Joe to receive the data: priority high, department engineering, and classification eight, so it matches our filter. The result on screen is still from the previous low-priority task, so let's send another request to create a high-priority task. There we go: notice that the subscription filter matched, and Joe received the data accordingly.

Now let's test our invalidation capabilities. I'm going to duplicate this tab and start another subscription, so Joe is subscribed from two clients.
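For reference, the two mutations from the filtering test just above might look like this (field and enum names are assumed from the demo schema, not copied from it):

```graphql
# Filtered out: low priority, classification below the threshold
mutation CreateLowPriorityTask {
  createTask(input: {
    title: "Update wiki page"
    priority: LOW
    department: "Engineering"
    classification: 3
  }) {
    id
    priority
  }
}

# Delivered: high priority, engineering, classification 8 matches the filter
mutation CreateHighPriorityTask {
  createTask(input: {
    title: "Fix production outage"
    priority: HIGH
    department: "Engineering"
    classification: 8
  }) {
    id
    priority
  }
}
```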
Imagine he has an iPad on one side and his laptop on the other, so he'll be subscribed from two different clients. Again, each one received data from the previous mutation and is waiting for the next one. Now let's take a look at our unsubscribe mutation. That mutation is linked to the specific subscription we executed before, and it sends an email as the payload. If the email matches a connection (AppSync keeps track of the context of a connection, and that context includes, for instance, the JWT tokens that carry an email claim), the connection is invalidated. That's what we defined in the mutation, and in that exact same subscription we had before, below the actual subscription filter, we added an invalidation filter: if the email claim from the JWT token matches the email sent in my mutation, then unsubscribe that client.

So let's test that. To make this easier, let's just use the explorer, pick the unsubscribe mutation, and enter my Joe user's email. Again, here I have my user Joe subscribed from two different clients, so let's unsubscribe Joe from both of those clients now. The mutation was successful, and if you notice, the clients are completely unsubscribed; you don't see the spinning wheel here anymore. I could resubscribe just by clicking this, which creates another WebSocket connection, but basically I closed all connections from Joe. Imagine that he left the company or left the engineering team. That's pretty much it for our demo; let's go back to our slides.

In case you want to test this on your own time, you can go to our repository and deploy this demo in your account in about five minutes. It uses Amplify Hosting if you want to use it, or you can test locally.
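The invalidation flow from the demo (each connection carries an invalidation filter derived from its JWT email claim, and the unsubscribe mutation's payload closes every matching connection) can be modeled roughly like this. This is a hypothetical local sketch of the semantics, not AppSync's implementation, and the client IDs and emails are made up:

```python
# Rough local model of AppSync subscription invalidation: each connection
# keeps an invalidation filter (here, the email claim from its JWT); an
# invalidation payload closes every open connection whose filter matches.
# Hypothetical sketch only, not AppSync's implementation.

class Connection:
    def __init__(self, client_id, email):
        self.client_id = client_id
        self.invalidation_filter = {"email": email}
        self.open = True

connections = [
    Connection("joe-ipad", "jdoe@example.com"),
    Connection("joe-laptop", "jdoe@example.com"),
    Connection("jane-desktop", "jane@example.com"),
]

def invalidate(payload):
    """Close every open connection whose invalidation filter matches the payload."""
    closed = []
    for conn in connections:
        if conn.open and all(
            payload.get(k) == v for k, v in conn.invalidation_filter.items()
        ):
            conn.open = False
            closed.append(conn.client_id)
    return closed

# The unsubscribe mutation sends the target email as the payload:
print(invalidate({"email": "jdoe@example.com"}))      # ['joe-ipad', 'joe-laptop']
print([c.client_id for c in connections if c.open])   # ['jane-desktop']
```

This mirrors what we saw in the console: both of Joe's clients were closed by one mutation, while Jane's connection stayed open, with no client-side teardown code involved.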
You also get a really fun real-time canvas where you can draw and have multiple clients receive whatever is being drawn on any one of the clients, exactly as in this animation. Feel free to give it a try yourself.

That's pretty much it for our presentation today. I hope we were able to successfully explain how AppSync can help you in your next development project. With AppSync you can start your GraphQL journey on AWS effortlessly; it scales with your business, you just pay for what you use, it provides powerful built-in real-time capabilities, and it allows you to unify and secure access to multiple data sources, APIs, and services.

Here you have some more interesting links. You can take a look at our launch blog, where we talked about enhanced filtering; the demo I just showed you is detailed in that blog, and you can recreate it by following all the steps in the tutorial there. There are developer resources, where you can find workshops, tutorials, guides, and videos; the case studies, where we talk about some of our customers, which you can read at this link; and a central location where you can access all AppSync-related blog posts (just go to the AppSync blog and you'll find everything there). There are also Serverless Land patterns, where you can deploy interesting serverless patterns with AppSync, and our GraphQL decision guide.

Thank you very much; I hope this was useful, and if we have time for some questions, we can address those. Thank you very much.
Info
Channel: AWS Developers
Views: 5,904
Keywords: AppSync, backend development, front-end development, mobile applications, real-time, web applications, Amazon Web Services, AWS, AWS Online Tech Talks, AWS Tutorial
Id: cv5-qzd5tDg
Length: 51min 58sec (3118 seconds)
Published: Thu Jun 16 2022