AWS Summit Online ASEAN 2020 | Architecting Event-driven Solutions on AWS

Video Statistics and Information

Captions
[Music] Hello everyone, hope you're doing well. Thank you for joining AWS Summit Online, and welcome to this session. We are really excited to be here and to share with you about event-driven architecture and how to implement it with AWS. Today we will talk about the concept and implementation of event-driven architectures. We think this is really important because we have seen more and more customers adopting event-driven architecture, but the most important thing is the implication: event-driven architecture makes their teams so much more agile and makes it so much easier to build scalable, reliable, and resilient systems. My name is Donnie, a Senior Developer Advocate working for AWS. Let's get into it.

Here is the agenda we're going to talk through today. We structured this session by starting off with an introduction to event-driven architecture, to lay a foundation on what event-driven architecture is and to give you motivation on why you should start to adopt it if you haven't. The discussion will then move to some key challenges in event-driven architecture, which is a good segue for us to introduce Amazon EventBridge. We'll cover how you can use EventBridge with various design patterns and which other AWS services can help you build an agile, scalable, and resilient system. And of course we're going to do a few demos for a better understanding of how to implement event-driven architecture with Amazon EventBridge and other AWS services.

So what is an event-driven architecture? The idea behind event-driven architecture is to build decoupled systems that run in response to events. An event-driven architecture uses events to trigger and communicate between decoupled services, and it is common in modern applications built with microservices. An event is a change in state, or an update, like an item being placed in a shopping cart on an e-commerce website. Events can either carry the state, such as the item purchased, or events can be identifiers, like a notification that an order was shipped. Event-driven architectures have three key components: event producers, event routers, and event consumers. A producer publishes an event to the router, which filters and pushes the events to the consumers. Producer and consumer services are decoupled, which allows them to be scaled, updated, and deployed independently.

Now the next question would be: why do we need to choose an event-driven architecture? If you've been to an AWS conference before, you've probably seen a slide like this, so let me reiterate why people are moving to microservices. Most of us started with a monolith, and the most common reason why we choose a monolith is that it's relatively easy to build, and there's definitely nothing wrong with that. It's way simpler to understand how the system is going to work when all your data is in one ACID-compliant database and all of the calls you're making are operating against the same memory space. However, the more our business grows, the more users we have, and the more new features we add, the more complex our system becomes; it's inevitable. One of the solutions to handle this kind of challenge is to build a distributed system. Distributed systems of microservices are way more complex, but the problem is that monoliths don't scale: if you have big, complex systems and you want to effectively leverage dozens or hundreds of engineers building features in parallel, you have to move to the microservices model. So in our journey building the system, we will eventually need to make our system more agile, scalable, and faster, and this is pretty much aligned with Conway's law, which states that system design is largely influenced by the organizational structure of the teams that build the system.
This is one of the reasons why customers are moving to microservices as they grow at a rapid pace. When we talk about modernizing our applications, in this context with microservices and event-driven architecture, it also comes back to how we can fully leverage cloud-native computing, and it makes sense to do that. This is the pattern that supports high-speed innovation, and one of the pillars of how Amazon.com successfully transformed into microservices in the last decade. There are three core architectural patterns: APIs, events, and data streams. With this kind of paradigm, components in our system are loosely joined by APIs and events, and data flows are able to grow in different directions. As a result, it allows us to do more experiments, which leads to innovation.

We have discussed, at a quick overview level, event-driven architecture, microservices, and cloud-native computing, and how they can work together. Now let's illustrate our journey to implement all of these components as our system grows. Let's assume we are building an e-commerce application: we have an order service to process orders and an invoice service to handle payments from our users. This is a really standard synchronous, API-based design. In this case, when a client makes a request to the order service, it has to make a downstream request to the invoice service, and once it gets a successful response it can respond back to the client with a success message and provide the confirmation for the customer. Everything works well with this synchronous, API-based design.

What happens when we start adding a couple more services and we need other services to integrate with your order service? It becomes like this diagram, with the order service making downstream requests to the invoice service, fulfillment service, and forecasting service. This is totally normal; as we know, as our system grows we need to add more features, and the data flow grows in ways we didn't expect. But what is problematic about a synchronous API design is that the order service needs to know everything about the other services: it needs to make all of these downstream requests before returning the response to the users. And what happens if one of these downstream services has a hiccup? Well, in the worst case our system gets really slow and returns a timeout error to the caller, and in this case nobody is happy: the services are not happy, our customers are not happy, and we are not happy.

Even if we have implemented this system as microservices, it still poses a few challenges. We talk a lot about how great microservices are, and back to our tenet, we build microservices to have a scalable and agile system by decoupling these services. But the order service still needs to know and locate the endpoint of each of these other services, as it needs to understand how to make the appropriate API calls; this is when we need service discovery, and our system becomes more complex. Also, what do we do if one of these services fails? We need appropriate retry logic and error handling. We can do vertical scaling for our cloud resources, but over the longer term we want to add more and more services to process the data, look up and query data, update stock, and send to other third-party services, and each time the order service needs to do that, it is highly coupled to the other services.
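To make the coupling concrete, here is a minimal sketch (not the session's demo code) of what the synchronous design implies for the order service; the endpoint URLs and payload shape are hypothetical:

```python
# A minimal sketch of the synchronous design: the order service must know every
# downstream endpoint and wait for each response before it can answer the client.
import requests  # third-party HTTP client: pip install requests

DOWNSTREAM = {
    "invoice":     "https://internal.example.com/invoice",      # hypothetical URLs
    "fulfillment": "https://internal.example.com/fulfillment",
    "forecasting": "https://internal.example.com/forecasting",
}

def create_order(order):
    results = {}
    for name, url in DOWNSTREAM.items():
        # Each call blocks; one slow or failing service delays the whole request,
        # and adding a new consumer means changing this code again.
        response = requests.post(url, json=order, timeout=29)
        response.raise_for_status()
        results[name] = response.json()
    return {"order": order, "downstream": results}
```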
Let us show you, in a real-life implementation, how this kind of challenge could happen to any of us. This demo is designed to illustrate the architecture shown on the screen: we have an order service serving requests via an HTTP API, and to complete a request the order service needs to make downstream calls to three services, the invoice service, the fulfillment service, and the forecasting service. Once the order service receives responses from all of them, it returns the response to the client. We are going to use Amazon API Gateway and AWS Lambda for this demo, and the idea is to illustrate the challenge of synchronous API calls when they are not handled properly.

There are two scenarios in this demo. The first scenario is the ideal one, with the client calling the order service via the HTTP API and all downstream requests made by the order service getting 200 OK responses from the other services. This scenario is ideal; everything works fine. Let's take a look. We have four Lambda functions in this demo, one for each service shown in the architecture diagram. Nothing too fancy about the functions: they only print out the event received into CloudWatch Logs, and we also have a timer there to help us simulate the second part of this demo. The order service is our main function and is integrated with API Gateway, as this is the public-facing API that a client calls whenever it needs to send a request. Now, if we hit this endpoint, it returns the responses from the other services, and if we simulate traffic using the Artillery toolkit with 100 requests, everything is still good, with all requests returning a 200 HTTP status code. All good; everyone is happy.

Now, for the second scenario, we're going to make a change in one function. The idea is to illustrate what happens if one service fails to complete requests in a synchronous API design. Let's assume that the forecasting service needs to process a really complex computation and takes 30 seconds on average to complete a single request, and let's simulate the traffic again with the Artillery tool. We're going to use the same configuration, with 100 requests hitting the same endpoint, and let's see what's going to happen. As you can see, all the requests end up with a 504 HTTP status code, which means the requests timed out. This is a classic challenge in a synchronous API design: you can do vertical scaling or increase the timeout in API Gateway, but that doesn't solve the underlying problem; it's the design.
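As a rough sketch of what the demo describes, a forecasting function like the one below, with a sleep standing in for the heavy computation, is enough to push every synchronous call past API Gateway's 29-second integration limit and into 504s. The handler shape is an assumption, not the presenter's exact code:

```python
# Sketch of a slow forecasting Lambda function: the sleep simulates a
# compute-heavy forecast. Called synchronously through API Gateway, every
# request now exceeds the integration timeout and the caller gets a 504.
import json
import time

def lambda_handler(event, context):
    print("forecasting received:", json.dumps(event))  # visible in CloudWatch Logs
    time.sleep(30)                                      # simulate the slow computation
    return {"statusCode": 200, "body": json.dumps({"service": "forecasting"})}
```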
Now, in the long run we might need to add more services, and our architecture then looks like this. You might have already guessed it: this is an example of how we could move to event-driven architecture to handle the challenge we discussed earlier. So instead of having the order service call each of the downstream services, it now sends an event, for example "an order was created", and other services subscribe to this particular event. Maybe not all services are interested in taking action on a particular event, maybe only one or two, but with this kind of architecture we give those services the freedom to act independently based on the tasks they're serving. Now that's great: we can decouple our system components and practically scale each one independently based on demand. The next question is, what do we do with these arrows? In this case, we need an event router.

To reiterate, in event-driven architectures we need three key components: event producers, event consumers, and event routers. A producer publishes an event to the router, which filters and pushes the events to consumers. Producer services and consumer services are totally decoupled, which allows them to be scaled, updated, and deployed independently. The event router automatically filters and pushes events to consumers, acts as an elastic buffer that accommodates surges in workloads, and is also a centralized location to select and filter events, audit your application, and define policies. By placing an event router in our architecture we get several advantages: we can transfer data between systems; we can develop, scale, and deploy services independently from other teams; we can fan out events without having to write custom code to push to each consumer, because the router pushes the event to each system, and each can process the event in parallel with a different purpose; and lastly, an event router establishes indirection and interoperability among the systems, so they can exchange messages and data while remaining agnostic of one another.

So what kind of service can we use? Do we need to build one, or is there a better option? Introducing Amazon EventBridge. EventBridge removes the friction of writing point-to-point integrations by letting you easily access changes in data that occur in both AWS and SaaS applications via a highly scalable central stream of events. With EventBridge you get a simple programming model where event publishers are decoupled from event subscribers, which allows us to build loosely coupled, independently scaled, and highly reusable event-driven applications. It's fully managed, so it handles everything from event ingestion and delivery to security, authorization, and error handling, making it easy to build scalable event-driven applications. And because EventBridge is serverless, there is no infrastructure to manage and you only pay for the events you consume. Amazon EventBridge is a significant addition to AWS's offerings for building event-driven architecture, giving you the freedom to choose the best tool for your case.

Now, there are two main types of routers used in event-driven architecture: the event bus and the event topic. You can use Amazon EventBridge to build an event bus, and you can use Amazon Simple Notification Service, or SNS, to build an event topic; you can also integrate Amazon SNS and Amazon SQS to build a powerful pub/sub system. To distinguish when to use EventBridge and when to use SNS, let us help you with this simple guidance: Amazon EventBridge is recommended when you want to build an application that reacts to events from SaaS applications, AWS services, or custom applications, while on the other hand Amazon SNS is recommended when you want to build an application that reacts to high-throughput, low-latency events. EventBridge uses a predefined schema for events, which helps you select and filter events, while SNS topics are agnostic to the event schema.

So let's see Amazon EventBridge in action. In this demo we're going to see how Amazon EventBridge can solve the synchronous API design challenge by changing the architecture to leverage EventBridge as an event router that handles the asynchronous communication between services. This is how it looks: whenever the order service receives a request, it publishes an event called Order Created to EventBridge. EventBridge then pushes the event to the respective services that are targets for that particular event. Furthermore, a few of the services publish their own events, and EventBridge handles and pushes those events to their target services as well.
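For reference, this is roughly what a target such as the fulfillment service receives once EventBridge pushes the Order Created event to it. The envelope fields come from EventBridge; the source name and payload shape below are hypothetical:

```python
# Sketch of a consumer Lambda target. EventBridge delivers the full event
# envelope: "detail-type", "source", "time", "region", "resources", and
# "detail"; everything inside "detail" is the producer's payload.
import json

def lambda_handler(event, context):
    if event.get("detail-type") != "Order Created":
        return {"status": "ignored"}

    order = event["detail"]                               # hypothetical payload shape
    print("fulfilling order", order.get("orderId"), json.dumps(order))
    # ... reserve stock here, then publish "Fulfillment Completed" (shown below) ...
    return {"status": "ok"}
```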
An additional event that we're going to create is Fulfillment Completed, published by the fulfillment service, which will be consumed by the logistics service. The idea is to have a complete data flow whenever the system receives an order creation. Now, these are the Lambda functions that I use for this demo. You might notice that I've added a few additional services, and they will be used in this demo to demonstrate how scalable and robust a microservices system can be when we build it with asynchronous communication through an event router, so it's more complex compared to our previous demo. One thing I'd like to emphasize is that none of these functions knows of the existence of the others; there is no link whatsoever. All the communication goes through EventBridge as these functions publish their own events.

Okay, so moving on, the first question is how to publish an event to EventBridge. This is a code snippet showing how you can use the AWS SDK, and this one is written in Python using boto3. It's really straightforward: you only need to call the PutEvents API and fill in the parameters, such as the source, the detail type, and the event bus name. The detail field holds our payload, so your data goes there, and all of these parameters can be used when we define a rule in Amazon EventBridge.

Now let's see how I configured EventBridge to handle the event. Let's go to the Rules section and select the event bus. On this page you can see that we have a few rules, one for each service: the fulfillment service, invoice service, logistics service, and notification service. Let's take a look at one of these rules. One of the most important components in EventBridge is the event pattern: whenever EventBridge receives an event, it matches the event against the pattern you have defined before routing the event to the target. In this context, if the detail type contains Fulfillment Completed, the event goes to the logistics service, which is exactly what we want: whenever the line items are fulfilled, the event goes to the logistics service so the items can be delivered to customers.
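Here is a sketch of both sides described above, using boto3: the PutEvents call the fulfillment service makes, and the rule with the matching event pattern that the console walkthrough creates, expressed here through the API. The bus, source, and function names are assumptions:

```python
# Sketch of publishing an event and configuring the matching rule with boto3.
import json
import boto3

events = boto3.client("events")

# Producer side: the fulfillment service publishes "Fulfillment Completed".
def publish_fulfillment_completed(order_id):
    events.put_events(
        Entries=[{
            "EventBusName": "ecommerce-bus",             # hypothetical bus name
            "Source": "com.example.fulfillment",         # hypothetical source
            "DetailType": "Fulfillment Completed",
            "Detail": json.dumps({"orderId": order_id}), # your payload goes here
        }]
    )

# Router side: a rule whose event pattern matches that detail type and routes
# matching events to the logistics service Lambda function.
def create_logistics_rule(logistics_lambda_arn):
    events.put_rule(
        Name="fulfillment-completed-to-logistics",
        EventBusName="ecommerce-bus",
        EventPattern=json.dumps({"detail-type": ["Fulfillment Completed"]}),
    )
    events.put_targets(
        Rule="fulfillment-completed-to-logistics",
        EventBusName="ecommerce-bus",
        Targets=[{"Id": "logistics-service", "Arn": logistics_lambda_arn}],
    )
    # Note: created outside the console, the logistics function also needs a
    # resource-based permission allowing events.amazonaws.com to invoke it.
```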
So if we go to the fulfillment function, we can see that this function publishes Fulfillment Completed to EventBridge. There's no downstream call to the logistics service; it only publishes an event. Let's also change the sleep time to 30 seconds, the same as in our previous demo, to simulate the fulfillment service taking 30 seconds to complete one request. Similar to our previous demo, the flow starts whenever we hit an API request: the order service is our public API, integrated with Amazon API Gateway, and if we open this endpoint link it gives us an HTTP OK status code and an empty response.

So let's test it out. This is our dashboard, nothing too fancy here; it just gives us info from our backend whenever it completes processing, so when something finishes it updates this dashboard. Whenever we make an API call to the order service endpoint, the order service publishes an event that the invoice service consumes so it can send the invoice to the customer. The event that the order service publishes is also consumed by the fulfillment service. Now remember that the fulfillment service takes a bit of time, around 30 seconds, to complete a request, and once it completes the request it publishes another event, Fulfillment Completed, which is consumed by the logistics service; that will be indicated in this column as shipping to the customer. Okay, cool, so let's start this demo. Let's use the Artillery tool again to create 100 requests. As you can see, the order service is now updating, and the invoice service is being called and invoices are being sent as well. Good, now the fulfillment service is trying to complete the job, and you'll notice that once fulfillment completes, it's shipping to customers as well. All requests were handled properly, no timeouts, quite fast I should say, and everyone is happy. And that's how you use Amazon EventBridge to choreograph asynchronous communication between microservices.

So let's take a look at how EventBridge works. It all starts with an event source: this can be any one of many AWS services, your custom applications, or SaaS applications. When you use a SaaS application integration, there is a special resource called a partner event source that provides a logical connection between the partner's systems and your AWS account, without the need for provisioning and managing cross-account IAM roles or credentials. At the core of EventBridge are event buses. If you are familiar with the CloudWatch Events default event bus, this is exactly the same thing, except you can create your own custom event buses as well as dedicated event buses for ingesting partner events. Once you have an event bus, you can associate rules with it. Rules allow you to match against fields in the metadata and payloads of the ingested events and determine which events should get routed to which destination.

We mentioned this earlier and just want to reiterate: by using Amazon EventBridge we can expand our event sources beyond AWS services to third-party integrations. With EventBridge you can easily integrate your application with, for example, Zendesk, Datadog, MongoDB, SugarCRM, Auth0, Segment, and more are coming. We'd love to think that this kind of integration makes it easier and more seamless for developers to subscribe to standard events from the third-party services they might already use.

At this point we've established a foundation on event-driven architecture and how to implement it with AWS services, but the next thing we really need to discuss, whenever it comes to event-driven architecture, is how to manage the event types. As your application grows, you have more features, more services, and definitely more events created by producers, routed through routers, and consumed by consumers. This is a non-trivial task, because as your system grows it becomes more interconnected through events, and you need to spend more effort on defining the right event structure to produce and on finding the right events to consume, but also on understanding the structure: for example, what is the event structure for Order Created, how is it different from Order Cancelled, what do I do if I need to react to a Payment Refunded event, and what do I need to send for a subsequent event? Because producers are decoupled from consumers, they have their own autonomy to define each particular event, and in this decentralized communication it's really a good idea to have centralized event structure management.

Introducing the Schema Registry and Discovery, which is now in preview. The Amazon EventBridge Schema Registry stores event structures, or schemas, in a shared central location and maps those schemas to code for Java, Python, and TypeScript, making it easier for you to use events as objects in your code. Now, the cool thing is that schemas from your event bus can be automatically added to the registry through the schema discovery feature, and for those who are using JetBrains products or Visual Studio Code, you must be happy to hear that there is an IDE toolkit to connect to and interact with the Schema Registry. So how does this work?
To react to events in their applications, developers need to know the event structure, or schema. The process of finding events and their schemas is typically manual: developers have to look through documentation or talk directly to the event's developer. The EventBridge Schema Registry makes accessing schemas easy by centrally storing the schemas generated by all your event sources, so any developer in your organization can access them. You can also download code bindings, which allow you to represent the event as an object in your code and take advantage of IDE features such as validation and autocomplete. You can download code bindings for your programming language and use that code in your IDE via the AWS Toolkit, or in the AWS console. When the schema discovery feature is turned on, the schemas of all events sent to an event bus are automatically added to the registry, so there's no need to create schemas manually.

I'd also like to highlight the importance of state, because in any kind of architecture each of our services needs to interact with other services to complete a specific workflow, and understanding this concept will definitely help us complete our microservices with event-driven architecture. Workflows are made up of a series of steps, with the output of one step acting as an input to the next, and if you think about it, most API requests are actually a subset of this so-called workflow. In e-commerce we have a workflow to complete an order, which might include checking the inventory, processing the payment, updating the inventory, sending a notification to the user, and sending a job order to the logistics service. So how do we track the state across all of these services? How do we make sure things execute in the right order at the right time? We need a state machine. In simple terms, we need a way to orchestrate the different workflows between all of these services, set timing on tasks, interrupt execution, have tasks send heartbeats, and monitor and audit at fine granularity.

Many applications contain workflows that must execute tasks in a reliable and repeatable sequence using independent components and services. Before Step Functions, customers who automated business processes had to spend weeks writing and then managing complex custom workflow code and interfaces. Now our customers use AWS Step Functions to add resilient workflow automation to their applications, and workflows on Step Functions require less code to write and maintain. Step Functions integrates with AWS Lambda, which makes it easy to add Lambda tasks to a workflow, such as calculating a credit score as part of a loan application workflow or creating a customer account as part of a subscription workflow. Developers can connect and coordinate various AWS services, databases, and messaging services in minutes without writing integration code, and these service integrations simplify and speed up the delivery of solutions for workloads such as order processing, report generation, and data analysis pipelines. By integrating AWS Step Functions and AWS Lambda, it becomes really easy to coordinate the components of distributed applications and microservices using visual workflows: Step Functions automatically triggers and tracks each step and retries when there are errors, so your application executes in order and as expected. It also logs the state of each step, so when things do go wrong you can quickly diagnose and debug problems. And you can also integrate AWS Step Functions into your event-driven architecture to manage specific workflows and reuse the existing Lambda functions that you have.
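As an illustration of such a workflow, here is a minimal Amazon States Language sketch of the complete-order steps mentioned above, created with boto3. The state names, ARNs, and role are hypothetical, not the session's actual workflow:

```python
# Sketch: an Amazon States Language definition that chains Lambda tasks for
# the order workflow, with a retry on the payment step, registered via boto3.
import json
import boto3

ACCOUNT = "123456789012"  # placeholder account id
definition = {
    "Comment": "Complete-order workflow (illustrative)",
    "StartAt": "CheckInventory",
    "States": {
        "CheckInventory": {"Type": "Task",
            "Resource": f"arn:aws:lambda:ap-southeast-1:{ACCOUNT}:function:check-inventory",
            "Next": "ProcessPayment"},
        "ProcessPayment": {"Type": "Task",
            "Resource": f"arn:aws:lambda:ap-southeast-1:{ACCOUNT}:function:process-payment",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "UpdateInventory"},
        "UpdateInventory": {"Type": "Task",
            "Resource": f"arn:aws:lambda:ap-southeast-1:{ACCOUNT}:function:update-inventory",
            "Next": "NotifyCustomer"},
        "NotifyCustomer": {"Type": "Task",
            "Resource": f"arn:aws:lambda:ap-southeast-1:{ACCOUNT}:function:notify-customer",
            "End": True},
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="complete-order",
    definition=json.dumps(definition),
    roleArn=f"arn:aws:iam::{ACCOUNT}:role/StepFunctionsExecutionRole",  # hypothetical role
)
```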
So at this point we have covered the basic concepts of event-driven architecture and the AWS services to help you: we have Amazon EventBridge to act as the event router, with third-party SaaS integrations; we have AWS Step Functions to help us coordinate our business workflows; and we have AWS Lambda as our serverless compute. So let's see how all of these components can work together in our next demo.

In this demo we'll demonstrate how we can use Amazon EventBridge to receive events from SaaS applications, using partner integration and webhooks. The use case in this demo is how to process payments and support tickets. Freshworks is a complete customer engagement platform, and one of their products is Freshdesk. We're going to use Freshdesk from Freshworks as the ticketing application, which has a built-in integration with Amazon EventBridge; with this integration you don't need to implement your own webhook, configure your API, or do validation before processing any events. You just need to enable it from the SaaS application and configure how you want EventBridge to route the events. However, you can also use EventBridge with other SaaS applications using webhooks, and we're going to use Stripe as an example. Stripe is an online payment-processing service for internet businesses, and in this case we're going to implement a webhook endpoint to receive events whenever they are emitted by Stripe.

So the story goes like this: whenever there's a payment event triggered by Stripe, it calls our webhook, and our webhook publishes the event to Amazon EventBridge. Amazon EventBridge then routes the event to the respective service, in this case the payment service. Once it completes its operation, the payment service publishes a new event called Payment Processed to Amazon EventBridge, and EventBridge routes this event directly to AWS Step Functions to process the data and upload it into Amazon S3, to be displayed in Amazon QuickSight as our dashboard. On the other hand, if a new ticket is created in Freshworks, it automatically triggers Amazon EventBridge because of the partner integration. EventBridge then routes the event to the support service, which needs to do sentiment analysis on the ticket. The support service uses Amazon Comprehend to perform sentiment analysis and determine whether the ticket is positive or negative, which is super useful for getting feedback from our customers so we can react quickly. After the support service processes the event, it publishes an event called Ticket Processed to EventBridge, which triggers the Step Functions workflow again and uploads the result to Amazon S3. The role of AWS Step Functions here is to help us determine whether the data is a payment type or a ticket type and do the processing before uploading the data to Amazon S3. Amazon EventBridge can also integrate easily with other AWS services, and in this case it's AWS Step Functions.
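Here is a sketch of the webhook half of that flow, assuming an API Gateway-backed Lambda endpoint registered as the Stripe webhook. The bus and source names are hypothetical, and a production handler should also verify Stripe's signature header before trusting the payload:

```python
# Sketch of a webhook Lambda handler: Stripe POSTs a payment event to the
# API Gateway endpoint, and the handler republishes it to EventBridge as
# "Payment Received" for the payment service to consume.
import json
import boto3

events = boto3.client("events")

def lambda_handler(event, context):
    stripe_event = json.loads(event["body"])          # payload POSTed by Stripe

    events.put_events(
        Entries=[{
            "EventBusName": "payments-bus",            # hypothetical bus name
            "Source": "com.example.stripe-webhook",    # hypothetical source
            "DetailType": "Payment Received",
            "Detail": json.dumps(stripe_event),
        }]
    )
    # Acknowledge quickly so Stripe does not retry the delivery.
    return {"statusCode": 200, "body": json.dumps({"received": True})}
```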
Cool, so now we understand how this demo works; let's see how easily we can integrate partner sources into Amazon EventBridge. This is the Freshdesk dashboard, and to integrate EventBridge all I need to do is go to the apps integration page. I already have EventBridge installed; to integrate it, you just search for Amazon EventBridge in the marketplace and install the application. Once that's done, you'll see a new record under partner event sources, and once you have it you need to associate it with an event bus; EventBridge will create a dedicated event bus for the integration. Now we have a dedicated event bus, and the next thing we need to do is create the rule. To create the rule, specify the event bus, and for the event pattern choose the service partners; there are plenty of partners you can integrate with in this list, and for this case let's choose Freshworks. After that we need to define the target, our support service Lambda function, to handle the events from Freshdesk, and click the Create button to create this rule.

Now that we have Freshworks integrated with Amazon EventBridge, let's see how we can integrate Stripe. This is the Stripe dashboard, and I've built a webhook endpoint to receive any events triggered by Stripe. This is the Lambda function that handles the Stripe events: whenever it receives an event from the Stripe webhook, it publishes a Payment Received event into Amazon EventBridge. Let's create a rule for this integration. First we go to the Amazon EventBridge event bus page, select the event bus, and click the Create rule button; then we define the event pattern. We want all events with the detail type Payment Received to be handled by this rule. Once we have done this, we select the target, our payment service Lambda function, and we're done.

With everything set, let's test it out. First, let's start by emitting an event from Stripe. This is the Stripe dashboard, and I'm on the Webhooks page in the Developers menu; what I need to do now is click the Send test webhook button. Okay, now we have our event emitted from Stripe. Let's head to Freshdesk from Freshworks to create a ticket. I'm creating a new ticket here saying "such an awesome service" and "thank you". Cool, now we have all the events emitted from our SaaS applications. I believe all the microservices are done processing, so let's head to Step Functions and see the results. Here we are at the Step Functions dashboard; as you can see, we have two execution records here, and both of them succeeded, which is good news. Let's see the details: in the first state machine execution we can see that it processed the payment from Stripe and successfully uploaded it into Amazon S3, nice. In the second state machine execution, it processed the ticket and successfully uploaded it into Amazon S3 as well. Cool, let's see if the files were generated in the Amazon S3 bucket. Okay, we can see the files in Amazon S3; there are two types of files generated by our microservices, one for payments and one for support tickets, and from this data we can create a visual dashboard using Amazon QuickSight. This is how our dashboard looks in QuickSight, getting its data from Amazon S3. And that is our demo, to help you understand how to integrate SaaS applications with Amazon EventBridge. This integration unlocks many possibilities: you can use data from your SaaS applications to trigger workflows for customer support, business operations, and more.

So that's our take on event-driven architecture. It's our hope that by implementing event-driven architecture you can build more reliable, resilient, and scalable systems, so you can continue to experiment and be more agile, with faster innovation, to give more value to your customers. Thank you for attending; I hope you enjoyed this session and that we had fun together exploring how to architect event-driven solutions with AWS. We really appreciate your feedback so that we can better understand the topics and services you would like to know more about, so please do take the time to fill out our survey and let us know what you think. Once again, thank you for tuning in. I'm Donnie, and see you next time. [Music]

[Music] As a solutions architect, having AWS certifications allows me to measure how much I know about AWS solutions.
I would recommend the AWS training classes because they offer you information that comes directly from AWS itself. I started with Solutions Architect Associate and then SysOps Administrator Associate. My journey towards the SysOps Administrator Associate came about when I was trying to build a solution and suddenly realized how much it was going to cost on the bill; that's the kind of knowledge the SysOps Administrator Associate gives you. Day to day, most administrators spend a lot of time fixing application issues, and the SysOps Administrator certification, particularly on the AWS platform, gives much more insight into how to troubleshoot and fix things on the application side, so it's definitely worth spending some time to do multiple certifications. [Music] The professional-level certification is quite challenging to get; it definitely validates a person who has actual skills, and I would suggest taking it when you are an engineer who really thinks you're at the level where you can prove you can work on quite challenging problems. [Music]

I'm the Director for Enterprise IT at Voyager. Our role in the business is to create and implement solutions that give Voyager Group the best technology environment. Voyager Group has been an AWS customer since 2014, and today we are already using more than 58 services across our various businesses, like EC2, S3, and RDS, Redshift for data, and Lex and Lambda for our chatbot platform, which is also integrated through social media touch points, where transactions are streamed to our case management. Our business is in a better position to positively innovate and support better customer experiences. At Voyager our goal is digital and financial inclusion, powered by digital; we do this by building innovations that matter to people's everyday lives, and our core focus is the customer. Through our main platforms we offer consumer and enterprise solutions in the areas of payments and digital fintech. Voyager's award-winning technology platforms support the following digital services: PayMaya is the most recognized prepaid payment wallet service, enabling Filipinos to shop online, pay bills, buy load, and send money; PayMaya Business is the largest mobile acquiring service, enabling enterprises of all sizes to accept digital payments; Lendr is our digital lending platform; and we also offer a platform for free access to the internet. We live and breathe technology, and it's very much part of our DNA.

In implementing an ERP configured for a digital business, we looked for a technology platform with proven performance that supports scale-up and can reduce IT complexity. By using AWS, our engineers can move ahead at maximum speed without much provisioning and maintenance overhead. On the security aspect, including regulatory compliance, we take it very seriously as part of our solutions and processes to ensure data is protected. Recently, together with PLDT, we welcomed new investors; this was the biggest fundraising for a company in the Philippines, and we're quite excited to accelerate further. We've been moving quite fast, but we will move even faster. SAP S/4HANA is designed to drive our process improvements, and we use AWS because of its interoperability, our global implementation approach, and the flexibility to integrate with other corporate information systems in a multi-application environment, for our processes on customer acquisition, data lakes, analytics platforms, and many more. That's why we use AWS together with SAP to enable Voyager Group
with a technology platform that offers quick availability, faster deployment cycles, and flexibility with the shortest path to production, along with a built-in cloud security framework and business continuity.
Info
Channel: AWS Events
Views: 2,475
Keywords: ML, Cloud Computing, AI/ML, Amazon Web Services, Lam, AWS, Elgin, Amazon SageMaker, Machine Learning, AWS Summit Online ASEAN re:Cap
Id: CHEkey1TKmU
Length: 39min 58sec (2398 seconds)
Published: Fri Aug 28 2020