Microservices architecture | Asynchronous systems ft. Vaughan Sharman | 11:FS Explores Lightboards

Captions
This situation down here, where we have an asynchronous architecture, is much easier to plan for and can handle large, unexpected amounts of volume.

So let's set the scene. Perhaps you're in a world where you have some mainframe services and you're looking to migrate into the cloud. Lots of different organizations are at different stages of this: perhaps you're here, perhaps you're looking at this, perhaps you're in the next phase, and perhaps you're at the phase where you're starting to put microservices into the cloud, whether on AWS, GCP or Azure; there are plenty of options and plenty more besides.

Often the first step people take when moving into the cloud is to move towards a microservices environment. The most natural step most organizations take is to move to an implementation with a few services, so let's draw a few out. Imagine an architecture where requests come in from the outside world, and then perhaps we have microservice A, which talks to microservice B, which in turn talks to microservice C, with a few database lookups happening in some of these services. This all seems reasonable, and it's a fairly natural progression that many organizations find themselves in, because they want to get to a place where they can release code often, decouple teams, and start to think about the core components of their system, and perhaps migrate away from a large architecture where all of this is one big thing.

However, as your business scales, you may find this kind of architecture starts to strain. Let's look at a situation here. Say a request comes in from the internet, and you require the response from service A to come back to the caller in around 15 seconds. What this actually means is that service A now needs to call service B, which needs to call service C, and all of this needs to happen synchronously and then respond to the user, all within that time budget. This can start to get very complicated, because perhaps your system starts with three services, but over time more services get added, a dependency comes in between here, and now suddenly B has a dependency there. Things can get messy quite quickly, and if you look at the state of many companies' architectures at the moment (and I'm disappearing behind this cloud of dots here), you'll see that many organizations have this architecture. What originally seems like a very logical way to do things can get very messy very quickly.

So today I'm going to talk about an alternative, which doesn't work for all use cases but is definitely worth considering for many of them. It means moving (I will migrate down here) to a slightly different model of communicating between services. Let's start with a bus. What do I mean by a bus? I mean a way to asynchronously send messages between services. So let's call this a bus. Imagine I can put a message on this particular queue, messages will build up, and a service, or multiple services, can be listening to the same queue; when something happens, the listener receives a message and can do something with it.
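As a concrete illustration of the bus idea described above, here is a minimal Python sketch that uses the standard library's queue.Queue as a stand-in for a real message bus such as SQS, Pub/Sub or Kafka. The topic name "orders" and the payload fields are hypothetical, not taken from the video.

```python
import queue
import threading
import time

bus: dict[str, queue.Queue] = {}  # topic name -> queue of messages (in-process stand-in for a real bus)

def publish(topic: str, message: dict) -> None:
    """Put a message on a topic; the producer does not wait for any consumer."""
    bus.setdefault(topic, queue.Queue()).put(message)

def listen(topic: str, handler) -> threading.Thread:
    """Start a consumer thread that pulls messages off a topic and handles them."""
    q = bus.setdefault(topic, queue.Queue())
    def loop():
        while True:
            handler(q.get())  # blocks until a message arrives
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t

# Usage: one service listens, another publishes; neither calls the other directly.
listen("orders", lambda msg: print("consumer saw", msg))
publish("orders", {"order_id": 1, "amount_pence": 4200})
time.sleep(0.2)  # give the daemon consumer a moment before the demo exits
```

A real deployment would use a durable, managed queue rather than an in-process one, but the producer/consumer decoupling is the same.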
So let's draw this A-B-C architecture using this asynchronous methodology instead. Place the A service, B service and C service, and imagine the previous flow. The message comes in to service A, and instead of calling B directly, A will put a message down onto a queue. Previously B was the one who had to respond to that call; now B listens to the queue that A is pushing to, does something, and pushes its own output onto its own queue. We know that C cares about the output of B, so C is going to be listening to (and I seem to have gone through a pen, so apologies for the color change) the output of B, and likewise it outputs to its own queue, which A can listen to.

At first the advantages we gain from this might seem trivial, but think about how we can then deal with significantly different amounts of traffic. Imagine a scenario in the previous, synchronous setup where we have a large event that is important to our business, where perhaps we get millions of customers in one day. We suddenly might be exposed to some significant hotspots or bottlenecks in the system, whereas this situation down here, where we have an asynchronous architecture, is much easier to plan for and can handle large, unexpected amounts of volume. For example, as long as our A service can handle the initial inputs, we could be sent a million requests and they can all just be put on this queue here; our B service can chug away, do what it needs to do in its own time, slowly drip its results onto its own topic, and eventually consume through the items. The A topic can dump a million items on the queue and our system doesn't break, whereas in the synchronous system, if we dump a million requests on B, it's very likely that system is going to fall over.

What this also means is that we can start to think about the contracts that services have. So the next topic we'll talk about here is schemas (and hopefully that fits on the board). We can start to think about each of these components as isolated items in the architecture. B doesn't need to know or care about how any of the other components work; all it needs to know is which events it listens for, and it pushes out a standardized schema which any consuming service needs to know the structure of. A schema is essentially a message structure. In the synchronous system, adding more services means really understanding all of the coupling and dependencies, whereas here we can add any number of services, because we already know what each schema output looks like. Imagine we were building a bank out of this system and suddenly we had a new reporting requirement, or perhaps a new ledger-based view we needed to build off the system. We don't need to care about the entire architecture of the system; all we need to care about is the schema that B outputs. We take that information, perhaps aggregate it with the schema that C outputs, so maybe we listen to those two topics or queues, build our view of the world, output what we want to output, and we're done. It really simplifies and improves our ability to build upon our system, with a lot more confidence, reliability and knowledge that we're not going to break other things.
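To illustrate the schema idea, here is a hypothetical sketch that reuses the publish/listen helpers from the earlier snippet. The PaymentProcessed dataclass stands in for a standardized message schema: service B publishes it, and a reporting service added later only needs to know that structure, not B's internals. None of these names come from the video.

```python
from dataclasses import dataclass, asdict
import time

# Hypothetical schema of what service B publishes onto its output topic.
@dataclass
class PaymentProcessed:
    payment_id: str
    amount_pence: int
    status: str

def service_b(msg: dict) -> None:
    """B consumes A's output, does its work, and publishes its own schema."""
    event = PaymentProcessed(payment_id=msg["payment_id"],
                             amount_pence=msg["amount_pence"],
                             status="processed")
    publish("b-output", asdict(event))

def reporting_service(msg: dict) -> None:
    """A consumer added later: it only needs B's schema, not B's internals."""
    print("report line:", msg["payment_id"], msg["amount_pence"], msg["status"])

# Wire the pipeline: B listens to A's topic, the reporting service listens to B's.
listen("a-output", service_b)
listen("b-output", reporting_service)
publish("a-output", {"payment_id": "p-1", "amount_pence": 1000})
time.sleep(0.2)  # let the daemon consumers drain the queues before the demo exits
```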
So I think that was just a short overview of why this architectural pattern is worth being aware of. It's definitely not valid for all ways of building things; there are some cases, for example if you were building a bank and you want fast, sub-second responses to payments, where it wouldn't be the right model, but for most use cases it works very well. Anyway, I hope you've enjoyed this talk and perhaps learned a few things, and see you next time.
Info
Channel: 11:FS
Views: 3,516
Rating: 4.942029 out of 5
Keywords: fintech, financial technology, banking, bank, finance, financial services, uk finance, finance news, money, devops, microservices, devops tutorial, devops beginners, architecture, monolithic, cloud, azure, migrating, services
Id: WMmoEBv5CNA
Length: 10min 30sec (630 seconds)
Published: Mon Sep 07 2020