Reactive Microservices with Akka and Docker by Heiko Seeberger

Captions
All right, I think we'll get started. Welcome, everybody, to this talk about reactive microservices with Akka and Docker — as many buzzwords as possible in one short title. My name is Heiko, and I work for the fantastic company codecentric. We are a sponsor, so check out our booth if you want to talk. By the way, I'm also the author of this little German Scala book, and we're giving it away at our booth if you're interested.

So, what is reactive? Who has read the Reactive Manifesto? Show me your hands — most of you, that's great. In a nutshell, reactive defines a couple of traits or qualities which modern systems should have, and the most important of these traits is responsive: a system should always respond to requests, if at all possible. Apart from a lot of functional requirements, there are two really important non-functional qualities required to be responsive: resilient, meaning the system stays responsive even in the face of failure, and elastic, meaning the system remains responsive under load. And if you make the whole thing message driven — underpin it with asynchronous message passing — you can enable those other traits, for example because message passing decouples the sender from the receiver, and you can have delegation of failure handling and all these cool things.

Next question: what's a microservice? For me, size doesn't matter; it's more the UNIX philosophy: do one thing, and hopefully do it well. One other really important aspect for me is that a microservice should be able to act autonomously, and that means it has to be isolated from others, it has to be self-contained, and it needs to own its data — we will see what that means in an example later. By the way, this booklet by Jonas Bonér has inspired a lot of the ideas and terminology I'm using here. If you haven't read it, check it out; it's a quick read.

So, if you have one microservice — hmm, that alone doesn't really give you too much.
You need a lot of microservices, and they need to collaborate. In this booklet Jonas says: one microservice is no microservice — they come in systems. That quote is actually borrowed; the idea goes back a long way. And I think it's really important to understand that for this sort of collaboration we have to think about how exactly it happens — synchronously or asynchronously. We'll look at that later in detail, but for me it is totally clear that in order to have autonomy, those microservices must be isolated, and you can achieve isolation using asynchronous messaging — asynchronous communication — because that decouples your services in time. That means you don't have to wait for the other service to be available in order to talk to it, and you can also contain failure: you can compartmentalize your system. So what I want to convey is that one microservice should not access another one in a synchronous fashion when serving requests, because that would totally break isolation — that would mean no more autonomy. For a reactive microservice this is a no-go.

So how does Akka fit into this picture? I think the actor model is a really great fit for any sort of reactive system, whether a microservice or not. If you look at Akka, it's not a single JAR anymore; it offers a lot of high-level modules for scalability and resilience. For example, you can use Akka HTTP, which was released with Akka 2.4 a couple of months ago, and that is something you probably need for almost every microservice — usually they have an HTTP API. And there's another module called akka-sse, for server-sent events, which I initially authored, and that is really great for asynchronous, event-based collaboration, as we will see later in the example.

Next question: how does Docker fit? First of all, containers provide isolation — maybe not perfect isolation, but at least some sort of isolation on a host. And in my opinion, using containers and the Docker workflow —
— or the container workflow in general — makes automation of the lifecycle a lot easier. If you look at Docker in particular, there's a lot of tooling and infrastructure available, namely Docker Compose and Docker Cloud. You should check out Docker Cloud; it's pretty awesome.

OK, for the rest of the talk I will show you a demo. It is, of course, a very simple chat application, and there will be two services: a user service to identify users, and a chat service that needs some information from that user service. I want to show you how the chat service can get access to that information, and we'll get there step by step. I will sit down, because I have to do some hacking and coding, and that's easier when sitting — I hope that's OK with you.

The first thing I would like to show you is just Akka HTTP — how to run the server. There are two things you need to do: bind to a socket, and handle requests to produce responses. Those handlers are implemented using Akka Streams — they are a Flow, a linear processing pipeline taking requests and emitting responses — but you don't really have to dive into the details of the Akka Streams API, because there's a nice routing DSL which can be used instead. I want to show you how to build a really very simple web service; we call it the user API. One thing you need to do is use the Http extension of Akka HTTP: call the bindAndHandle method with a route, give it an address and a port, and then react to either a successful server binding or a failure. That's all you need to get the service started. The interesting part is the route: the apply method creates the route, and currently it doesn't do a lot, but it hopefully gives you a feel for this DSL. The directives used here are a path-matching directive to match the root path — a single slash — and then we only filter GET requests; of course there's post, delete, and all the other HTTP verbs. And if such a request comes in, we complete it, which means we create a response with "Hello World". Great, huh?
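The steps just described — bind to a socket, hand the binding a route built with the routing DSL — can be sketched roughly like this, assuming the Akka 2.4-era Akka HTTP API (`bindAndHandle`, `ActorMaterializer`); the names here are illustrative, not necessarily the exact code shown in the talk:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer
import scala.util.{ Failure, Success }

object UserApi extends App {
  implicit val system = ActorSystem("gabbler-user")
  implicit val mat    = ActorMaterializer()
  import system.dispatcher

  // The route: match the root path, filter GET requests, complete with a string
  val route =
    path("") {
      get {
        complete("Hello World")
      }
    }

  // Bind to a socket and hand requests to the route
  Http().bindAndHandle(route, "0.0.0.0", 8000).onComplete {
    case Success(binding) => println(s"Listening on ${binding.localAddress}")
    case Failure(cause)   => println(s"Can't bind: $cause"); system.terminate()
  }
}
```

The route is implicitly converted into the `Flow[HttpRequest, HttpResponse, Any]` that `bindAndHandle` expects, which is why you never have to touch the streams API directly here.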
OK, that's how it looks. In order to run it on Docker, I have to show you how to create images. This can be done with sbt and the fantastic sbt-native-packager plugin — the folks downstairs even have t-shirts for it, go grab one, they're awesome. With that really smart plugin you just have a couple of settings in your project, and using the task docker:publishLocal (or docker:publish) you can create your Docker images. Let me show you. First let's take a look at the build.sbt file, where you have to put a couple of settings for Docker — probably the most important one is the base image, using Java 8. With that in place you can run docker:publishLocal, and if you're familiar with Docker you probably recognize these log lines. Now we have the latest version of the gabbler-user service image, so let's run it.

I have a simple script — let me show it to you — that essentially just wraps the docker run command with a couple of flags that are necessary; in particular I want to publish the exposed port to the host so I can access it. If I run gabbler-user, the Docker container is started, I can check the log, and it looks good — it seems to listen. So let me see what happens if I call it: I just use the root path, and I get my "Hello World". So Akka is up and running on Docker.
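The Docker-related settings in build.sbt could look roughly like this — a minimal sketch using sbt-native-packager keys; the exact port and maintainer are assumptions, not values from the talk:

```scala
// build.sbt — requires sbt-native-packager in project/plugins.sbt
enablePlugins(JavaAppPackaging, DockerPlugin)

dockerBaseImage    := "java:8"            // base image providing Java 8
dockerExposedPorts := Vector(8000)        // port the HTTP API listens on
maintainer         := "you@example.com"   // placeholder
```

With these settings, `sbt docker:publishLocal` builds the image into the local Docker registry, and a plain `docker run -p 8000:8000 <image>` publishes the exposed port to the host.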
Of course, a proper user service would not only say "Hello World"; it would consume and produce properly formatted requests and responses, and usually an HTTP API uses JSON. Akka HTTP has a really neat feature called marshalling, which means you can complete your requests with normal domain objects. We have already seen that — in our case the domain object was a String — but you might have asked yourself: why can I complete a request with a String? I need an HTTP response, right? What you really need, type-wise, is a ToResponseMarshallable — something that can be marshalled as a response. And of course strings can be marshalled: the response will have a strict entity, the status code will be 200 OK, and the encoding is taken care of as well.

If you do JSON marshalling, you of course have to tell Akka HTTP how exactly to marshal your domain objects. Akka HTTP supports spray-json, XML, and Jackson out of the box, and then there's the akka-http-json project, which I started and which is a great example of a community-driven project, because I didn't write that much code — it's all the other contributors. akka-http-json supports additional JSON libraries like circe, Play JSON, and so on. I personally really like circe, because it has a fantastic feature which I will show you in a moment: it can automatically create the decoders and encoders — the marshalling infrastructure — needed for case classes and also for sealed trait hierarchies. That's really pretty awesome.

So let me show you how a complete user service would look. We are now talking to a user repository, but the most important aspect here is that we're using more of the directives to properly implement a couple of API features. When a GET request comes in on the users path, we complete it with a future: we use the ask pattern from Akka to ask the user repository for GetUsers, and the message we get back is a Users object, which is just wrapping a set of users. With that in place we can complete the request with a set of our domain instances, and that works because of two imports: the circe support import, which comes from the akka-http-json library, and the automagic import that makes all the heavy lifting possible. To be honest, that comes at a price: there are macros working under the hood.
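The combination just described could be sketched like this — assuming the akka-http-json circe support of that era (`CirceSupport`) and circe's automatic derivation; `GetUsers`, `Users`, and `userRepository` are illustrative names standing in for the talk's actual code:

```scala
import akka.actor.ActorRef
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.pattern.ask
import akka.util.Timeout
import de.heikoseeberger.akkahttpcirce.CirceSupport._ // circe (un)marshalling for Akka HTTP
import io.circe.generic.auto._                        // "automagic": macro-derived codecs

final case class User(username: String, nickname: String)
final case class Users(users: Set[User])
case object GetUsers

// GET /users — ask the repository actor and complete with the future;
// circe derives the JSON encoding for the Users case class.
def usersRoute(userRepository: ActorRef)(implicit timeout: Timeout): Route =
  path("users") {
    get {
      complete {
        (userRepository ? GetUsers).mapTo[Users]
      }
    }
  }
```

Completing with a `Future[Users]` works because Akka HTTP provides a marshaller for futures of anything that is itself marshallable.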
So compile times get even slower — and the Scala compiler is already pretty slow — but it's very convenient: you don't have to do anything to your case classes, and if you really want, you can still write those decoders and encoders manually.

Apart from getting users, we of course also have the ability to POST users in order to create new ones, and here I have something which I call a diverging response: if I send this AddUser message to the user repository, there are two possible outcomes — well, plus the failure case, because I get back a future — but in the successful case I could get either a UsernameTaken response or a UserAdded event. I need to deal with that by matching on the diverging options, and that can be done with the onSuccess directive. I don't want to dive into all the details, but hopefully you can see that there are a couple of directives which are really powerful and give you a lot of flexibility to build your route.

Let me just build that again — OK, now we have it. Let me again start the user service and take a look at the logs — that looks good. And now we can do the following: first just GET the current users, which gives us back an empty array; then, for example, create a first user — that worked; and if I GET again, I get an array with one element. You can figure out how the marshalling works for case classes — it's trivial, just mapping the fields to keys in the JSON object. And if I try to add the same user again, I get "username taken". So Akka HTTP takes care of a lot of things — content types, content negotiation — and you even get nice status codes.

OK, great: now we have a system which does what it should — you can add users and remove users, all these things. But it's of course not yet reactive. Why?
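The "diverging response" idea — one command, two possible replies — can be shown framework-free. This is a minimal sketch with illustrative names, not the actual gabbler API:

```scala
// Adding a user either succeeds with a UserAdded event
// or fails with UsernameTaken — two diverging outcomes of one command.
sealed trait AddUserReply
final case class UserAdded(username: String)     extends AddUserReply
final case class UsernameTaken(username: String) extends AddUserReply

final case class UserRepo(users: Set[String]) {
  // Validate the command, then return the new state plus the diverging reply
  def addUser(username: String): (UserRepo, AddUserReply) =
    if (users.contains(username)) (this, UsernameTaken(username))
    else (copy(users = users + username), UserAdded(username))
}

object DivergingResponseDemo extends App {
  val (repo1, reply1) = UserRepo(Set.empty).addUser("heiko")
  val (_, reply2)     = repo1.addUser("heiko")
  println(reply1) // UserAdded(heiko)
  println(reply2) // UsernameTaken(heiko)
}
```

In the route, `onSuccess` would pattern-match on exactly such a reply type to turn each case into a different HTTP response.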
Because if I just remove the container — sorry — it's gone, no longer available. So it's not really super reactive; it's not resilient, and we need to take care of that. One important piece here is persisting the state of actors. I haven't shown you in detail how this user repository is implemented, but it's just an actor: you send messages to it — AddUser, RemoveUser — and it has internal state, and once you stop that actor, or just crash the whole system, that state is gone. The question we have to answer is: how can we restore the state when we restart or start the actor? And the answer is Akka Persistence, which is event sourcing.

Who is familiar with event sourcing? OK, most of you. For the others, in a nutshell: event sourcing is really different from traditional create-read-update-delete persistence, because you never delete anything and you never overwrite anything. Instead of storing the state, you store all the state changes, which are modeled as events. In Akka Persistence we create an event for any valid command — a valid command would be AddUser with a username which hasn't been taken yet, which is still free — then we create an event called UserAdded and let the journal persist that event. The journal is the abstraction Akka Persistence provides for the database, and it could be any database. I would say that currently the Cassandra plugin for the journal is probably the most production-ready; there's also a very good Kafka-based one, and there are a couple of local ones, which are not that interesting for me, because if I want to build a reactive system, I really have to build a distributed system, and therefore I think a distributed database is the best option. But anyway, there's the journal abstraction: when the journal comes back to us and acknowledges that the event has been persisted, then and only then do we apply the event to the actor's state. That's the whole idea — it's pretty simple.
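The core of event sourcing can be written down without any framework at all: state is the left fold of all events over the initial state, and the same apply function serves both persisting and recovery. A minimal sketch with illustrative names:

```scala
sealed trait UserEvent
final case class UserAdded(username: String)   extends UserEvent
final case class UserRemoved(username: String) extends UserEvent

object EventSourcingDemo extends App {
  // Applying one event to the state — used both when an event has just
  // been persisted and when the journal is replayed on recovery.
  def applyEvent(users: Set[String], event: UserEvent): Set[String] =
    event match {
      case UserAdded(name)   => users + name
      case UserRemoved(name) => users - name
    }

  // The "journal": the full history of state changes, never overwritten.
  val journal = Vector(UserAdded("ann"), UserAdded("bob"), UserRemoved("ann"))

  // Recovery = replaying all events over the empty initial state.
  val recovered = journal.foldLeft(Set.empty[String])(applyEvent)
  println(recovered) // Set(bob)
}
```

Snapshots, mentioned below, are just a cached intermediate fold result, so that recovery does not have to start from the very first event.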
And whenever a persistent actor gets started, all the events are replayed, so it ends up in the state it was in before. There's so much more to say — you can do time travel by not replaying all the events, and there are snapshots to avoid huge recovery delays, because if you have to replay millions and millions of events, that might just take too long. There's a lot I don't have time for, so I just want to give you a quick glance at how you would write that code.

Let's take a look at the user repository. As you can see, we are extending PersistentActor — that's the trait we have to extend — and a persistent actor has a persistenceId, which has to be stable, because it identifies the events for this particular actor in the journal. Then it has two receive methods. One is called receiveCommand, which is for the normal messages — I would say it replaces receive. If there's a command which is not related to persistence at all, like GetUsers, we can just handle it like we did before. If we get a persistence-relevant command like AddUser or RemoveUser — AddUser is handled here in the addUser method — we first validate the command: can we add this user? If the users already contain the username, we cannot, so that's an invalid command and we send back UsernameTaken. Otherwise we invoke the persist method, which tells the journal to take this UserAdded event and store it, and only when the journal gets back to us does it invoke the callback. In this callback we do two things: first we change the state by invoking receiveRecover — receiveRecover is also the method that is invoked during recovery, which is how we guarantee getting into the same state again on recovery — and after that we do side effects like logging and sending messages. It's important to understand that only after successful persistence do we apply the internal state changes. That's the pattern.
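That pattern could be sketched as follows, assuming Akka Persistence; the message types are illustrative stand-ins for the talk's actual protocol:

```scala
import akka.persistence.PersistentActor

case object GetUsers
final case class Users(users: Set[String])
final case class AddUser(username: String)
final case class UserAdded(username: String)
final case class UsernameTaken(username: String)

class UserRepository extends PersistentActor {
  private var users = Set.empty[String]

  // Must be stable: identifies this actor's events in the journal
  override val persistenceId = "user-repository"

  override def receiveCommand = {
    case GetUsers =>
      sender() ! Users(users)
    case AddUser(username) if users.contains(username) =>
      sender() ! UsernameTaken(username)   // invalid command, nothing persisted
    case AddUser(username) =>
      persist(UserAdded(username)) { userAdded =>
        receiveRecover(userAdded)          // 1. apply the state change
        sender() ! userAdded               // 2. only then do side effects
      }
  }

  // Also invoked for every replayed event on recovery
  override def receiveRecover = {
    case UserAdded(username) => users += username
  }
}
```

Reusing `receiveRecover` inside the persist callback is what keeps the live state transition and the recovery transition identical.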
Once we have that in place, we can give it a try. Let me first remove the currently running container. I also have to start Cassandra — or some journal; I'm using Cassandra here. I have to check the log, because Cassandra was a little slow in starting — now it looks good, it has started. Now let me run gabbler-user again — looks good. OK, curl the users: empty. Now let's add a user — OK, added, looks good. We can now just remove the container — let's simply crash it — start it again, and see what happens if we try to get the users again. Hooray, the user has survived the crash, because we have used a persistent actor. So that's already getting us closer to a really reactive microservice, but there's more work to do.

What do we need next? I already mentioned that if we really want a reactive system, we need to distribute it — we need multiple nodes — so we want to use Akka Remoting and Akka Cluster. In Docker you have so-called networks that isolate containers: within a network, containers can talk to each other, and if you want to communicate externally, you need to publish ports. If you go for a multi-host deployment, I highly recommend using overlay networks, as supported by Docker and others, because then you have something like a virtual network where those containers can talk to each other, and you don't have to distinguish between a bind address and a published address. Akka Remoting has a feature where you can have a bind address and a published address, which is great if you cannot use overlay networks — but with an overlay network you just don't have to worry; you don't need that nice feature.

OK, so in order to become reactive with a cluster, we need to use the cluster actor ref provider. Akka is distributed by default, so everything should work in a distributed setting; most of what you have to do is just configuration.
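That configuration could look roughly like this in application.conf — a minimal sketch for the Akka 2.4 era; hostnames and ports here are assumptions:

```hocon
akka {
  actor {
    # Cluster-aware provider, so actor refs can point to remote nodes
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    netty.tcp {
      hostname = "0.0.0.0"   # bind address inside the container
      port     = 2552
    }
  }
  # No static cluster.seed-nodes here: in the Docker setup described next,
  # peers are discovered dynamically instead.
}
```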
One thing to note: in order to form a cluster, there's an approach using so-called seed nodes, but in the Docker case the container addresses are not known up front, so you cannot use seed nodes like you would in a more static setting. There's a library called ConstructR, which I have authored, that helps with bootstrapping a cluster: if you have a coordination service like etcd, ZooKeeper, or Consul, ConstructR uses it to look up the existing nodes of the cluster, and the corner cases of bootstrapping are hopefully all covered. I don't have the time to really show you the details of the state machine, but anyway, that library helps with constructing an Akka cluster.

With that in place, we can now use one or more of the high-level modules which Akka Cluster gives you. One is the cluster tools library, which has the Cluster Singleton implementation — that is for running exactly one instance of an actor in the whole cluster. Well, not exactly one: "at most one" is more precise. There's a ClusterSingletonManager, which manages the lifecycle of this singleton actor: if it goes away for whatever reason, it is started again on an existing node. So you really don't know where this actor is living, and therefore you need the ClusterSingletonProxy to talk to your singleton actor. Let me show you how to do that. In the class with the main method, where we just create an actor system and then a root actor which does all the other things, we create the user repository actor not directly as a child of the root, like we did before, but instead via the ClusterSingletonManager, which is started on every node and therefore, via communication with its peers, knows where to create the singleton. It's usually on the oldest node, and only if that node goes away will it create a new one.
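Wiring that up could be sketched like this, assuming the Akka 2.4 cluster-tools API; actor and path names are illustrative:

```scala
import akka.actor.{ ActorSystem, PoisonPill, Props }
import akka.cluster.singleton.{
  ClusterSingletonManager, ClusterSingletonManagerSettings,
  ClusterSingletonProxy, ClusterSingletonProxySettings
}

val system = ActorSystem("gabbler-user")

// Started on every node; the managers agree among themselves on which
// node (usually the oldest) actually runs the singleton.
system.actorOf(
  ClusterSingletonManager.props(
    Props[UserRepository],
    terminationMessage = PoisonPill,
    ClusterSingletonManagerSettings(system)
  ),
  "user-repository"
)

// The proxy routes messages to wherever the singleton currently lives.
val userRepository = system.actorOf(
  ClusterSingletonProxy.props(
    singletonManagerPath = "/user/user-repository",
    ClusterSingletonProxySettings(system)
  ),
  "user-repository-proxy"
)
```

All other actors talk only to the proxy, so a singleton moving between nodes is invisible to them.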
OK, let's see how that works. What we can do now is run gabbler-user with the default port and also with another port, and what that hopefully gives us is that we can access the users on either one. Oh — that's no good. docker logs gabbler-user — ah, I see: I mentioned this nice little library to bootstrap the cluster, ConstructR, and remember, it needs a coordination service, so I have to run etcd. OK, let me do that exercise again: we have Cassandra, we have etcd, and now I'm starting gabbler-user again. Now I have really high confidence that it is working — yes, great, this one is working. Let's create another one and curl it — hooray. We can call either the one or the other, and both of these running services will give us that one user, and that is because the user repository is really only running once in the cluster. The oldest node was the first one, so I can kill one of the nodes, and the system is still available, as you can see here. Cluster Singleton helps with that.

And now we come to a library and a pattern which is really neat for event-based collaboration of services. What we have seen so far was a single service, and as I said: one microservice is no microservice. We need those services to collaborate, and I think events are a nice way to do that. I only came to appreciate server-sent events maybe half a year ago, because usually they are just used for pushing events to the browser. It's a standardized API, and it's supported by almost every reasonable browser. It's a very, very simple protocol, completely based on HTTP: you do a GET, and the server responds with a response which is kept open — it's not closed — so it's almost like long polling, but standardized. The content type has to be text/event-stream, and the events themselves are really simple newline-based, UTF-8 text. So that is a neat way to let services collaborate.
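Because the wire format is so simple, it can be rendered with plain string handling. A minimal sketch (the real akka-sse `ServerSentEvent` has more fields; this illustrative version covers only `data` and `id`):

```scala
// text/event-stream format: "field: value" lines, blank line between events
final case class ServerSentEvent(data: String, id: Option[String] = None) {
  def render: String = {
    val idLine = id.fold("")(i => s"id: $i\n")
    idLine + data.split("\n").map(l => s"data: $l").mkString("", "\n", "\n\n")
  }
}

object SseDemo extends App {
  print(ServerSentEvent("""{"username":"ann"}""", id = Some("42")).render)
  // id: 42
  // data: {"username":"ann"}
  //
}
```

The `id` line is what makes the Last-Event-ID resumption described below possible: the client echoes the last `id` it saw back to the server on reconnect.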
What I want to show you next is how to implement a way to publish user events: when we create a new user, or remove a user, that needs to be published to interested parties — to other services — and that can be done using server-sent events. So let me show you. Here is the user API; let me just compile it — the compiles are getting faster later on. Our route, shown here, now not only contains the users route, it also contains a users events route, and this route is a little more complicated, because in the server-sent events specification the client can send a header called Last-Event-ID, which the server should treat as an instruction to only start pushing events from that event on — and it's optional, so we have to deal with that. What we are doing here is this: we ask the user repository to get the user events from the sequence number defined by the Last-Event-ID. We map the result to user events — a user event is just another element of our domain model, as you can see here — and then we map the user events to ServerSentEvent, which is essentially a case class provided by the akka-sse library. With that in place we can complete the request — so what we are really doing is completing the request with a source of server-sent events. If you look at the type here — oh, it's a future — OK, that's fine too: we can complete requests with futures, as you have seen before. And when we map to ServerSentEvent, akka-sse takes care of the marshalling and of treating those events as needed when they arrive.

OK, so let's check that out. It has been built; I run it — what's that? ConstructR is not creating the system? OK, I will just abort everything. Huh, those pesky live demos. Let me run etcd, let me run Cassandra — and maybe this state of the example is broken, so I will just move forward to the next state, which I have tested before, so I know it's working.
What we are getting at now is two services collaborating, and I will show you how such an event stream looks. The basic idea is the following: you have two services, and the user events come from the user service. The service which is consuming them should just run in a loop and consume the events as they arrive, and if the connection breaks, or is closed for whatever reason, it just reconnects, using this Last-Event-ID. So any client of the user service interested in user events would collect those events, extract the data it needs, and store that data in its own database — this is the idea of autonomy, right? A service needs to own its data. In the case of the chat service, we now have a second service here, and the chat service also has a repository for users: it takes care of storing the users, but you cannot add users to the chat service directly — this is done by consuming the user events from the user service. As you can see here, the user repository of the chat service has a URI which it uses to connect to the user service and get the events. This is abstracted away in the server-sent events client, which does all the heavy lifting with reconnection if the connection is dropped. It's quite interesting stuff, built with a graph — so it's not a simple flow, not a linear processing pipeline, but Akka Streams used as a cyclic graph. You can take a look if you're interested, but to make it easy to use, I have abstracted it away in this simple server-sent events client.
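The resume-on-reconnect behavior boils down to filtering by sequence number. A framework-free sketch with illustrative names:

```scala
// The client remembers the id of the last event it has seen; on (re)connect
// the server only replays events with a higher sequence number.
final case class UserEvent(seqNo: Long, data: String)

object ResumeDemo extends App {
  val journal = Vector(
    UserEvent(1, "ann added"),
    UserEvent(2, "bob added"),
    UserEvent(3, "ann removed")
  )

  def eventsFrom(journal: Vector[UserEvent],
                 lastEventId: Option[Long]): Vector[UserEvent] = {
    val from = lastEventId.getOrElse(0L)   // no header: replay from the start
    journal.filter(_.seqNo > from)
  }

  println(eventsFrom(journal, None).map(_.seqNo))     // Vector(1, 2, 3)
  println(eventsFrom(journal, Some(2L)).map(_.seqNo)) // Vector(3)
}
```

Because replay is idempotent from any position, a consumer that crashes mid-stream loses nothing: it reconnects with its last id and continues exactly where it left off.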
What I'm going to show now is the following: I start both of those services, so I run gabbler-user and I also run gabbler-chat. Oh, I have to build them first — hmm, maybe that was my mistake before, I don't know — so let me run docker:publishLocal. OK, now both images should have been built, so I run the user service and the chat service. My script doesn't remove the old container, that's bad — OK. docker logs gabbler-chat: up and running, that looks good. gabbler-user: up and running, that looks good. So let's follow the chat service here — it's listening to events from the user service, which means if we add a user to the user service... Currently we have zero users, because we restarted Cassandra. Now add one — added — and can you see it here? That was pretty quick, huh? This is the chat service, and it added the user with the username which had been added to the other service. Let me do that again. It is quick, but still, if you look up here, it takes some time — like two or three seconds — so it's asynchronous collaboration using events.

And that brings me to the end of the talk. Everything is available online — in particular the sample application I have shown you — under my GitHub account, hseeberger. I think we have time for questions. Are there any questions? We have a mic; I can repeat the question.

Question: I have a question about the different services being autonomous. Do you create a cluster for each of those in Akka clustering, or do you have a shared cluster with multiple services in it?

That's a very good question, and I didn't explain that. Each service is autonomous and could — should — have its own cluster; because it's autonomous, it should not be in the same cluster. In Akka Cluster, membership depends on the name of the actor system, so if you name them differently, like gabbler-user and gabbler-chat, they cannot be in the same cluster anyway, and I think that's a good thing — that's what you want, right? Each service is autonomous, deployed to one, two, three, ten nodes, and that's it.

More questions?
Question: How would you suggest dealing with serialization of the user model between the services? You send the user model from one side to the other — so, with regard to serialization: how would you deal with handling the user objects, which you defined in the one application, in the other application, and how would you share that definition?

That's a really good question, and it's a really hard problem, because even if you solve it once, things might change. If you think about serialization of data — either to Akka Persistence or to another service that consumes it — you might run into versioning problems: if you change your object and therefore change the serialization format, you need to think about that up front. It's very important, and I think the way to treat it is the way we always treat incompatible changes. There can be compatible changes in JSON — if you just add a field, that should not break a consumer that doesn't even know the field — but if you rename a field, that's a breaking change, and then you should probably offer a second API version, v2, v3, whatever. You would then have to support two versions of your API, at least for some time; in this case, for the user event stream, there would be two versions — two streams — one with the old and one with the new format, if they are not compatible. Technically that's easy to do, but for product lifecycle management it's a hard thing and should be considered up front.

Follow-up: So would you recommend tools like Protocol Buffers or Thrift, to abstract away from the versioning?

Maybe — I don't have a final answer to that, but definitely tools like that, or others. By the way, Akka does offer help with this, in particular in Akka Persistence, where you really want to be able to restore old events from last year, or maybe from ten years ago: there's an adapter layer in Akka Persistence which you can, and probably should, use.
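The upcasting idea behind that adapter layer can be illustrated framework-free: old events are converted to the current version on read, so the actor only ever sees current events. All types here are illustrative:

```scala
// Version 1 of the event, as stored in the journal years ago
final case class UserAddedV1(username: String)
// Current version: an additive change introduced a nickname field
final case class UserAddedV2(username: String, nickname: String)

object EventAdapterDemo extends App {
  // On read, upcast V1 events by supplying a default for the new field —
  // the kind of backward-compatible change discussed above.
  def fromJournal(event: Any): UserAddedV2 =
    event match {
      case UserAddedV1(name) => UserAddedV2(name, nickname = name)
      case e: UserAddedV2    => e
    }

  println(fromJournal(UserAddedV1("ann")))          // UserAddedV2(ann,ann)
  println(fromJournal(UserAddedV2("bob", "bobby"))) // UserAddedV2(bob,bobby)
}
```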
All right, we are running out of time. Thanks — you can catch me down at the booth or somewhere. Enjoy the rest of the conference, and thanks!
Info
Channel: Scala Days Conferences
Views: 6,976
Rating: 4.9012346 out of 5
Keywords: Heiko Seeberger, Reactive, Microservices, Akka, Docker, Reactive Microservices, scala, scaladays, scala days, scaladays berlin, scala days berlin, heiko, seeberger, conference, konferenz, session, talk, IT
Id: nL4XoH2_Lew
Channel Id: undefined
Length: 45min 43sec (2743 seconds)
Published: Fri Jul 22 2016