Arun Gupta - Refactor your Java EE application using Microservices and Containers

Captions
Good afternoon. Let's play that again: good afternoon! Thank you, all right. Well, my name is Arun Gupta, and I'm glad you chose my session to attend at the conference today. I know there are a lot of parallel tracks running, so thank you very much for attending my talk. We'll talk about how you refactor your existing Java EE application using microservices and containers. There are a lot of buzzwords here, so my goal is to hopefully put some positioning around what we think microservices are, and how you take an existing Java EE application and run it as microservices, or refactor it into microservices. Now, once you are running them as microservices, there are all these different applications running, so how can containers fit into that entire picture? That's the thought process here. I'm a practitioner myself, I like writing code, so everything I'm going to talk about is based upon code experience, so that you can learn from it as well. I work for Red Hat; I'm the director for developer outreach. There are lots of ways to reach me: my Twitter handle, my blog. Literally ten minutes ago I started writing a blog post where I will not only share the slides but also write a bit more of my thought process on what I think we should do around microservices, or you can shoot me an email at arungupta@redhat.com.

Let's take a look at an existing monolith application. By monolith, what I really mean is an application where all of the functionality is packaged into one application. Think of a movieplex application: say you're running a cinema with multiple movie theaters; you need to book a ticket, add a movie, accrue points on the movie tickets, and so on and so forth. That's a significant chunk of functionality, and you can package all of it together in one war file. That's about it. And for each functionality you have a UI layer, a database layer, a middle tier, and you package all these different layers together. So here is my UI, here is my database persistence layer, here is my business logic. There are different ways we accomplish these, but a classic way, specifically in the Java EE land, is: you have HTML/CSS pages, which could be XHTML if you're using JSF, for instance; typical database interaction is using JPA, or you might be using Hibernate for that sake; in terms of business logic, again lots of different varieties. What I'm showing you is a very classical Java EE structure of what these monolith applications look like. But for the container, or for the WildFly application server, which is a Java EE 7 compliant application server, this is one war file; it doesn't care what's inside it. It says: let me take it and run it as a single application, and that's what makes it a monolith.

Now, you can scale this application: you can run multiple instances of it; WildFly can be configured to run in a cluster, or JBoss EAP can be run in a cluster, so you can run it that way, front-ended by a load balancer. A very, very classical scenario. That war file may not necessarily be a war file; it could be an ear file, the alternative packaging. So remember, I'm using yellow for war and grey for ear, and by the black border around it I mean the boundary, or the scope, of my archive, whether it is an ear file or a war file. Same thing: I have an ear file, I could have multiple instances of it, front-ended by a load balancer, each with its own cache and its own database. Very classical, very simple, dumbed-down architecture.

There are several advantages to this architecture; we have all been using it for a few years now. Your monolith application is typically packaged in a single ear or a single war file, and that makes it very, very IDE friendly: tools like NetBeans, Eclipse, and IntelliJ all understand, oh, this is an ear file, I know how to generate this from a Maven
archetype, etc. All those are available; the IDE generates it, you can edit it, you can debug it, lots of advantages like that. It's very easy to test, because once that war file or ear file is up, everything that is part of that archive, all the services that you need, so to say, are up: your persistence context is loaded, your EJB back end is up, maybe with lazy loading, but at least it's up, or when you ask for it, it's going to be up; you know that for sure. That's the nature of the monolith application: it may take some time to load, but once the archive is up, everything is ready to run. And these are very simple to develop. "Simple" is an overrated word, but at least we know what the issues with building these monolith applications are.

Okay, let's take an example: what are the issues with this classical monolith application? Well, I have a simple application; version 1 is live; version 2 comes out, where I change a certain component of my application. But because this is a monolith, I have to rebuild my entire archive and drop that entire archive into the application server. There comes version 2, even though my UI, my database, none of that has changed, or maybe I just changed one functional component of the app; still, I need to redeploy my entire war file. Version 3 comes; I may change something in the UI layer, or again just one functional component, but you still have to redeploy the entire war file once again. If anything changes in a monolith application, there is no way for you to say: oh, by the way, just redeploy only that particular part of it. Nope, you're going to deploy the entire war file, and that's when the new version goes live.

So with that, what are the disadvantages of a monolith application? Well, they're difficult to deploy and maintain. God forbid if you are using Spring: your war file is going to be a hundred megabytes, which means it's going to take a while to load. With Java EE we have skinny wars at least, so the war file gets loaded faster. But if your war file loading time is big, it's going to take a lot longer to have more frequent deployments of your war file. In an agile world you want rapid, more frequent deployments, but if you have to redeploy your war file, go for a coffee, come back, and make sure it is up, that's not going to work. The other one, which I talked about, is the obstacle to frequent deployments: if you are doing a CI build, if you are running a deployment pipeline, so to say, you want to make sure the entire deployment pipeline runs in a few minutes; but if a few minutes is just your deployment to one test environment, it's not going to work, it's not going to scale, it's going to be a big obstacle.

The other significant part of a monolith application is that when you design it, say as a Java EE application, you are stuck with that technology stack or framework, whatever you chose: Spring, or whatever application server you chose. Even though I say, yes, Java EE is standards-based, eventually you start getting into the proprietary APIs, descriptors, management, all those things, and you build your infrastructure around it. What that means is: you have been building your application over three years, this new latest-greatest technology comes up, and I can't change, because my DevOps guy is not going to let me do it, or my ops guy will not support it, things like that. So that makes it really hard to try new technology. You might be the expert at Java EE, but if you want to try out a piece of functionality in a different stack, you first have to extract that functionality out of your monolith and then go from there. So those are some of the issues with a
classical monolith application.

There's this beautiful book called The Art of Scalability. This book talks about how there are three axes on which software can scale: an x-axis, a y-axis, and a z-axis. The x-axis is horizontal duplication: you want to scale your application, so you just run multiple instances of it; this is what we have been doing all along. The z-axis is sharding, say database sharding: if a request comes from Europe, I'm going to use this particular database server; if it comes from the US, I'm going to use this one, and so on and so forth. That allows your application to scale to an extent. The most interesting part here is the y-axis split, which is functional decomposition: you start functionally decomposing your application. We'll talk about that in a second, but that's the focus of the talk today. If I were to describe microservices in just a few words, I would use two: functional decomposition. Functional decomposition of a monolith application, to me, is the essence; everything else follows from there. That's the 50,000-feet view of microservices for me: functional decomposition. Then you start digging deeper, 45,000 feet, 40,000 feet, and so on, peeling the onion and getting more details.

Martin Fowler is the guy who sort of coined the term, and he's been talking about what microservices are really all about. He says a microservice architecture is basically a suite of services: instead of building one big application, build it as a suite of services that are all independently deployable and scalable. Once you have a service functionally decomposed, hold onto that thought.
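The z-axis (sharding) split mentioned above can be sketched as a simple region-based router. This is a minimal, hypothetical sketch: the shard hostnames and region keys are made up, and a real deployment would hold connection pools rather than strings.

```java
import java.util.Map;

// Hypothetical sketch of z-axis scaling: route each request to a database
// shard based on a request attribute (here, the caller's region).
public class ShardRouter {
    // Illustrative shard hostnames; not real servers.
    private static final Map<String, String> SHARDS = Map.of(
            "EU", "db-eu.example.com",
            "US", "db-us.example.com");

    static String shardFor(String region) {
        // Regions without a dedicated shard fall back to a default database.
        return SHARDS.getOrDefault(region, "db-default.example.com");
    }

    public static void main(String[] args) {
        System.out.println(shardFor("EU"));   // European traffic
        System.out.println(shardFor("APAC")); // no dedicated shard: default
    }
}
```

The point is only that the routing decision is a property of the request, not of the code being scaled.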
Once functionally decomposed, each service can scale independently on the x-axis and z-axis (not the y-axis, because the y-axis is the functional decomposition itself). Each service has very clearly defined interfaces; because each service is running independently, they have defined interfaces, like REST-based services. They can use whatever programming language they want, and each service is typically managed by a different team. So let's drill down on each of these points and see what they mean in our context.

This is not the mantra where every couple of years a new technology comes up and we say: now I'm going to refactor everything using REST, now I'm going to do everything using SOAP, now I'm going to use WebSocket for everything. Don't think microservices are the golden bullet or the panacea, that everything you are doing you should just drop and jump onto microservices. Not true. So instead of just the brief definition we looked at, let's look at the characteristics of what a microservice is.

Well, first of all, it's domain-driven design, and that's what I mean by functional decomposition of your application. Go back to the movieplex example: I have a booking functionality, an add-a-movie functionality, an accrue-the-points functionality. You define what your domain is, and there are very well defined principles of domain-driven design that apply here. Now, the typical way teams are distributed is: this is a UI team, this is a database team, this is a middleware team; they each focus on their area and define interfaces between them. Here, because you're doing domain-driven design, you're going with a full-stack development team. By full stack I mean that in that functionally decomposed team, say eight to ten people, you define what your UI layer is going to look like, what your middle tier is going to look like, what your persistence layer is going to look like. You have the full stack, and we'll talk more about that in a second.

Each service is very explicitly published on an interface. The most common way of publishing that interface is a REST endpoint: you say, here is my REST endpoint where my service is available. And it has a bounded context, in the sense that you define: this is the interface, this is the payload, and this is how I'm going to communicate with you. In this payload I can expect either JSON or XML or YAML, whatever the data format is. We talked briefly about how each service is independently deployable and automated: you can take down a service, bring up a service, upgrade a service, extend it, all independently.

These services are truly decentralized. If we think in terms of Java EE, these are, for example, multiple war files running around your network using an IPC mechanism. IPC is a really old term, but effectively what we are saying is we use something like REST endpoints to communicate between them: these services are communicating with each other using REST.

Another key concept we talked about is how you are not tied to a particular framework. Because you are using a predefined interface, say a REST endpoint, the other service consuming the interface really should not care about the implementation-level details. So today I might be using Spring; if I don't like it, I can just get rid of it and migrate the complete thing to Java EE 7, as long as I maintain the contract, the interface, and the payloads.
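A published REST interface with a JSON payload can be sketched with just the JDK's built-in HTTP server. This is a hypothetical booking endpoint, and the payload fields are made up; a real Java EE 7 service would publish this with JAX-RS annotations instead.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Sketch of a service whose bounded context is exposed only through a REST
// endpoint and a JSON payload; consumers never see the implementation.
public class BookingEndpoint {
    static String fetchBooking() throws Exception {
        // Bind an ephemeral port and publish one resource of the contract.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/bookings/42", exchange -> {
            // The payload shape is the published part of the contract.
            byte[] body = "{\"bookingId\":42,\"status\":\"confirmed\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        try {
            // A consuming service depends only on the endpoint and payload.
            HttpRequest req = HttpRequest.newBuilder(URI.create(
                    "http://localhost:" + server.getAddress().getPort()
                    + "/bookings/42")).build();
            return HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString()).body();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchBooking());
    }
}
```

Swapping the implementation behind `/bookings/42` (Spring, Java EE, anything else) is invisible to the consumer as long as the endpoint and payload stay the same.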
Because effectively, to the other services running alongside it, that contract is what matters; everything else should be internal to that team. So tomorrow, when a new technology comes up, you have the ability to try that new stack for your particular service.

Another key part, and this sort of summarizes all of these, is how critical the REST interface is. Back in the SOA days... you know, another name for microservices is "SOA for hipsters" or "SOA 2.0"; people call it all sorts of names. There were some fundamental reasons why SOA did not really get broader adoption in the developer community; it was more on the ops side. But microservices are gaining big traction in the developer community. With SOA came the concept of the enterprise service bus, which applied complex algorithms and logic on how to transform the data and move it from one point to another. In this case, the endpoints have the business logic and the pipes are dumb, as dumb as HTTP: we just send the data over it, it transmits the data to the other side, and the endpoint that generates or consumes the payload understands it and deals with it.

Another very critical point: look at Netflix. At a given point in time in the US, say Friday or Saturday night, 33 percent of the Internet traffic is Netflix. That's a lot of traffic, a lot of bandwidth. About two years ago I read an article (I re-read it recently, but the data is older): on a given day Netflix gets two billion calls through its APIs, from partners, etc., and each of those calls gets translated to six to twelve calls within Netflix itself. Netflix is running entirely on Amazon; they have three zones, all three hot. Imagine running so many billions of calls on Amazon: something is bound to fail, it's 100 percent bound to fail. So that's where they say fault tolerance is a requirement, not a feature. You have to build resiliency into your services: if you are building microservices, what if the service you try to connect to is not responding? Is it timing out? There are different patterns I will talk about in a second, but Hystrix, for example, is a library that allows you to build resiliency very nicely into your microservice architecture.

I don't know how many of you know Uncle Ben from Spider-Man. Well, he made this excellent comment: with great power comes great responsibility. Let's go back to the classic use case of Netflix; Netflix is a very classic story for microservices in that sense. At Netflix, any developer is free to push out any feature at any point of time; it's completely up to them. What that means is it's a self-provisioning environment: they can go to the internal Netflix dashboard and say, provision 100 Amazon instances here, push my feature over there; then they do canary testing, then A/B testing, and then they go live. It's completely self-provisioning. But they know that if the feature breaks, they are going to get the call, and they have to fix it; they are responsible for maintaining it. The key concept being: you build it, you run it.
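The fault-tolerance idea above, which libraries like Hystrix package up properly, is essentially: bound every remote call with a timeout and fall back to a degraded default instead of letting one slow dependency cascade. A JDK-only sketch of just that idea (the "slow service" and the fallback value are made up):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Minimal sketch of timeout-plus-fallback resiliency. Real systems would
// add circuit breaking, bulkheads, and metrics (what Hystrix provides).
public class ResilientCall {
    static String fetchRecommendations() {
        return CompletableFuture.supplyAsync(() -> {
            // Simulate a dependency that has stopped responding promptly.
            try { TimeUnit.SECONDS.sleep(2); } catch (InterruptedException ignored) { }
            return "personalized recommendations";
        })
        // If the answer doesn't arrive in time, serve a degraded default.
        .completeOnTimeout("generic top-10 list", 100, TimeUnit.MILLISECONDS)
        .join();
    }

    public static void main(String[] args) {
        // The simulated service is too slow, so the fallback wins.
        System.out.println(fetchRecommendations());
    }
}
```

The caller always gets an answer quickly; whether it is the real one or the degraded one is an implementation detail of the resilient service.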
Now, if you build it knowing there's an ops guy sitting on the other side who's going to take care of it, that you're not going to get a call at 3 a.m., you'll be less careful about it. But if you know you might get that call, you will be damn careful, because you don't want to get up at 3:00 in the morning, even if you don't mind the ops guy getting up at 3 in the morning. So I think this is a very fundamental requirement for microservices. If you think about it, the concepts of microservices, DevOps, and containers are all linked with each other, but we're going to focus on the microservice aspect of it.

Another principle which is very, very important for microservices is Conway's law, which says your organization will design applications that mirror its communication structure. This is sort of the biggest fundamental change required to get ready for microservices. I talk to customers all the time; I was talking to a customer in Berlin earlier this week, and they are trying to take an existing monolith application and convert it into microservices. Their teams are classically organized as UI, database, and middle tier, and that's how their applications were built. So far so good, but they felt the need for microservices because they want flexibility, more frequent deployments, things like that. So they literally had to change the team structure: nope, you are not a UI team anymore. They literally had to go through an entire organizational restructuring in order to adopt a microservices architecture. I asked them: in converting a monolith to microservices, what was the biggest challenge? And they said that restructuring was the biggest challenge; technology and everything else is a detail. They had to completely reshuffle their entire organization.

So what are the strategies? I mean, you have a monolith: how do you decompose it? Do you just put each function in a separate class and call that your microservice? Not true. At least based upon our experience, there are some well-defined patterns, well-designed architectures, which have been tested to some extent. Let's take a look at them. As I said, functional decomposition is my two-word definition of microservices, and the very important part is you look at it and say: this is going to be my checkout functionality, this is my product functionality, this is my catalog functionality, and so on. So you look at that aspect and say: this is my product microservice, this is my catalog microservice, this is my checkout microservice, and then they may talk to each other. You can name them using a noun, as I said: catalog, product, service.

UNIX utilities are a beautiful example, something we have been using forever, of how microservices should be designed. Think of ls: the ls command follows the single responsibility principle, which means it does one thing, and it does that one thing damn well. I know the options are sometimes very confusing, lots of options, but if you type ls, it will show you the directory listing. If you do cp, if you do mv: each of those UNIX utilities does one thing, and it does that one thing very well. Now you can start piping them together, and that's when you are effectively making those microservices work with each other. So I think they are a very classical example of how microservices should be designed.

So how do we start moving towards microservices then? We talked about an ear file with a bunch of war files and jar files in it, with its own cache and database. As I said, if you want to go back with one message, think about functional decomposition; that's the message to take home. Now, here in my ear file, which is scaling and all that, I would recognize: because an ear has multiple war files, I can recognize that this particular war file makes sense on its own.
Actually, all it is, is basically a bunch of REST endpoints, so I'm going to extract it and put it outside the ear file. These are possibly hosted in the same WildFly or JBoss EAP instance, and that's fine to begin with. Now the advantage: even though these two are possibly running in the same EAP or JBoss instance, you can scale them independently. This one will possibly have its own cache and database, because it's a REST endpoint, possibly pushing out some data as a REST service. In terms of the load balancer, the main application might be calling these services directly, or you may even expose the service directly to the load balancer itself.

The second pattern we see: let's say there are three services running once you have done the functional decomposition (and I'll talk about functional decomposition using a live example that I have been playing with for the past few days). Say you have functionally decomposed into service A, service B, service C; each will optionally have its own cache and database. Another big challenge we typically see when talking to customers migrating their monolith to microservices: in a monolith architecture, the database is all common, one big lump of a database. When you're going microservices, the strong recommendation is that each service is completely independent and has its own database. So how do you refactor your database? How do you normalize or denormalize a database like that? It's not going to be easy, and if you do need to denormalize, then you have to start replicating data, so your overall data size grows. I think that's another challenge we have seen in terms of refactoring.

But anyway, once you have services A, B, and C, each with their own (possibly optional) cache and database, your load balancer, where your client comes in, could talk to an aggregator service, and all that aggregator is saying is: I understand these are the three services that you need to talk to. Those services could be an internal detail; to the client, all you expose is the aggregator service. Now, that aggregator by itself could scale as well. So that's one microservice architecture design pattern that we expect to see, and I'll show some samples of that.

The other one is the proxy pattern, where each service is again scaling independently on the x- or z-axis, with its own cache and database, and all you are doing is setting up a proxy in front of these services. In this case it's not really aggregating; in the previous case it was pulling the response from all three. Here you are calling the load balancer, and this is like your sharding case: I'm going to call this service because it serves the Europe zone, this service serves the US zone, this one takes care of the rest of the world. So you start proxying based upon a certain parameter.

Another pattern we have seen is called the chain pattern. Here your client goes to the load balancer, which talks to service A; there are services B and C, and service A talks to B, which talks to C, and then the result is accumulated. From the client's perspective, again, your interface is service A; B and C could be internal details, or they might be exposed directly to the load balancer as well, if somebody wants to call them. Just a design pattern.

Another one is the branch pattern, where you call service A and then, based upon some business logic there, you could call either service B, which is running in its own environment, or services C and B, which are again in a chain pattern. Again, very standard practices that we see happening. Now remember, in this case each service has its own independent data, though you could possibly have shared data here, depending upon whether the services are running in the same container or not.
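The aggregator pattern above can be sketched with stand-in services: the client calls one aggregator, which fans out to services A, B, and C and combines their replies. The three suppliers below are hypothetical placeholders for real REST calls, and the payloads are made up.

```java
import java.util.List;
import java.util.function.Supplier;

// Sketch of the aggregator pattern: the individual services stay an
// internal detail; the client only ever sees the aggregated response.
public class Aggregator {
    static String aggregate(List<Supplier<String>> services) {
        StringBuilder combined = new StringBuilder();
        for (Supplier<String> service : services) {
            // In reality each get() would be an independent REST call,
            // ideally with the timeout-and-fallback guards discussed earlier.
            combined.append(service.get()).append(';');
        }
        return combined.toString();
    }

    public static void main(String[] args) {
        System.out.println(aggregate(List.of(
                () -> "catalog:42 movies",  // stand-in for service A
                () -> "showtimes:7pm,9pm",  // stand-in for service B
                () -> "points:120")));      // stand-in for service C
    }
}
```

Since the aggregator is itself just another service, it can be replicated on the x-axis behind the load balancer like any other.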
Here you have the shared data between the two. Another design practice we see: so far, HTTP was the messaging mechanism in the previous design patterns, and HTTP inherently is synchronous. That synchronous nature is going to be a blocking factor, so to work around it you start using asynchronous messaging. Here the client calls the load balancer, which calls service A, and then all these different services generate and consume messages from a queue, so you are using a messaging pattern there.

We have talked a lot about monoliths, but this talk is about microservices, so let's see what the advantages of microservices are. Well, they're easier to develop, understand, and maintain. Amazon has this culture of the two-pizza team: a team should be small enough that it can be fed by two pizzas. Either the guys need to eat less or the team should be small; the recommendation is anywhere from 8 to 10 people, and the size of the pizza could matter as well. I love the pizzas here, very thin crust, love it. So the advantage of microservices: because it's a two-pizza team, you're looking at 8 to 10 people in a full-stack team owning those services, and you will hopefully have better communication and better coordination among those developers, so the services are much easier to develop and maintain. They definitely start faster than a monolith: as we said, a monolith could be a much bigger war file; here it could be much smaller, and who knows, it may not even be a war file, it could be a completely different technology, since we are not tied to a technology. A local change can be very easily deployed. We are not going into the concepts of continuous deployment here, but this is definitely a great enabler of continuous deployment.
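The asynchronous-messaging pattern described above can be sketched JDK-only, with a BlockingQueue standing in for the message broker (in Java EE this would be JMS; the event payload is made up):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of asynchronous messaging between services: the producer publishes
// an event and moves on without blocking on the consumer, unlike a
// synchronous HTTP call.
public class MessagingSketch {
    static String roundTrip() throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        // "Service A" publishes a booking event and continues its own work.
        Thread producer = new Thread(() -> queue.offer("booking-created:42"));
        producer.start();
        producer.join();

        // "Service B" consumes the event whenever it is ready.
        return queue.take();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("consumed " + roundTrip());
    }
}
```

The decoupling is the point: neither side holds an open connection to the other, so a slow consumer does not block the producer.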
service based architecture each service can scale on x-axis and this should be z axis because y axis is functional decomposition that's a typo here so each service can scale on x axis which is you know you run it multiple instances of container in your environment or z axis which basically says or z axis we say oh you know what now I'm going to do sharding so each service is independent of that it definitely helps you improve fault isolation how many times you see you know you go to a JSF page you see exception track trace and this NIC for level of nesting that's happening it becomes nasty you know it is to me it becomes nasty and very hard to debug it as I just throw the application and lock it all over again in this case because you are using a micro service you know where the error is coming from hopefully the code this is not that big so it will simplifies or at least somewhat simplify on how you can do the fault isolation and the key part is because you are focused on an interface on how and what the context needs to be you are not really committed to a technology stack you know I know it's not a easy task but at least with eight to ten people imagine a big monolith application built by 100 people as opposed to you know a micro service built by eight to ten people and other changes you can imagine yourself is much easier to implement in a micro service based architecture another key part no as I said you know is just because you're building a monolith you know the whole team has to tie into a particular technology stack in this case that's not a requirement you know it doesn't mean that every time you take you know you see a square peg and you try to jam it into the round hole now yeah so refactoring yeah is fun but just the fact that your monolith is like a spaghetti doesn't mean you know you can actually start refactoring it into a well design or micro service itself so there are certain fundamentals that are required for a well-designed monolith itself make 
Make sure you don't have 500 classes in a package, things like that; make sure the packages themselves are functionally decomposed. That's what allows you to start refactoring your application into a microservice-based architecture. I talked to a customer once, a few years back, whose entire website was one JSP. The entire MVC in one JSP? It's possible; he did it: just jammed the Java code in there, did the transactions in there, some 50,000 lines of JSP, and boom, that's it. That's a big monolith; imagine refactoring that into microservices. That's going to be a lot of challenge.

So let's take a look at a classic example. Here is a Java EE 7 application: the Java EE 7 hands-on lab that has been given all around the world. This is what I designed when I was at Oracle, about two and a half years ago. It's a movieplex application; look at the functional aspects of it: user interface, show booking, add/delete movie, ticket sales, movie points, chat room, etc. They all talk to the same database. Okay, that's a very classical monolith application, and it's actually packaged as a war file. In terms of interfaces, we are using JSF and JavaScript in the front end, JSON for REST, and all sorts of technologies: EJB, Batch, JMS, WebSocket, and JPA for all the back-end connections. A very typical Java EE application. This could run on GlassFish or on WildFly, but it won't run on JBoss EAP, because that is not Java EE 7 compliant yet. The way I would package this application: remember, yellow was the war color, so I package it in a war file containing web pages, classes, and some configuration files; again, a very boilerplate war file.

So what I'm recommending, and I think one part we need to understand, is that refactoring your existing application into microservices has to be an evolutionary architecture. You can't say: I'm going to throw away my existing application, start brand new, and build a microservice-based architecture from scratch. Yes, you can do that if you have the time, resources, and money. I don't; my customers don't. They are always looking for an evolutionary approach. So here is what I am recommending, and hear me out: this is the first time I'm presenting these slides and my thought process. Take the war and convert it into an ear. What you're doing is refactoring your web pages, classes, and config files into functionally decomposed modules. Now remember, a war file is one big monolith, and an ear is a bigger monolith as well, but in an ear you have lots of different wars, which are possibly functionally decomposed by themselves. You will still have some jar files, which are possibly shared across them, and you will still have a web module with web pages and configuration files, but that's the first step that I have started taking.

Now, how do I take a war and refactor it into an ear? Well, here is the existing war file structure; ignore the red circles and the black arrows for now (I literally built this slide). At the top, what you're seeing in the red circle are all the web pages of this war file. In WEB-INF/classes I have all my packages, and as I was saying earlier, these are functionally decomposed packages already: rest, JSON, entities, client. And then I have a template.xhtml. That's my very classical war file. Now, when I convert this into an ear file, all my web pages, or at least the entry-level web pages, go into my movieplex web module, that's its war file, so I put them in there.
the batch classes and the batch pages. Because they're functionally decomposed, and remember each app has to function by itself, the batch classes and the batch pages go together in one war file. Similarly, the booking classes and booking web pages go into the corresponding war file. So I've got different war files which are already functionally decomposed. Then there is template.xhtml. In Java EE 7 terminology we have this concept of resource library contracts; what that gives you is the ability to apply the same set of templates across your war files. It's basically a library in a predefined layout. So I bundle this template and my corresponding CSS files (which are not shown here) together in a contracts jar file, and I put them up here. My entities are here, my JSON utilities are here, and I put them in lib so that they are accessible by all the war files. So that's my first step: I take a war file and refactor it into an ear file. Now, at any point you're welcome to skip a step and, for example, jump straight to stage 3, where I take the ear and convert it back into war files. Remember, the ear is still running as one unit: it is still deployed in my WildFly container, I still have the common database, and there is still some sharing going on between these files. And now I am saying: by the way, run them as separate war files, maybe in the same WildFly container, because they are already functionally decomposed. Test it out; start deploying them as independent war files. In my monolith I had, say, index.xhtml as the entry point to the interface, and as I said, all the services are up, all the pages are up, all the CSS files are up, all the classes are accessible, so I can use any class that I want. But when I started refactoring, I hit the biggest challenge.
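As a sketch of what that ear stage might look like on disk, here is a hypothetical application.xml deployment descriptor. The module names, context roots, and jar names are made up for illustration; they are not taken from the actual MoviePlex lab:

```xml
<!-- Hypothetical application.xml for the refactored ear: one war per
     functionally decomposed service, shared jars under lib/ -->
<application xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="7">
  <display-name>movieplex</display-name>
  <module>
    <web>
      <web-uri>movieplex-web.war</web-uri>
      <context-root>/</context-root>
    </web>
  </module>
  <module>
    <web>
      <web-uri>booking.war</web-uri>
      <context-root>/booking</context-root>
    </web>
  </module>
  <module>
    <web>
      <web-uri>batch.war</web-uri>
      <context-root>/batch</context-root>
    </web>
  </module>
  <!-- entities, JSON utilities, and the resource library contracts jar
       live in lib/ and are visible to every war in the ear -->
  <library-directory>lib</library-directory>
</application>
```

The point of the library-directory entry is exactly the sharing described above: the entities and utility jars stay in one place but remain on the classpath of every war module.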
The biggest challenge for me was: oh, you know what, this class is now living in a different service, so I need to define a contract for it. I can't just use a class directly anymore, because it's living in a different microservice. Or entities: entities need to be used by all of these services, so where do I place them? They need to be accessible by all the classes. Should I bundle the entities into all the war files, or keep them in one place and make them accessible? Questions like those started coming along. So at this point I have my ear file, and now I go splitting it back into war files, each of them potentially running as an independent war file, and then my interface is basically composing all of these different services together. From here, what you can do is say: instead of taking WildFly as an application server and running everything in the same application server, I could easily create, for example, a Docker image (I'll talk about Docker in a second). The war file is living somewhere in my Nexus repository; from the Nexus repository I grab that war file, I take a pre-built image of WildFly (jboss/wildfly), I drop the war file in there, and that's how I compose my application. So these are really independently, functionally decomposed services running in their own containers, and here I'm even showing my database running as a Docker container, for instance. And once you have reached that level, each of your Docker containers can independently scale: on the x-axis, or the z-axis (sharding), or horizontal scaling, whatever you want. Now, what are the concerns with microservices? People ask: does it really mean pushing the complexity around? Yes, it does, because effectively all that complexity of the monolith is still there once you have functionally decomposed the app. But remember, we were
looking at how I have to make sure which jar sits where, and there is some code which has to be duplicated as well. Effectively, the plumbing that you're adding to coordinate those microservices becomes confusing; that's where you're pushing the complexity around. So what we need in that case is what is called NoOps. NoOps is a movement which was started around 2012-2013, more like 2013-ish, and these are some of the aspects required for a microservice to be successful. For example, service replication: who's going to handle it? Yes, the service is running in a container, but does that mean I'm going to write my own script to monitor the container, and if the container goes down, should I bounce it? Well, that's where things like Kubernetes, which is a project by Google and heavily contributed to by Red Hat, are very useful; it allows you to schedule your containers very easily. How about dependency resolution? How am I going to resolve where my service is living, in other words service discovery? Nexus is one way you can do dependency resolution, but this should really be more like service discovery. We talked about how failover has to be a key feature of your microservice architecture. That's where, if you have read the book Release It!, it talks about the circuit breaker pattern. Go back to the Netflix use case we were talking about: thousands of servers running across the world on AWS. What if you're pinging a service and you just wait until you get a response? Can you do that? No. That's when you have to build in the circuit breaker pattern. A circuit breaker is basically like the circuit breakers in our homes: if the voltage goes too high or too low, the circuit trips, and that's what you're going to do over here. There are
libraries available in the Netflix open-source GitHub repositories which allow you to implement the circuit breaker pattern. If you try to ping a service and it is timing out, you increase the time after which you're going to ping the service again, and keep increasing it; once you have pinged the service and it is responding again, you reset the state back. So that's a cool aspect of failover and resiliency: Hystrix allows you to define how your code can be more resilient. If this particular service fails, is there a failover mechanism, somewhere else I can go? If this service is not available, is there a default page that comes back so the app still appears to work? I was talking to a friend at Amazon, and I asked him: when you guys went to the microservice-based architecture, what is the one thing you would really invest in? And he said: monitoring, alerting, Logstash, and things like that, because that's what allows us to identify how our entire application is doing. Monolith or not, this is where the biggest investment is, and that's what they would recommend people do. I'm going to skip this graphic. So microservices are not a panacea. As I said, they definitely add complexity, because now you're running a distributed system, so libraries like Hystrix and the other Netflix open-source libraries, resiliency, failover, all of those need to be taken into consideration. It does require significant operational complexity, so a very high level of automation is very, very important, because if it's not automated then you cannot do your continuous deployment and things like that. With a monolith, when your monolith comes up, everything is up; it takes a while, but everything is up and ready to go. But with microservices, what if a service comes up with a new version?
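The trip-and-back-off behavior described above can be sketched in plain Java. This is a minimal, illustrative circuit breaker, not the actual Hystrix API; the class and method names are made up for the example:

```java
// Minimal circuit breaker sketch: trips to OPEN after a threshold of
// consecutive failures, then allows a trial request only after a
// back-off window has elapsed ("half-open" behavior).
public class CircuitBreaker {
    enum State { CLOSED, OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private long openedAt = 0;
    private final int threshold;          // failures before tripping
    private final long retryAfterMillis;  // back-off before a trial call

    public CircuitBreaker(int threshold, long retryAfterMillis) {
        this.threshold = threshold;
        this.retryAfterMillis = retryAfterMillis;
    }

    // Ask the breaker before calling the remote service.
    public synchronized boolean allowRequest() {
        if (state == State.OPEN) {
            // After the back-off window, let one trial request through.
            return System.currentTimeMillis() - openedAt >= retryAfterMillis;
        }
        return true;
    }

    // Report a successful call: reset the breaker to CLOSED.
    public synchronized void recordSuccess() {
        failures = 0;
        state = State.CLOSED;
    }

    // Report a failed call: trip to OPEN once the threshold is reached.
    public synchronized void recordFailure() {
        failures++;
        if (failures >= threshold) {
            state = State.OPEN;
            openedAt = System.currentTimeMillis();
        }
    }
}
```

In production you would use Hystrix (or a similar library) rather than rolling your own, since it also handles thread pools, fallbacks, and the metrics feeding those monitoring dashboards.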
What if you want that new version to be picked up by the other services? Some sort of manual coordination, or tool coordination, or Jenkins coordination is required; a deployment pipeline needs to be built which will allow you to coordinate deployment across those different services. Now, microservices: when you start building a brand new application, and this is sort of a catch-22, should you right away start with microservices? Probably not, because there's a lot of infrastructure investment required: understanding those libraries, using those libraries, and making sure you start seeing the benefit of them. You won't see the benefit of microservices until you have a few services really collaborating with each other. So then you say: never mind, I'm going to start with a monolith application. You start building a monolith, and then your monolith becomes too big, and you say: now I have to refactor it into microservices. I think that's the catch-22 you have to think about yourself: where are you going to get that investment from? There is a slower ROI to begin with, specifically in the early days, when you are investing in building up that operational capability, automation, and things like that. So, we talked about running those services in containers; how do I see this running? Well, the way I see it, each service could very easily run in a container. There are a couple of container technologies that have prominence: Docker and Rocket are the two prominent ones, and Docker is definitely the more mature one for now. So what is Docker? Well, it's an open-source project and a company. You can go to github.com/docker/docker and see the number of stars, about 20,000, and the number of forks, about 4,000, so
it's a pretty popular project. It's basically used to create containers for software applications. Now, Java gave us WORA, write once, run anywhere; what Docker gives us is PODA, package once, deploy anywhere, because you can package your application once using Docker, publish the image to a registry somewhere, and anybody can download that image and run it exactly the way you wanted it to run. The underlying technology is of course mostly written in Go; this is a snapshot of the language breakdown, and you can see almost 90% is Go. It uses several Linux features which were heavily contributed to, actually designed, by Red Hat. In Linux there is a concept of namespaces; just like Java packages, it gives you isolation, so containers cannot interfere with each other, and that's what Docker uses to provide that isolation. Linux also has this concept of control groups (cgroups), which lets you say how much memory and how many hardware resources can be utilized. What Docker exposes is that when you run a container, you can say: I don't want this to be a resource hog, limit it to, say, 128 megabytes of RAM, or half a gig, and it's never going to go beyond that. Union file system is a very key concept. It's a standard Linux technology: you start with the very core operating system, which is a read-only file system, so you cannot write on it. When you want to write something, you write a layer on top of it; you do another operation, you write another layer on top of that. Effectively you have multiple layers, and those layers are joined together using the union file system, so that for the user you get one single view of the file system, but that's the internal detail of how it works. And libcontainer: these are the
four core technologies; libcontainer defines the container runtime format. Now, there are three key concepts in Docker, and the first is building a container. The way you run a container is by defining everything that you want to do in a Dockerfile. It's basically a text file, and in it you define what commands are required: what operating system I want, what JDK I want, what app server I want, where my war file is. Docker is also very well suited for microservices because a container has only one entry point; just like the single responsibility principle, it does one thing. This is my most trivial Dockerfile: all I'm saying is FROM fedora:latest, then CMD echo "hello world". That's your hello-world Dockerfile. If you have your Docker engine running on a Linux box and you say docker run, it's going to download the fedora:latest image and run that command, and that's it; it's just a hello world, and it runs really fast. If you want to go a little bit more advanced, as I said, there is a concept of derivative images: you can define base images, which are now very heavily used for all sorts of purposes. This is the jboss/wildfly base image, and all I'm doing on top of it is running one command. By default, jboss/wildfly starts up WildFly for me, but here I also add a RUN instruction that curls a war file into a particular directory, /opt/jboss/wildfly/standalone/deployments. So all I'm doing is saying: start up WildFly and copy this war file into the deployments directory, which means the app is going to be automatically deployed for me. Literally those two lines. And the jboss/wildfly image is itself a derivative of another image, and another image, and so on.
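The two-line derived image described above looks roughly like this as a Dockerfile. The war file URL is a placeholder for wherever your Nexus repository lives, not the actual lab artifact:

```dockerfile
# Hypothetical Dockerfile deriving from the jboss/wildfly base image.
FROM jboss/wildfly

# Fetch the war into WildFly's hot-deployment directory so it is
# deployed automatically when the container starts.
RUN curl -L -o /opt/jboss/wildfly/standalone/deployments/movieplex.war \
    http://nexus.example.com/repository/movieplex.war
```

You would then build and run it with something like `docker build -t movieplex .` and `docker run -p 8080:8080 movieplex`; the cgroups limits mentioned earlier are applied at run time, e.g. `docker run -m 128m movieplex`.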
So the fundamental idea of Docker is that you can build on those derivatives, using base images very effectively. So what are the advantages of containers? Several, actually. Faster deployments: the fundamental difference between containers and virtual machines is huge, and I'll explain it in a moment, but containers really start in a matter of a couple of seconds, as opposed to a VM which takes a few minutes to boot up. Because of the isolation, the namespaces and similar features available in Linux, containers are fully isolated and cannot interfere with each other; if one container goes rogue, it doesn't mean another container will also go rogue, because they are completely separate in that sense. A typical problem: you tell somebody "hey, my test is failing" or "my code works", and they say "well, it works on my machine, I don't know what the environment issue is". With Docker you can actually replicate the exact environment, because in your Dockerfile you can say: this is the exact operating system, this is the exact JDK, this is the exact application server, this is the exact Nexus tag for the Maven artifact or the war file, and you can reproduce it. So it gives you that replication of environments; no more "it works on my machine". With Docker, because the images are maintained in a repository, you can start tagging them, and you can fall back to a particular version of an image, saying: OK, I'm going to download that particular image instead of the latest one. So there's a concept of snapshotting available. I was talking to a customer, I think three days back, who had to test their functionality against multiple database servers. Now, installing a database server is quite a task if you have to run it against MySQL, and Postgres, and three versions of Postgres,
and then HSQLDB, and so on and so forth. So all they did was quickly create those sandboxes: hey, here are these seven Docker images, just fire up the images, run the tests, and throw them away. Then they know it works against each Docker image, and they know exactly which versions it was tested against. That was one big advantage for them. I talked about how Docker allows us to limit resource usage; in a microservice world that is going to be very critical, because you don't want one microservice to hog everything. With the Dockerfile you can clearly define what the dependency mechanism is going to look like. Sharing is very easy: once you create an image, you can push it to Docker Hub, and then anybody can download your image; they're all publicly available anyway. So there are some pros to Docker. It gives you extreme application portability, since you can take a container and share it with others. It's very easy to create your own derivatives and work with them; for example, we looked at jboss/wildfly and how easy it is to derive from that image and add your own functionality. And it's a very fast-booting container, so it really allows you to simplify your entire development cycle. However, Docker is a very host-centric solution: you can run multiple containers very easily on a single host, but that's it. Docker now has Docker Swarm, which they announced at DockerCon last November, but it's still a very host-centric solution, so that's a single point of failure, and that's not going to work for me. You're spinning up all these different Docker containers, and you need something on top of that to do higher-level provisioning; that's missing from Docker itself (again, Docker Swarm's goal is to fulfill that, and we'll see how it evolves). And with Docker there is no user tracking and reporting. So those are the
missing pieces in Docker itself. So meet Kubernetes. Kubernetes is an open-source orchestration, or scheduling, system for Docker containers. Today it is designed to work with Docker; hopefully in the future it will work with other container technologies as well, but Docker is the primary container technology anyway. The coolest thing about Kubernetes, which I like, is how it allows you to specify the desired state. How many times have you managed a server where you say: I need three instances of my application server running, so I'm going to write my own monitoring scripts, and if a server goes down I'm going to bounce it, or if an additional server is spun up I'm going to kill it? That's the beauty of Kubernetes: you tell Kubernetes "I want three instances of my server" and Kubernetes does all the dirty work for you. It makes sure there are exactly three instances: if there is an additional one for some reason, it kills it; if one is missing, it spins up an extra one for you. That's what makes Kubernetes so powerful: you are just defining the desired state, and Kubernetes maintains it for you. That's what we call self-healing in Kubernetes language; it automatically restarts containers as well. It works across multiple hosts, so you can start a Kubernetes cluster which runs across multiple hosts, and it works with multiple VM providers: you could run it on, for instance, Amazon, Google Compute Engine, VMware, Vagrant, you name it, all sorts of providers. And it allows you to replicate those containers as well. Now I'm going to quickly go through the concepts, because I want to show the key part of how we see these microservices being composed with each other. A pod is a co-located group of containers. Without getting into too much detail, the design pattern that we see more and more people using is that typically
a pod is one container; in certain cases you may bundle containers together, for example a web server along with something that pulls data from the file system for that web server. Those two are logically related; the idea is that they share a volume and an IP address. Now, the IP address of the pod itself is ephemeral: the pod is assigned an IP address, but Kubernetes reserves the right, if a pod goes down, to start it up on a different host and assign it a completely different IP address. That makes the pod itself really unaddressable for anybody else in the cluster, and that's why you put a service in front of the pods. So you define a service, and the service also acts as a load balancer: you can have multiple pods front-ended by one service, which has an IP address that is fixed and reusable within the Kubernetes cluster. Now, typically you would not create a single pod by yourself. Instead (and this felt like a redundant concept to me at first, but bear with me) you create a replication controller. In the replication controller you define what your pod recipe is and how many replicas you want. For example, your replication controller could say four replicas of a pod, with the image defined as a cookie-cutter recipe, so it spins up four instances, and then you front-end them with a service. And which pod belongs to which service, and all that, is maintained using labels. So quickly, in terms of Kubernetes, the pros: it allows you to manage related Docker containers as a single unit, it allows communication across multiple hosts, and it allows automatic or manual scaling of pods, so it gives you scalability, monitoring, and all of that within the Kubernetes cluster. However, it is missing several things for me as a Java developer.
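The replication controller plus service setup described above can be sketched in Kubernetes configuration. The names, labels, and image are made up for illustration (this follows the v1-era API, so treat it as a hedged sketch rather than a definitive manifest):

```yaml
# Hypothetical replication controller: "I want 4 replicas of this pod".
apiVersion: v1
kind: ReplicationController
metadata:
  name: booking-rc
spec:
  replicas: 4               # desired state; Kubernetes keeps exactly 4 running
  selector:
    app: booking            # manages every pod carrying this label
  template:                 # the cookie-cutter pod recipe
    metadata:
      labels:
        app: booking
    spec:
      containers:
      - name: booking
        image: example/booking-wildfly   # placeholder image name
        ports:
        - containerPort: 8080
---
# The service: the fixed, load-balancing address in front of the pods.
apiVersion: v1
kind: Service
metadata:
  name: booking-service
spec:
  selector:
    app: booking            # the label selector ties the service to the pods
  ports:
  - port: 8080
```

Note how labels do all the wiring: the controller and the service never name individual pods, they just select on `app: booking`, which is why Kubernetes can replace a dead pod anywhere in the cluster without breaking the service.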
Remember, we talked about taking my existing Java EE application and refactoring it into microservices; I care about the Java EE application. How do I take that Java EE application and do a full build, deploy, manage, promote, without worrying about the fact that underneath it might be a Kubernetes and Docker thing? You go figure out the details, because Kubernetes requires extensive configuration files: am I supposed to write them, or is somebody going to generate them for me? Otherwise that's not going to work. How do I port my existing Maven projects into Kubernetes configuration? How do I do my DevOps, and how do I do my multi-tenancy? None of this is available in Kubernetes today. So take my typical application, functionally decomposed (well, this example is not truly functionally decomposed, I shouldn't say that, but imagine): from a Kubernetes perspective I have a database running, a Java EE service running, a messaging service running, and a web service running. Those are all services backed by pods; I am showing only two pods each, but there could definitely be multiple. That grouping of services and pods is what we define as an application in OpenShift terms. OpenShift is Red Hat's open-source PaaS platform, and in OpenShift we define this concept as an OpenShift application. OpenShift v3, which is scheduled to launch later this year, uses Docker as the container technology and Kubernetes as the orchestration framework. At the bottom of OpenShift v3 we use RHEL 7.1 and Atomic; Atomic is designed to run containers as first-class citizens. On top of that, the container technology is Docker, cluster management is Kubernetes, and that's where OpenShift provides the user experience: it says, hey, you need full tooling support, you need the utilities which will consume the Maven project and generate an application which can then talk to the Kubernetes configurations and
all those things. So OpenShift v3 will provide the entire experience along those lines. That's pretty much my last slide, and I have about a minute more. Effectively, from the JBoss side, from the Red Hat perspective, we have a whole variety of tools that allow you to build your microservices: whether it's Eclipse, which provides integrated tooling with OpenShift; whether it's JBoss EAP, which provides a first-class, wonderful platform for Java EE applications; whether it's mobile applications; whether it's Apache Camel for integration; whether you want to replicate your data; whether it's persistence, testing, or a polyglot environment using Vert.x. So I think we offer a whole variety of tools from the Red Hat perspective that should make your life easy. I think that's about my time; it says 5:20 here. Is there a session after this? We can keep chatting, otherwise... well, I guess that's the end of it. I'm going to be outside in the hallway, so if you have any questions I'll be happy to take them. Thank you so much.
Info
Channel: Codemotion
Views: 44,026
Rating: 4.7797356 out of 5
Keywords: Arun Gupta, Codemotion, Codemotion Rome, Codemotion Rome 2015, Code Refactoring (Software Genre), Java EE Application, Microservices, Containers
Id: Jogdz6gvodU
Length: 60min 26sec (3626 seconds)
Published: Wed Apr 22 2015