Kubernetes: Your Next Java Application Server by Burr Sutter

Video Statistics and Information

Captions
Okay, we're ready to get started. We're gonna have a little fun in this session; hopefully that's what the t-shirt thing was about, for those of you walking in late wondering what the hell I'm talking about. I know, you've got to be here on time. We're gonna talk about Kubernetes; hopefully you guys are excited about Kubernetes. By show of hands, if you don't mind: how many people have put hands on Docker at this point, tried some Docker stuff? The majority of you, okay; at this point, several years into it, it should be that way. How many people have hands-on Kubernetes at this point? All right, a fair number of you. How many people, though, have actually been an app server developer? How many WebLogic people out there, used WebLogic in their history? Okay, a fair number of you. How many people used WebSphere in your history? A few more WebSphere than WebLogic, interesting. How about JBoss? All right, I'm an ex-JBoss guy, and technically still a JBoss guy, as that's part of Red Hat, so I do love you app server people. What I'm gonna be trying to do is show you how Kubernetes is essentially your new app server; that is the point of this presentation, but we're gonna have some fun with it. I have three incredibly complicated demos, because I tend to go for really complicated demonstrations, which might fail spectacularly. If you remember, a couple of years ago we did some really awesome stuff like finger-painting from your phone and balloon popping like Fruit Ninja; we're gonna do something not quite as elaborate as that, but it'll still require you to get your phone out and help me out a little bit.

One thing is we're gonna give away a Chromebook based on a raffle, based on your tweets: if you mention @burrsutter, you have a nice photo, and of course you have the #Devoxx hashtag, we'll actually select from those. It's just a random selection, and whoever the software picks gets the Chromebook, which is here on the table with me. You must be present to win, because otherwise I can't hand it to you, so for anyone outside the room it won't actually work. There are also a bunch of free ebooks and other things that you might want, so keep that in mind: free things from a microservices perspective, reactive microservices, and also a deep-dive Istio tutorial that could take you about eight hours to go through. I'll give you a little taste of that here, just to give you some perspective; what would normally take you, say, eight hours on your own, we're going to show you in a few minutes. There's also a great deep-dive book on that topic you might want to check out; it's a little bit dated at this point, it's based on an older version of Istio, and myself and Christian are trying to get it updated right now, so obviously I'm here, not doing that. If you did come to Devoxx a couple of years ago, you might have seen Edson Yanaga's great presentation on how to deal with a monolithic database in a microservice architecture; we have a free e-book on that topic, a very, very popular book, because everyone has a monolithic database, right? Not everybody's using all kinds of cool NoSQL databases; they've still got the big honkin' Oracle, the big honkin' DB2 or SQL Server out there.

Now let's talk about Kubernetes and app servers. For people coming from the Kubernetes community, they're like, why is he comparing us to app servers, those things aren't so cool anymore?
Well, I'm here to tell you that stuff is still pretty cool, but I'm going to show you how they compare and contrast and what you get from one platform versus another. I am counting on the fact that most of you already know Docker; I'm counting on the fact that most of you already know app servers; and we're going to show you the basics of Kubernetes. Now, the app server definition you see in Wikipedia looks like this one: an application server is a software framework that provides both the facilities to create web applications and a server environment to run them. You get a set of components based on standardized APIs that implement services like clustering, failover, and load balancing, so developers can focus on business logic. That's a pretty reasonable definition, would you guys agree? You can say yes, it's okay to say yes. Are you getting out your phone, though, and sending out those tweets? By the way, you can say things that are not nice too; I'm okay with that. But here's the key point about this: APIs, clustering, and developers. Think about those three things for a second; we can boil it down to those three things. The purpose of the application server, and I started with application servers twenty-plus years ago, was to give you some APIs you could write your business logic against, give you some form of clustering and failover and load balancing capability so your software stood up and stayed up, and let developers go about doing their job of building business applications. That was the whole point of application servers.

All right, so let me show you a little bit of Istio real quick, or rather Istio and Kubernetes, because you've got to get to Kubernetes first. I showed a little bit of this demonstration on Monday, but I'll go a little bit deeper here based on something that's kind of fun. I have a simple little application running in my Kubernetes environment, called OpenShift, back here; OpenShift is Red Hat's distribution of Kubernetes. That's not the application we're looking at; we're looking at this one: we have customer, preference, and recommendation, three little services all connected together. In this case that's a Spring Boot, Spring Boot, and a Vert.x application, but it doesn't matter what the implementation is; we have several examples of that for different implementations. Let me look at my recommendation service. I'm gonna put in a different name and a different version here, I'll just put that right there, since he was helping me throw t-shirts, and then we're gonna basically change the logging message here to, all right, let's put a v2 in there, so it's super obvious. This is just a simple Vert.x application, by the way; it basically looks like Node.js, if you remember that programming model, but it's based on the JVM. I did a deep-dive presentation on this two years ago, and it's still my favorite way to quickly build a little Java-based application, because I really love this declarative router capability you get with it; it's also super fast and super lightweight. Let me do mvn clean compile package. I made a code change using Visual Studio Code right there, and we get that new fat jar being created. So there is our recommendation.jar, and with true fat jars, right, you just say java -jar recommendation.jar and it starts super fast. Let me see what it looks like. And if you think, by the way, that I talk too fast, this is kind of how I always talk; I apologize for that.
Well, maybe we'll slow down a notch; no, we won't, we don't have time. Okay, so there's my little application; it looks pretty good from the Java perspective. You build it, you test it, you make a change to it, but what I need to do now is get it ready for going into my production environment, my Kubernetes environment. The one thing I've got to do is build a Docker image, right? So let me look at my Docker images; let's see if I'm connected correctly, because I've been changing things here, so it could be messed up. All right, there are the three Docker images that represent customer, preference, and recommendation. I'm gonna do a docker build, -t example/recommendation, in this case, not customer, I've got to spell everything correctly, v2, and dot, and we get a new Docker image. You can see right there it built very quickly, because I've already downloaded all the underlying layers, but now I have my Docker image built. I can also do a quick docker run -it, -p 8080 mapped to 8080, and example/recommendation:v2, lowercase v2, and I think that's right, let's try that. Okay, there we go, it's deployed, and in this case, because I'm running in a virtual machine which is running my Minishift, my Kubernetes environment, I need to see the IP address; it's going to be that IP address right there, and I can curl it on 8080. All right, so it looks like it's nicely containerized now. The good news is I have a little Java application, it was easy to containerize, I threw it into a Docker image, and now we're ready to deploy it into Kubernetes land.

So if I go over here and watch kubectl get pods, if I'm in the right place, you can see I have these three pods already running: customer, preference, and recommendation as individual pods, which include that Docker container, and of course each also has another sidecar container in there; that's what the 2/2 means, and we'll explain that a little bit more in a second. You can see I've had lots of restarts on this example, because I've restarted it and killed it and restarted it so many times, but this is what's actually running: those three pods are customer, preference, and recommendation. Is that cool so far? Okay, what we need now is to actually look at our deployments. If I do a kubectl get deployments, we can see that there is in fact a customer, a preference, and a recommendation deployment, and that of course is the governing entity; it basically says my declarative state for Kubernetes is to have a customer, a preference, and a recommendation up and running. So if I say kubectl describe deployment, and we'll go look at customer as an example, you can actually see what's inside there; let's go up a little bit higher. You will see that there's my customer image right there, example/customer, so that's what I want to run, the thing that I built for customer. You can see what ports it exposes, which includes ports for monitoring as well as the 8080 that I need for my end user, and the liveness probe and readiness probe, which are super critical. We're gonna show you an advanced example a little bit later that takes full advantage of those liveness probes and readiness probes, but keep in mind that you need to have them set in place so that you can ensure your application is in fact up and a rolling update is successful. You can also see we actually set things like Xmx here, because, guess what, by default the JVM will often eat all of its memory, especially with Java 8, and blow up.
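For reference, the probe and memory settings he's pointing at in that describe output look roughly like the snippet below. This is a minimal sketch based on what he describes, not his actual tutorial file; the image name, probe paths, delays, memory values, and the JAVA_OPTIONS environment variable name are all assumptions.

# Sketch of a Deployment with the liveness/readiness probes and memory
# settings described above (illustrative values, not the tutorial's file).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customer
  template:
    metadata:
      labels:
        app: customer
    spec:
      containers:
      - name: customer
        image: example/customer:v1
        ports:
        - containerPort: 8080
        # Readiness gates the pod into the Service's load-balancing pool;
        # liveness restarts the container if it stops responding.
        readinessProbe:
          httpGet:
            path: /health    # assumed path
            port: 8080
          initialDelaySeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 20
        # Cap the JVM heap and the container memory so cgroups don't kill it.
        env:
        - name: JAVA_OPTIONS          # hypothetical env var name
          value: "-Xms128m -Xmx256m"
        resources:
          limits:
            memory: 400Mi

The point of setting both is that the heap cap and the container limit agree; as he explains next, it's the kernel's cgroup limit, not Docker or Kubernetes itself, that kills an over-consuming JVM.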
So when you start messing around with Java in a containerized environment, cgroups may be the thing that actually bites you. It's not actually Docker, it's not actually Kubernetes; it's cgroups, part of the core Linux kernel, that will shut that thing down for using too much memory. So you can see we specifically set the memory we know we need. And then of course we have this little extra thing that was added, called the Envoy sidecar. This is a feature of Istio; Istio layers on top of Kubernetes and gives you some additional capabilities. Right now all we're seeing is generic stuff; we're just showing the easy stuff.

But let me go and deploy that Vert.x application into Kubernetes. I'm just gonna copy and paste this line; again, everything here is fully documented, like I said, you can spend eight hours or more going through our tutorial. There we go, I've got that deployed. Watch the pods down here: I now have a new pod coming up, see, it's 1/2, and we have to wait for it to go to 2/2. But watch what happens up top here when it goes to 2/2: basically, by default, you get load balancing for free. As I mentioned, if you think of our old-school app servers, we have APIs, we have clustering, and we have a developer experience; in this case the clustering is out of the box. I don't have to think about scaling the application per se; I just simply say, here's my code, you, mister server, run it for me at scale. In this case it has scaled up based on the fact that I have now deployed a second recommendation, the v2 version, into production, and now it's load balancing for free. That by itself is straight-up plain old Kubernetes; that is just pretty normal stuff. We showed you this four years ago when we first introduced Kubernetes to the world at large, and people were like, that is amazing: I get load balancing for free, I get essentially a clustering model for free, you get some failover for free, and it's essentially what we had with app servers back in the day, even though Vert.x is certainly not an application server in the traditional sense. And it works for anything you can put inside a pod: maybe your .NET application, maybe your Python application, maybe your Node.js application; you get all this out of the box.

Anyway, there is our little application. By the way, when I mentioned developer experience: to deploy a simple application, I could have also just come in here, you can see it right here, recommendation one or two, and easily used the user interface, said Browse Catalog and picked something: I want a new MySQL, I want a new CakePHP, I want a new Node.js application; I can just follow a wizard as well. I'm kind of showing you the hard way, more the standard way if you will, but there are also user interface capabilities that basically say: load it right out of GitHub, set it up with webhooks, so that every time you change something in GitHub it automatically changes within the runtime environment. We're gonna ignore that for now, but this set of pods is up and running. Notice also, out of the box, because I have Istio installed, I get monitoring for free. You can see this is in fact the tutorial namespace, pod customer, in this case service customer, and you can see it's performing really well: there are no 500 responses, all 200 responses.
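The "load balancing for free" he keeps coming back to is plain Kubernetes: both recommendation Deployments label their pods with the same app label, and a single Service selects on that label, so traffic is spread across every matching pod regardless of version. A minimal sketch, with names assumed from the tutorial's conventions rather than copied from his files:

# One Service fronts every recommendation pod, v1 and v2 alike.
apiVersion: v1
kind: Service
metadata:
  name: recommendation
spec:
  selector:
    app: recommendation      # matches pods from recommendation-v1 AND recommendation-v2
  ports:
  - port: 8080
    targetPort: 8080
---
# Each version is its own Deployment; only the version label and image differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendation-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: recommendation
      version: v2
  template:
    metadata:
      labels:
        app: recommendation
        version: v2
    spec:
      containers:
      - name: recommendation
        image: example/recommendation:v2
        ports:
        - containerPort: 8080

Because the Service only cares about the app label, deploying a new versioned Deployment is enough to put it into the pool; the version label only starts to matter once Istio subsets come into play later.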
There's another thing you should be aware of, and that is this thing called Kiali. Kiali is a service graph that we've implemented at Red Hat and donated to the upstream Istio community, so you can watch your transactions and see how they flow. We have traffic coming in from my curl command, going in through customer, preference, and recommendation v1 and v2, so you get a visual feel for how the traffic is flowing throughout the application. And of course, if you want more details on those traces, we have tracing in here also; let's see what that looks like. Yep, here's our tracing, and I can show you things about the performance of each of those endpoints within the application. So the concept of Jaeger-based tracing, Grafana-based monitoring built on Prometheus, the Kiali-based service graph, as well as other application health and well-being details; you can see I've got a bunch of things running on this particular cluster, and all of that is part of this infrastructure. Again, if you are familiar with the application servers of old, WebLogic, WebSphere, JBoss, they did these things too: they allowed you to manage things at great scale so developers could go about just building the applications they wanted to build.

But let's show you some fun stuff; that was the easy stuff, the stuff that's just out of the box. We have this little application running, and you can see it's going back and forth between this version 2, which has a count of 91, and this version 1, with a count of 877, still going along. Now I want to change it up a bit, so I'm going to move this around: kubectl get pods. All right, there are our four pods that are running, and I'm gonna come in here now and do this; oops, not that one. I have a bunch of scripts to make this a little bit easier on myself, so we go a little bit faster. I'm gonna basically scale up: I'm gonna say I now want two of version 2 running. It's just a declarative state that you want: I want two of those things running, because my application needs to be more HA, right, needs to be highly available, and it scaled up simply because I declared I wanted two. You can see the second one spinning up right now, and now it has come online, and you notice it's now part of the load balancer; this is just a standard curl command, so you can see right there that new version 2. You can tell by the incrementing number, and more importantly, if you look at that string right there, that is the computer name that Java application thinks it's running on. So this is the hostname that Java thinks it's running on, and it actually maps to this pod name, this pod identifier here: you can see that's the pod name, and this is the pod name, and you can see the hostname here. If we look back at that Java code real quick, you can see hostname; it's basically doing a System getenv, getOrDefault on HOSTNAME, and that's pretty straightforward. The cool thing is it thinks it's running on some computer; it's unaware that all this magic is happening around it, and that's where this pod concept comes in. So now I have one third of my traffic going to version 1 and two-thirds of my traffic going to version 2, and that's kind of cool by default.
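The scale-up he just did is nothing more than changing the Deployment's desired replica count and letting Kubernetes converge on it. A minimal sketch, assuming the Deployment is named recommendation-v2 as in the tutorial; either edit the field declaratively or use the imperative shortcut shown in the comment:

# Imperative equivalent (assumed name):
#   kubectl scale deployment recommendation-v2 --replicas=2
# Declarative form: the only field that changes in the Deployment spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendation-v2
spec:
  replicas: 2   # Kubernetes keeps the actual pod count converged to this desired state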
But let's change that up a little bit; let's actually scale back down. I don't need two version 2s; I kind of like the 50/50 for a little while longer. You can see it's now terminating; it is now gone, no longer part of the load balancer. But here's where it gets really interesting: what if we deployed that change too rapidly to production? One of the missions we're on as software developers at this point is no longer deploying every three months or every six months, but deploying every week; as a matter of fact, we want to deploy every Thursday at 10:00 a.m. local business time, with users on the system. We still want that rapid deployment cycle; that's where we are in this new world of DevOps and CI/CD and everything else. But in this case I want to deploy without the users seeing it, and this is where Istio really starts to take over in this Kubernetes ecosystem: I can change the routing logic associated with those different pods. You can see I still have the two pods running, a version 1 and a version 2, but our users are only seeing version 1. So that's kind of cool: I can roll to production, but then slowly, incrementally roll out version 2. In this case I can say I now want 25% of the transactions to go to version 2, and let's see if it takes effect; you'll see a few version 2s showing up there, there they go. It's a random load balancer, so it's about 25%; if you count them all you should get about 25%. One thing that's really cool about this, and again this is an Istio feature, is that you can define what that increment is. In the case of old-school Kubernetes it's always round-robin, meaning if you've got three pods you get 33/33/34, and if you've got four pods you get 25/25/25/25. With Istio I can say I want 1% of my user transactions going to this new version, and if I don't want that, I can go back to version 1. So we can roll out a little canary release, see if our users want to interact with it, see if we get tweets that hate us on social media, see if our users complain, and if they do, roll it back very easily.

But let's show you something a little bit more interesting, and that is: what if I want to see this in Safari? Notice we're still on version 1 from most users' perspective. If I come to Firefox, it's still version 1 also, interacting with that endpoint. But if I go into Safari, that's this window here, you can see Safari is all version 2. So you can pick different HTTP headers and decide exactly what you want to route on, and this is a very powerful concept. The standard Bookinfo demo, which comes with Istio, actually uses just the login, are you logged in or not logged in, but you could do things like: are you a beta customer, running in Canada, with Safari on iOS, then you're on version 2. So you can be very precise and discreet about it, and I've actually talked to managers who are very excited about this kind of concept, because they're like: we're gonna roll out our canaries only to employees first, because every one of our employees uses the system as well, we're a software-as-a-service company now, and they're gonna make employees test the canary; then they roll it out to beta customers who have logged in and opted in for beta, they get the canary; and then it rolls out to everybody. It should actually work out very well, so that concept is very powerful.
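For reference, both of the tricks he just showed, the 25% canary and the Safari-only routing, are expressed as Istio routing rules. The sketch below uses the 2018-era networking.istio.io/v1alpha3 API; resource names, subset names, and the user-agent regex are assumptions rather than his actual files, and in the real tutorial the matched header may be a propagated baggage header rather than the raw user-agent.

# Weighted canary: send roughly 25% of traffic to v2.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
spec:
  host: recommendation
  subsets:
  - name: version-v1
    labels:
      version: v1
  - name: version-v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
      weight: 75
    - destination:
        host: recommendation
        subset: version-v2
      weight: 25
---
# Header-based routing: Safari-looking user agents go to v2, everyone else to v1.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - match:
    - headers:
        user-agent:
          regex: '.*Safari.*'    # assumed match criterion
    route:
    - destination:
        host: recommendation
        subset: version-v2
  - route:
    - destination:
        host: recommendation
        subset: version-v1

The DestinationRule subsets are what tie the version labels on the pods to routable targets, which is why the weights can be any split you like rather than the pod-count-proportional split plain Kubernetes gives you.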
And let's go here now; okay, if not that one, this one. We're gonna clean that canary out; so basically we've wiped out the canary concept and the Istio rules, and now we're back to the standard 50/50. This is cool so far; am I going too fast? Normally you're supposed to say yes to that question, but okay, I've got you.

All right, now let me show you this one; this is known as the dark launch. What I've done now is gone back to a state where the average user, most users, only sees version 1; you can see, based on my curl up top, it's all version 1. But look at the bottom down there: do you see version 2? Version 2 is also being called, because all transactions are being mirrored to version 2, yet the users are only getting responses from version 1. This is a very powerful concept if you're thinking about, you know, we're gonna deploy every Thursday at 10:00 a.m., we're gonna deploy every week, we're gonna deploy multiple times a day like the super unicorns from Silicon Valley. What it means is I can roll my new application change through my CI/CD deployment pipeline and literally land it in production, because there's no environment like production anywhere else in the world; we know that, right? It's certainly not like my laptop; production is very unique. And it means I can run that code and monitor it: looking for, let's say, exceptions showing up, stack traces in the logs, looking to see if it's blowing out its memory, which is fairly common with a Java application, looking to see if it's running out of CPU, and then I can decide: let users see it, or not. So this concept of the dark launch is very, very powerful. It was popularized by the folks at Facebook; that's kind of when we noticed this concept coming into vogue, because they launched Facebook Messenger, and on the day they launched it, it went to half a billion users. As a software professional, how do you go from zero to half a billion users in a single day? That defies the laws of physics in some cases, and it's because they did a dark launch of it long before you ever saw it; marketing happened on a certain day, but the software had been in production, being tested, long before the day you saw it inside your user interface. So that concept is also very powerful, and I really like it.
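The dark launch he's describing maps to Istio's traffic mirroring: the live route still points at v1, and a copy of each request is sent to v2, whose responses are discarded. A minimal sketch in the same v1alpha3 API, names assumed from the tutorial:

# Dark launch / mirroring: users get v1 responses, v2 silently receives a copy.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
    mirror:
      host: recommendation
      subset: version-v2   # mirrored traffic; responses from v2 are dropped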
But let's show you something else. We're gonna get rid of this dark launch concept and clean off that mirror; it's called a mirror. And let's go to a 50/50 load balancer again, but this is 50/50 the Istio way, so it's a random 50/50 as opposed to the straight 50/50 you saw with regular Kubernetes. You can see it's changed now, and it's actually hitting v2 pretty heavily there. But let me show you one more thing here, and then, like I said, I have two other even more complicated demos than this one. We're gonna go back to two v2s; see how there are two v2s, and this becomes important: we should get doubled up, we should get two-thirds of the traffic going to v2 at this point, the second v2 in this case. You can see there's the first v2 going by, and this is the new version 2 going by, there we go. So let's pick on one of these guys; let's pick on the older one, the one with the b9j suffix. Let's go in here and mess with it a little bit: Ctrl-C, kubectl exec, you can actually shell into this guy, and that's what we're doing, we're basically shelling in to interact with it a little bit. I'm now inside that machine, that pod, that container, there it is. And we have a little special flag: if you look at the code, there's a little flag that says misbehave, and after that it always returns 503s; my programmer did really bad things, they made a code change and it just didn't go very well. So let's go back to this: you can see we have the version 2 that's misbehaving right here, it's throwing 503s into the mix, and we don't want that for our users; that's not good for our company or our customers.

So we have this really neat little thing in Istio called pool ejection, and what the pool ejector does is look for misbehaving endpoints and throw them out of the load-balancing pool for a period of time, and you can choose what that time is. You can see most of the 503s are gone now, because the pool ejector basically said: just get it out of the system. But 503s can still happen, because that pod is still very much misbehaving, it's throwing 503s like crazy, so we might still see one; there's another 503. So we can apply another Istio rule, and that is a retry rule, and what this means is now we have the ultimate resiliency: we can apply circuit breakers and all sorts of exception handling, if you will, at the network level. In this case we have a misbehaving version 2 pod and a version 1 still running out there, but the misbehaving one is not only thrown out of the load-balancing pool; when it does show back up in the pool, if it coughs up an error, we retry against the other version 2. So we should never see a 503 anymore, even though we still have that bad behavior from that bad actor out there. I'm kind of curious to see what our graph looks like over here; let's turn on the traffic animation, and you notice, there we go, it's trying to show us what's going on inside the system. And actually, let's go over here real quick: recommendation, refresh; we have our monitoring tools, and let's go to v2; notice we're getting some 503s now, we're getting some errors in the system, so we can go figure out where the problem is and fix it. So that's a whirlwind tour of an intro to Kubernetes, clustering capability, failover capability, and the awesomeness of Istio. You guys still with me? Fantastic, because that was the easy stuff.
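The pool ejection and retry behavior he applied here correspond to outlier detection on a DestinationRule plus a retry policy on the VirtualService. A sketch in the v1alpha3 API of that era; the thresholds and timeouts are illustrative, and later Istio releases renamed some of these fields (for example, consecutiveErrors became consecutive5xxErrors):

# Pool ejection: kick a pod out of the pool after one consecutive error.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
spec:
  host: recommendation
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 1      # eject after a single 5xx
      interval: 5s              # how often hosts are scanned
      baseEjectionTime: 15s     # how long an ejected host stays out
      maxEjectionPercent: 100
  subsets:
  - name: version-v1
    labels:
      version: v1
  - name: version-v2
    labels:
      version: v2
---
# Retry rule: if a request still hits the bad pod, try again elsewhere.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
    retries:
      attempts: 3
      perTryTimeout: 2s

Together these give the "users never see a 503" effect he demonstrates: the sick pod is usually ejected, and on the rare request that still reaches it, the sidecar retries against a healthy replica.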
Okay, let's walk through some slides. I have two more really complicated demos to show you, because I want to show you what this really means, but let's tell our story here a little bit more. I want you to go back to 1999. I know it's a long time ago for some of you; some of you were still living with your mom and dad, some of you might still be living with your mom and dad, but that's a different story. You might have been in school at the time; who knows what you were doing back in 1999. I was doing a lot of different things in 1999, because I actually started programming in 1986, so I've been around the block a little bit. But this was my favorite movie in 1999, so mentally get back into 1999 mode: that opening scene where Trinity kicks that cop across the room, that was amazing, right? That was my favorite movie of all time. Quite honestly, Star Wars with Jar Jar Binks made more money than The Matrix that year, but I could not put that on my slide; The Matrix is by far and away the most awesome movie. Our top singers in 1999, look at that: Cher, TLC; it was awesome back in 1999, and the ladies dominated the charts, you can see it right there, they kicked butt, they had the top five songs of that year. And the U.S. wins the FIFA World Cup; do you guys remember that? Wow, this was the most amazing year, I'm not kidding you; as an American this was an amazing year, we won the World Cup. Now I want all of you to check your bias for a second: we assumed something, didn't we? Let me tell you a little story about this, because at this period of my life, not only was I about to volunteer for the local Java user group, which was just a little bit after this time period, but I was a software developer in Java, teaching Java developers how to build web applications and servlets, back before there were even really EJBs; we were showing people how to do server-side Java, not just applets, and I ran training classes during the day. But my evening job was to run a soccer program for young girls, and I had over 700 girls in the program; I had to recruit all the coaches, and I coached hundreds and hundreds of girls myself. So in 1999 this was an incredibly important moment. I can tell you that my little girls loved this, and, I coached up to age 18, I would take any eleven of you here, the best of you in this room right now, and my girls would destroy you; they were really good. I want you to think about that for a moment, because the world has shifted; our world has shifted.

So think about 1999 for a Java developer. For the Java developer, Java was in this kind of tug-of-war; it was kind of interesting. I found this on InfoWorld as I was trying to look back in time: the Java market was heating up, as we read; in fact, at that point in time Java surpassed C++ as the language of choice. Now, you have to think about a survey for a second and decide where the data came from, but that's okay; we surpassed C++ in 1999. And I love this, it's in the same article, I didn't go looking for it, by the way: it said the most obvious recent example is Red Hat's Linux, which in a short time has grown from hobbyist operating system to a competitor that even Microsoft has taken seriously; Sun should learn from Red Hat. And if you're familiar with what's happened in the last couple of weeks, yes, Sun should have learned from Red Hat.

All right, so this is actually the really important part about what happened in that era specifically. If you remember writing Java code at that point in time, you had to buy your IDE; you had to spend, as a matter of fact, $25,000 for ten developers to get an IDE. It was an expensive thing to start your first block of Java code; it was thousands of dollars per developer to basically do Hello World on your desktop. That was the world we lived in back then, and a lot of people have forgotten that at this point, because we have so many great tools, like you see me using VS Code on this laptop; even IntelliJ has a community version, right, and we have Eclipse, we have NetBeans, we have so many options there.
But also look at this particular piece of advice: "to ensure your applications are created in a consistent fashion, are scalable, reliable, and compatible with other enterprise applications, we strongly advise you to look at J2EE." This was the world we were in back then: everything was completely proprietary except for Java. Java was the only thing we could build our applications in and have even a chance of moving them from one platform to another. I don't know if you worked on Pyramid machines like I did, or on Unisys machines, or HP 3000s, or an AS/400, previously a System/36 or /38, but you could not move that code from one machine to another; that was the world we lived in back then. And this was our top app server of the day; it won the award for 1999: BEA WebLogic. And the number one feature people loved was the clustering capabilities, because that was why we loved our app servers: that ability to run our application at scale, scale it up, scale it down, load balance, have failover, the same things I just showed you with Kubernetes.

Now, here's what it cost back in 1999 to do Hello World for a website in Java; these are actually fairly real numbers. You saw the $25,000, I actually picked that out of the news article, but if you read the fine print it was $25,000 for the licenses and then $25,000 more per year for the support subscription, so $50,000 for Symantec Café to get started. And you can see my WebLogic number there might be a little light, but I called some people who worked for BEA back in the day and they're like, yeah, $60,000 for an average little two-core app server, a small little server that you throw in your rack. You can see the Sun SPARC boxes were kind of expensive; I called a friend who worked for Sun in this era and said, can you go back through your old price lists, and they're like, yeah, here are the prices. And of course Oracle is the most expensive item, but the concept is: half a million dollars to get started. We no longer live in that world. We can start with everything on our laptop for free; all we have to have is the laptop, and the operating system is free now too, if that's where we want to go, and we can launch it into the cloud for a few cents per hour. A very different world than the one we lived in before.

Let's talk about APIs for a moment, and I want you to think about your application on the whiteboard for just one second. As developers, we look back over the code base we've been given and we think: what the hell was wrong with that programmer who had this code base before me, what was wrong with that architect who came up with this crazy idea? But back in the day, whether that was 2002 or 2005 or even 2013, that architect did design something pretty nice on the whiteboard. They actually had three tiers in their architecture, right: user interface, logic, and data. They knew what their data model should have looked like, because on the whiteboard it's all good; we had our three tiers, we had all our components nicely talking to each other, we knew what that architecture should look like. But this happens in real life, and it certainly happens over time; it happens when there are a lot of cooks in that kitchen working away on it and trying to change things over time. So this is the mess we've created for ourselves, and we refer to it as a monolithic application, and it's a bad thing; but it actually runs our business, it runs all the billions of dollars of transactions for our organization, so it's technically not that bad a thing.
But it is kind of a problem, and our stack might look something like this. Depending on what era you started that application in, we may have had JavaServer Faces and ICEfaces; we might have Angular now, or Ionic now. Many people still have a custom MVC; anybody still have a custom Model-View-Controller framework inside their environment? Only a few of you, fantastic. Struts was born many, many years ago, and Spring MVC has been out a long time too, when Keith Donald came up with it, I think in 2007, so you've had MVC options for quite some time. It's pretty common for us as developers to try to reinvent the world, right? We don't need a new dependency injection framework, we'll just build our own; we don't need a messaging broker, we'll build our own; we don't need an app server, we'll build our own. So here's the concept: you see them right here, Struts and Spring MVC, JSF, Wicket, maybe JAX-RS at that tier as well to communicate with a fat client like Angular or Vue. And you can see we have maybe our EJBs or our Spring beans, our CDI, but we might have Camel or Drools, or JPA and Hibernate in this tier; lots of different things we might have used. We might have had SSO across this, we might have had messaging, message broker technology, caching technology, maybe even stored procedures in our database. So we had this sort of stack inside our application, and we had these standardized APIs to work with: JDBC, JSTL, JAX-WS, and that was awesome. This gave us the framework to get started building our applications; it gave us the recipes, if you will, so we knew how to build the application on top of it.

And now we have Jakarta EE to replace J2EE and Java EE; Java EE continues on at the Eclipse Foundation as Jakarta EE, and they're going to continue defining more standardized APIs and more universal ways of building applications on top of this next-generation platform. You can see the progress of the current TCKs moving through the Eclipse Foundation; they're about 80% done getting those TCKs moved over. And the MicroProfile community, which was meeting last night at the BOF here in town, a birds-of-a-feather gathering, is also defining further APIs for cloud-native architecture, for taking full advantage of a Kubernetes and Istio based architecture that might be behind it. So you can build a lightweight enterprise Java application that is still standards-based and still run it anywhere you want. There's a huge community in the MicroProfile world: you can see Fujitsu and Tomitribe, Red Hat of course, and IBM as well; these are all partners within this organization, part of that ecosystem working to make MicroProfile a better set of APIs, and APIs we should be familiar with, because they do come from the original Java EE ecosystem. Now, there are all these other fat jar architectures too, not just MicroProfile as a fat jar architecture, where you basically bundle up the whole application, like you saw me do, into one single jar: Dropwizard really defined this idea early on, Vert.x came up second, right, and they kind of came into being at the same time; Spring Boot of course made it fairly popular; there's Thorntail, a MicroProfile implementation; and you might have seen a session here at Devoxx on Micronaut, as an example of a new player in the space.
And then there are the Java ecosystem APIs, not just the standardized APIs that everybody could build on top of. It wasn't just us as business developers adding our business logic on top; it was a whole ecosystem of open source frameworks that added new capability on top of those standardized APIs, and we took full advantage of these things. We leveraged Hibernate and Spring, we use Camel for integration, we use Kafka now for streaming-based systems, but we still might have used ActiveMQ for regular JMS technology, and we might use Infinispan or Redis for caching. All of these things are super popular and probably mixed into all your pom.xmls at this point in time.

Now, clusters; just briefly about clusters. I'm going to show you a demo of an advanced clustering idea, and by the way, these slides are available at the bit.ly "kube app server" link, which gets you access to all the other links. Let me check on our raffle over here; let's see if we've got anybody in the game yet. All right, fantastic, we'll come back to that in a second. But let's do this: what I'm gonna show you now is a complicated one, and since you guys have your phones out, I'm gonna need your help. I need you to go to the bit.ly "pop movie one" link; we're gonna try something with you. As I mentioned, the clustering is now part of the app platform, which is Kubernetes, right: failover, load balancing, things like that. But one of the things we got super excited about with old-school app servers was the ability to put stuff in a shopping cart, and if that node died, your shopping cart was still intact; we called that the session, and we used it extensively in servlets and JSPs and Spring MVC and everything else. This application does the same, and what I want to show you is that we can do a rolling update, if it all works well, against a live shopping cart. So you guys put stuff in your shopping cart; let's pick some stuff here. We just grabbed the data off the internet, by the way, that's where those movies came from, and you can see I have these items in my shopping cart right now; Crazy Rich Asians, by the way, is a great movie, you should go see it. So you've got that bit.ly "pop movie" link, and that way you can keep me honest here. Let me bring up this other one and refresh real quick; I had a couple of things here; there are the two things I clicked. Now let me find the right window. This little application is running, in this case, at Google, so I actually have not only Kubernetes, OpenShift, running back here, I have it running across the three public clouds as well; this one's running on Google. We'll come back to that demo in a second; I want to show you this one first, this is where this little application is running. Let's look at the code just briefly; bring up Visual Studio Code and make it a little bit bigger; yes, I know you have Java support, no worries there. So here's the thing with this one: it's using the liveness and readiness probes, and you can see right here the liveness probe, /health, the readiness probe, /health, with an initial delay of 60 seconds to give you some time to get your act together.
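The 60-second initial delay he's pointing at is just the standard probe setting; roughly, the relevant slice of such a Deployment looks like the sketch below. The Deployment name, image, and probe paths are assumptions, not his pop-movie project's actual values.

# Sketch of the cart service's probe configuration: a new pod only joins the
# cluster, and the old pod is only torn down, once readiness passes after the
# 60-second warm-up window.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cart
  template:
    metadata:
      labels:
        app: cart
    spec:
      containers:
      - name: cart
        image: example/cart:latest      # assumed image name
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /health               # assumed path
            port: 8080
          initialDelaySeconds: 60       # time to warm caches and replicate session state
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60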
What this means is we can pre-warm our caches; we can go into the actual Java logic and say, hey, is the cache manager intact, is it ready, have we replicated the user's shopping cart over, and it will only kill the old service once it knows the new service is live and ready. This is just a feature of Kubernetes, and we leverage it to move the user's session data over; all the in-memory state can get moved over. So let's just change something simple: where it says shopping cart, see, right here it says "my movie cart", but that's the previous version; let's make this the "DevOps cart". And I need to log in to the right cluster, because I'm on a different cluster right now, so let me copy this login command; we'll see how much I mess this up. Okay, there we go, oc project, we've switched to the right namespace; this is pop movies, so we deploy into the right namespace. Then I have a simple script; basically we use the fabric8 Maven plugin to deploy this application, so I'll kick off the canary there, and it's gonna do my deploy. That deployment process does a full Maven build at this point, and you'll see it pop up down here when it gets going a little bit further; you can watch it in the background and also watch it here in this user interface. You'll see that it's not only doing a Maven build, it's also doing a Docker build, deploying it directly from the change on my laptop into the production runtime environment. So, as I mentioned earlier, think in terms of APIs, clusters, and developer experience, developer capability; this is just another way to make it easier to deploy into Kubernetes rapidly. I still have the deployment.yaml, I can still have a Dockerfile, but I can remove those two things. You can see right now it's actually going through the process of doing that deployment, and you can see it starting up here; of course the network is rather slow, but you'll notice the Docker build happening right down here, and we can actually watch the logs, so it's doing the Docker build not locally anymore, like I showed you earlier, but actually in the cloud, and running it there. And the cool thing is I can then, just like I showed you with Istio earlier, roll this into production, in this case with no data loss. If you remember the 12-factor rules, right: stateless, stateless, stateless; 12-factor says there can be no data in your application, your application has to be completely stateless. In a cloud-native architecture, in this new era, you can actually keep data in your application as well. You can see it's chugging along there; again, because the network is kind of slow, it's taking a little time just to update my user interface, but it is going through the process, trying to update, almost, almost; and it's trying to check over here, this is just a network connectivity issue; yep, I think it got through the process. Let's see: I did my build, I did my deploy. Did anyone put stuff in their shopping cart? Cool. So there's my stuff; I'm just gonna hit refresh here to make sure; yep, that's still my shopping cart. All right, we're gonna roll out the new one. So we're rolling out the new one, and let me double-check this one over here; I'm running two versions now, and my shopping carts are still intact. And we're just waiting the 60 seconds; if you remember, we set this readiness probe with an initial delay of 60 seconds before it does the check.
That gives us plenty of time to warm up our caches, connect to our databases, send the Space Shuttle to the moon, whatever it is we want to do in our Java application, and then we'll have a whole application. So just a few more seconds here for this to come up, and then we'll see it join the cluster. You notice this light blue: that's because it hasn't yet passed its readiness probe; and there it goes, the first pod just did, so it can start tearing down the old one. I'm purposely not using a rolling update; I'm doing more of a canary deployment. Notice also, if you look closely at the bottom of your screen, you'll see the actual computer name your Java application thinks it's running on, right there, and let's see what this one says; notice they are on two different ones. Then we hit refresh here and refresh here, make sure my shopping cart is still intact, and then let's go ahead and kill this last one. In that case, because I killed one, you would have noticed that your server name changed for one of you; that one did, right there. Then let me go here and refresh; that's the DevOps cart now; and refresh; come on, refresh for me, sometimes the browser doesn't refresh; let's see, this will refresh, and you guys can refresh too, and you should see that you still have your shopping cart in play, though my Firefox is acting up; there we go, basically the network is not performing very well. But we just rolled a code change to production, just like we would in a super-fast, deploy-every-day kind of world, and all our data is still intact. Is that cool?

Okay, now we're gonna run out of time and I've got plenty more things to show you, but I want to hit you with a couple of things real quick; let me just show you the punchline of this one. We're dealing with a timeline that looks like this; this is actually not that new, we've been thinking about these problems for many, many years: how to break up big teams into small teams, big waterfall projects into small agile projects, and now we're breaking up big old applications into these small things called microservices. We've been dealing with this Netflix world for quite some time, and now I'm here to tell you we're dealing with a Kubernetes way of doing things. Kubernetes came out in 2014; we were part of the effort at Red Hat to bring that technology to market, we were in from the very get-go, we're the second-largest contributor to Kubernetes, and we call that thing OpenShift, that's our supported version of it. And to make the point: in 2015 we launched a thousand containers live on stage for an audience twice the size of this one, and we invited everyone in the audience to then claim their container, use the app server that we launched for them, in two and a half minutes; a thousand-plus app servers in two and a half minutes, which is kind of astronomical if you think about it. So that's the technology everybody has fallen in love with, Kubernetes, at this point; all the players that used to fight against it are now part of it, and this is the ecosystem we're living in for Kubernetes: it is vast, and it has changed dramatically.

Let me show you the Kubernetes cluster. The concept here is we have this series of nodes with our pods running on them, but we have this master controller here, so that developers, like you've seen me doing, can interact with it, but ops can also interact with it. Ops can set up things like quotas, they can set up things like resource constraints, they can determine how many cores and how much memory you get.
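The quotas and resource constraints he attributes to ops are ordinary namespace-scoped objects; a rough sketch of the kind of guardrails he means, with entirely illustrative values:

# Namespace-level cap on total resources and pod count.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
---
# Per-container defaults applied when a developer doesn't specify any.
apiVersion: v1
kind: LimitRange
metadata:
  name: per-container-defaults
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 250m
      memory: 256Mi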
And the developer can just deploy their app using the GUI or the command line, like you've seen me do. But this is an important point about dev and ops: we've separated dev and ops into two different silos inside the organization. By show of hands, how many people actually have a DevOps team inside their company right now? Yeah, a lot of you. Keep your hands up if there are developers on that DevOps team. Oh, there are still a few of you, fantastic. Most DevOps teams don't have any developers on them; they forgot the point, it's dev and ops, and you can't just throw it over the wall any longer. I know we think we're Lego Batman, right, and that's Harry Potter over there, but we can't do that any longer; we have to work together to deliver that software.

So I want to show you another demonstration that makes this point, and again, you guys are welcome to have all these slides, but we're gonna run out of time, so let me show you this other demo. This one's even more complicated than the one we just saw, because it involves all four clouds I'm now running. I basically have a Kubernetes environment on Amazon running here, that's my Amazon screen, but this is in fact the Kubernetes environment, the OpenShift environment, sitting on top of that. And again, it's a little bit slow to load, but this is my Azure, and again I have the Kubernetes environment, the OpenShift environment, there; and I have, sorry, Google as well: so Google, Azure, and Amazon. Now I need you to try a different URL for me, if you have your phone out: you want to go to the bit.ly "hybrid open one" link; that's gonna give you this user interface. Let's see if this works for me here: I'm gonna push in a request, and notice, good, it hit the Google cloud and it says "Aloha Burr". When you push in your request, you're gonna get an Aloha based on the response from the message endpoint; you're basically just publishing a message, and it's going to go across the network, find the most appropriate place to run that transaction, and give you a response. You can see a bunch of people joining me now, real fast; notice you're all on GCP, though, you're all on Google, right? Okay, so let's fix that real quick. Let's come over here and look at this guy and this guy; let's actually take the Google one offline. So I'm gonna come over here and go into the demo2 AMQ namespace, the right namespace. You will see, whenever the network decides to load that screen for me, come on, but you can kind of see we've got a bunch of things going on in there; oh wow, this is slow. And notice over here it's flashing up Google, Google, Google; Google's got all the workload, granted, and we're gonna change that now. Oh wow, okay, come on, browser, load that page. So by asking you guys to get on your phones, you just destroyed the network for me here. But here's the point: you can see everything updating in real time, the fact that we have a Google processor; wow, that's really getting slow to paint the screen. We have a Google processor; I wonder if I can connect to it from here. How about that: oc get pods, let's see if the command line will connect faster than the browser; no, it's not; kubectl get namespaces, can I get connected? Oh man, okay, well, this is where the network gets you; isn't that part of the problem.
But you can see we've actually got a bunch of messages in there forming, and you can also see the performance characteristics of the different clouds: you can see right here how many are being processed on Azure, how many on Google, and how many on "Burr", which is my local cloud. And I can't get the Google server to respond; okay, it's responding very slowly; there we go, I can get pods. Let me see if I can do this real quick: edit deployment; kubectl get deployments; let's figure out what our deployments are real quick. Here's all I want to do: I want to just kill that processor on Google. Man, one minute left and this is taking too long; come on, show me those deployments, I can't remember the deployment name; I'm still connected to the Google cluster, but can I connect from here, come on now. All right, oc project demo2-amq, let's make sure we're on the right one; come on; oh wow, user interface, come back; nope. All right, kubectl get deployments, let's see what we have here: AMQ Interconnect, and our worker is not even showing up there, that's interesting. Get pods; all right, there's our worker, worker, worker, worker. Let's see if I can just kill it for now; you guys keep pushing messages in, and let's just wipe it out and see what happens. Burr again, there we go; so if you notice, we failed over to Burr, but Google's now come back online again, because by default it's going to restart those components. Oh, there it is, the user interface is coming up; let's kill it; now you're gonna fail over to Burr. There we go, we're on Burr, but I can actually come to my local one now and kill that too, because basically what we're doing is routing around the entire internet, so your transactions keep flowing no matter where they're coming from. Let's actually kill Burr in this case, and now you'll fail over; okay, you'll now fail over to Amazon, so your transaction was executed on Amazon, or in this case that one on Azure, right, you see the Azure one going by there. And if I spin up a bunch of load, if the responsiveness of the network will work for me, and we are out of time, but let's go here, you can see what happens when things are going well. The whole concept is that we can now move a single transaction around the internet; we can burst from one cloud to the next to the next. My application code is identical across all those clouds, the user experience is identical across all those clouds, because I built my application the cloud-native way, the Kubernetes way, and I can simply deploy it. You can see we have transactions now running on Azure, we have transactions running on Amazon, you can see where they're originating from, and you can even watch, if the animation will work, the flow there; you can see the different directions things are going in, because all the messages you guys are pumping in here are actually coming through Google, which comes back to Burr, which goes out to Azure or Amazon to process. Well, again, we're basically out of time, but I wanted to show you those cool things. There are a lot of cool things to look at: Infinispan is how we did the shopping cart thing, and these are the technologies we used for the messaging you saw there; if you want Kafka on Kubernetes, you do that with this project, and if you want real-time ETL, this is the Debezium project, to deal with your monolithic database. Lots of great stuff here.
And that is really the end of our show. Do remember that you can get this slide deck at the bit.ly "kube app server" link, and now we're ready for our raffle. You guys ready? All right, let's see here; I'm going to go down this list, this one here; I'm gonna show you the number one finisher. We got a lot of people in play, and let's see who it is; there we go. Sebastian Laporte, you win our Chromebook today. Thank you so much, and thank you so much for your time. And I do see you guys tried to break my application; it looks like it worked out okay. Thank you for your time; if you have questions, I'll be around for a long time to come.
Info
Channel: Devoxx
Views: 6,177
Rating: 4.9615383 out of 5
Keywords: devoxx, devoxx2018
Id: T7swgJzx4a4
Length: 53min 12sec (3192 seconds)
Published: Thu Nov 15 2018