SpringOne Tour: Booternetes II - May 12, 2021

Captions
i'm josh long and i'm mark heckler you know mark i'm not getting any younger i've got gray hairs things hurt i wake up every morning like a baby giraffe how so growing to meet the brand new day or stretching or growing to rise above things or excited for new heights and horizons no wobbly disoriented blind and slow sounds like you might just need some stronger coffee because i'm getting older i just don't have the time for handcrafted artisanal paths to production i want to get there as quickly as possible and i find it irritating to be ever so close yet so far what do you think about using kubernetes have you ever seen that show naked and afraid where two people are dropped naked into the wild and forced to survive entirely off the land for 21 days with nothing but their wits and a machete or a fire starter yeah well kubernetes is all machetes all the time it's just sharp edges you don't even get a fire starter although there are plenty of fires look i love kubernetes but i'll be the first to say that it's not a particularly uh productive out of the gate story it's not a platform it's a foundational layer on top of which to build a platform i don't think anybody would argue that it is the easiest most optimized path to production kelsey certainly doesn't and to my mind platforms should be optimized for the 80 case so what is the typical use case well it's obvious that it help people find that perfect cup of java no but i take your point my 80 case may not be your 80 case so the ideal platform should be optimized for the 80 case but support the exceptions as necessary you can't have everything josh don't be greedy remember that time when you tried to run slack and emacs and do a scala build at the same time there are some very productive application platforms out there like azure spring cloud k native and cloud foundry that sit on top of kubernetes so we won't retread that path today instead we're going to look at how to write software that doesn't just run on top 
of kubernetes oblivious to its surroundings but that responds to and extends kubernetes we're going to look at ways that we can write more elegant cloud native software with spring and kubernetes sounds good to me that should be a lot of fun there are a ton of interesting things a spring application can do in kubernetes that it couldn't do nearly so easily otherwise the spring ecosystem is flush with kubernetes integrations too many even we'll need some help absolutely good thing that our friends have agreed to help us today we'll be joined by tiffany jernigan who eats production fires for breakfast mario gray the original full stack developer and fellow spring developer advocate chris sterling spring cloud services guru and lead of spring cloud for kubernetes dave turanski a major contributor to all things messaging and integration on the spring team and nate schutta an architect who truly truly wants to help and our special guest star joe beda a co-founder of the kubernetes project to quote the ninth doctor fantastic we're going to need a domain for our demo though something that speaks to us something that brings out the best of us that inspires our better angels something that is our constant companion even in the darkest of times our families no coffee oh right all right good point let's get started first a bit of housekeeping today we're going to look at a lot of different modules so to make it easier to follow we're going to use icons to indicate on which modules a given presenter will operate we're going to look at a shopping cart that manages the inventory and orders for coffee a kubernetes controller that interacts with the kubernetes api a point service that manages loyalty rewards associated with each account the spring cloud dataflow stream that calculates the rewards associated with a given account an api gateway that fronts requests to our downstream apis when we introduce the section we'll highlight the icons corresponding to each module this will act as
a sort of you are here marker on the map of the architecture that we're building hey there it's me again josh long i'm up at bat first i'm going to get things rolling with some functional if not naive code that manages data associated with a coffee shop and the loyalty points for that coffee shop is this code ready for production and kubernetes not hardly let's take a look hi spring fans you know i just want to get software into production if you know me then you know i love production i love seeing software delivered to the hands of the users that can use it and hopefully where it'll delight them but that is obviously not as easy as it sounds there are a lot of things you need to care for and that's okay because with a corresponding sort of increase in complexity comes an increase in opportunity and one place where i see that so often these days is kubernetes kubernetes is a foundational platform on top of which you can build really amazing things and so we're going to write a very simple application we're going to build a coffee shop sort of domain and we're going to deliver that into production there's obviously a lot of things to do a lot of things to learn but software is a team sport and i'm going to lean on my friends today to help us get from here from concept all the way to customer let's get to the code all right we have to build two different things first a cart service to represent the catalog of coffee and the ability to make orders for that coffee let's go to start dot spring dot io my second favorite place on the internet after production we'll build a new service with a group id of booternetes.cafe and the application id will be cart we're going to add some dependencies the reactive web support the postgres reactive sql database driver the lombok compile time annotation processor the spring boot actuator and the spring cloud commons so spring cloud we're also going to add r2dbc which is the reactive relational database
connectivity abstraction and then we'll hit generate our service is going to manage two types of entities the first of which is a coffee it's the kind of coffee that's on order so we'll give it a primary key for the id and some usual fields i want to be able to map the id to a primary key in the database so we just bring that up our spring data id annotation and of course we want some getters and some setters and so constructors and tostring and all that we use table cafe okay we're going to need to manage order data associated with orders for the coffee and same thing as before getters setters tostring and an annotation to map it to orders in our sql database we're going to need a repository to manage instances of this data and we're going to need a rest api to surface the coffee catalog itself we need a constructor so we'll synthesize that with the required args constructor now we want an endpoint to manage orders from the cart or for the cart so let's create a new rest controller this order service is going to publish an event to the spring cloud dataflow source which will then do some calculations and then eventually update the point service the loyalty point service the reward service so we need to inject the configuration value for that url so private final string cart points we're also going to need a few other things to do our work we're going to need a coffee repository sorry the order repository and a web client to make that call to the dataflow service now we need a constructor for this and the only tricky bit is that we need to make sure we inject the configuration here cart points let's say sink url and there we are there's our constructor now we can actually build up these endpoints that will support what we're trying to do given a post to the cart orders endpoint we'll be able to place an order i want to log out each order that's been saved and then return just an empty mono now while each order gets saved we then want to publish the event to spring
cloud dataflow so i'm going to create a method here that actually does the work of sending it and we're going to call that endpoint from this we're going to say flat map this send all right there's our order rest controller and our coffee controller the only thing i can imagine needing at this point is just a little bit to specify first of all the coffees that we're gonna load into the database we need to connect to the database we need to initialize some schema that kind of stuff so let's add a connection factory initializer to initialize the application now sadly this stuff isn't automatically handled as it is for jdbc out of the box you have to actually do this work yourself but it is coming in the new version of spring boot and in the meantime it's not that big a deal schema.sql data.sql now we don't actually have anything we want to put in data.sql but it's just nice to know that you could so what we need now is an initializer we're going to inject the connection factory here okay we've got our connection factory initializer we're also going to need to make an http call to the spring cloud dataflow endpoint so we're going to create a non-blocking web client okay we've got everything we need i think the only thing now is you know how do we actually connect what does the connection to our application look like our database look like so we're going to specify that as some properties the first property of course is the list of coffees so we're going to put in some sample coffees in here just to get things going we're going to need to specify that url for the sink we're also going to need to connect to a database i've just done something that is completely contrary to all good security practice i'm going to hard code a username and password obviously you should use a kubernetes secret but for our purposes this will get us across the finish line now this is all invariant this will be loaded no
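the schema.sql he mentions might look something like this minimal sketch, assuming postgres; the table and column names here are illustrative guesses based on the narration, not the repo's actual schema:

```sql
-- cafe: the coffee catalog
CREATE TABLE IF NOT EXISTS cafe (
    id   SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);

-- cafe_orders: orders placed against the catalog
CREATE TABLE IF NOT EXISTS cafe_orders (
    id        SERIAL PRIMARY KEY,
    coffee_id INT NOT NULL REFERENCES cafe (id),
    quantity  INT NOT NULL DEFAULT 1
);
```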
matter who is running it or where it's running but i also want to have some configuration that's active only when certain profiles are active or when a certain environment is detected and in spring boot we can now do that all in the same file the same property file it's always been possible to do that in different yaml files but now you can do the same property file so we just need to say dash and now we can specify the activation criteria and the default profile is a special sort of pseudo profile that is active when no other profile is active okay so it's great for local development that kind of thing we also want to activate certain config only when the application is on kubernetes here we're going to use a more parametrized more secure url we're going to use some environment variables that have been set up for us in that environment as well and of course you can refer to the github repository for the full pipeline the full kubernetes description and deployment we do need to specify for later we're going to need a bootstrap properties and here we're going to give our application a name of just cart and the other thing we want to specify is the schema in case that hasn't already been initialized and the data there's nothing in the data but we do need to initialize the schema all right there's our schema for the two tables cafe and cafe orders i won't type that all out live there's no need but you can see it's just a pretty trivial table okay we've got our schema we've got a schema for postgres i've got postgres running in a docker image in the background here obviously in production we'll deploy a real one let's go ahead and run this application okay the application is up go to the command line and i want to log into the sql database to make sure we have some data we need some data so we're going to insert it confirmed now we should be able to get that data from the api there it is good so that's working we've got the cart service now
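the multi-document property file he describes could be sketched like this; the cart.* property names and the environment variable names are assumptions for illustration, while the #--- document separator and spring.config.activate.on-cloud-platform are real spring boot 2.4+ features:

```properties
# defaults: used locally when no other activation criteria match
cart.coffees=sumatra,breakfast blend,blend of the day
cart.points-sink-url=http://localhost:9090/ok
spring.r2dbc.url=r2dbc:postgresql://localhost:5432/cafe
spring.r2dbc.username=user
spring.r2dbc.password=secret
#---
# activated only when the app detects it is running on kubernetes
spring.config.activate.on-cloud-platform=kubernetes
spring.r2dbc.url=r2dbc:postgresql://${DB_HOST}:${DB_PORT}/${DB_NAME}
spring.r2dbc.username=${DB_USER}
spring.r2dbc.password=${DB_PASSWORD}
```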
let's turn our eye to the point service the membership reward point service so we'll return again to start dot spring dot io here we're going to create a new service with the group id of booternetes and the artifact id of points and we're just going to bring in lombok and the reactive web support our point service is going to manage the points associated with each user after the calculation has determined how many points they should have for each order the point service for our simple demo will just be an in-memory map of data so we'll create a service let's see we want to create the map of data for a given username and some amount of points so we'll say return map.of username points and we want to support retrieving that data from this in-memory map given just a username so we'll say and what that'll do is it'll get the data from the map returning the current username and the number of points that there are in the map or zero if there are none right and then we're going to use that to create an endpoint to add points so the existing points are here and we want to put that into the map updating the value using the username as the key and then we want to return the points all right there's our point service it seems fairly straightforward fairly trivial now let's create an api that people can use to manipulate that data we're going to read the data that will come from going to the points using an endpoint so we'll return a response entity and here to read the data we'll say var points equals point service dot points for and we'll get the username that'll give us a map maybe it's better to be explicit here since it's a string we type the map and we're going to say if the points are not empty i'm going to return a response of ok and if it is empty then we're going to return a not found okay so that's the
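the in-memory point service just described can be sketched in plain java like this; the class and method names are illustrative, not the actual demo code, and the real demo wraps these results in reactive types:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the points service: totals keyed by username, with adds
// merged into whatever total already exists (zero if none).
public class PointLedger {

    private final Map<String, Integer> points = new ConcurrentHashMap<>();

    // add points for a user, returning the new running total
    public int addPoints(String username, int amount) {
        return points.merge(username, amount, Integer::sum);
    }

    // read the current total, or zero if the user has never earned points
    public int pointsFor(String username) {
        return points.getOrDefault(username, 0);
    }

    public static void main(String[] args) {
        PointLedger ledger = new PointLedger();
        System.out.println(ledger.pointsFor("josh")); // no points yet
        ledger.addPoints("josh", 100);
        ledger.addPoints("josh", 100);
        System.out.println(ledger.pointsFor("josh")); // total after two orders
    }
}
```

an http read of a user with no points would map to the not found branch described above, and a non-empty total to the ok branch.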
reads let's look at the writes and the write will be a payload that comes as a post a json post and that's it all we need to do now is to specify on which host and port we want to run this okay let's test it out no data okay let's now make a post to the order service now that we've updated it let's go ahead and read the data and there we go it reflects that we've just made that write we've got 200 records all right so we have a cart service we have the point service obviously these are intended for production kubernetes we should definitely take advantage of the spring boot build pack integration so you can say maven spring hyphen boot colon build hyphen image and you'll get a docker image that you can then docker tag and docker push and then reference from your kubernetes deployment yaml we obviously have a lot to do to make sure our code is more robust we should certainly externalize the coffees themselves not to mention we should initialize the database to read that coffee data and there's a missing piece between the cart service and the point service where we'll need some configuration to happen and so we're going to take care of that as well that's going to be a spring cloud dataflow pipeline and of course these apis are insecure and they're just out there in the open web we would want to front these with an api gateway and so we'll look at that and of course there's opportunities since we're running on kubernetes to extend the platform to accommodate some of our more particular use cases so we're going to look at all these in the next few sections this is all for me thanks so much for watching [Music] [Music] well a bunch of us got together and had nothing better to do uh and so we said hey let's go through no the reality is is that inside of google we had this idea you know of how systems could run based on some of google's internal
systems um and uh and like i was working on google compute engine at the time and you know i presented to one of our leaders and like you know launch a vm ssh in see a prompt and his his answer at that point was like okay now what do i do because inside of google we had this whole system that got you like not just to a prompt but actually got your application deployed and running and that was our standard and so kubernetes just like so much of what we're doing across the industry is really about what now okay i got a machine that's great what now do i do with it and so that that sort of moment crystallized for me this need that getting people a vm wasn't enough we have to make it usable we have to make it so that they can actually apply it to the problem that they have get their application up and running and really be successful in terms of using the cloud not just having the cloud cube control is what i say but i am like you know my last name is beta but everybody says beta or beta and so like you know i'm not picky about this i know what you're talking about i'm not going to sweat it so i i'll accept cube cuddle cube ecto cube ctl i'm cool with any of them our cart and our point services work but they're hardly reliable if anything should go wrong anything at all they'd collapse like a house of cards distributed systems become even more volatile the larger they grow and the faster they evolve in this segment mark heckler will show us some ways to reduce those risks hi it's mark again now let's take what joshua lovingly gave us and let's add some resilience capabilities to it some things that allow our applications to maybe behave a little bit better and maybe even behave better when someone's not listening on the other end so we want to have the capability for a calling application a knocking application to either time out to give up and go away when no one comes to the door or to maybe have a certain number of times that they'll try to knock before they give up 
and go away or to add some capabilities to uh to handle when inevitably no one is home so with that let's go to the code we're going to at this point examine what happens when things don't go as we would hope because in many cases of course they do josh and i've had some long conversations about this we're using spring cloud dataflow obviously throughout and spring cloud dataflow sometimes we think the df doesn't stand for dataflow it stands for doesn't fail and of course that makes things terribly inconvenient when you're trying to show what happens when things do go off the tracks so what we wanted to do is maybe take things in a little different direction and show what happens in the ways that we can perhaps accommodate and embrace failures so that we can show some resiliency factors so what we wanted to do is show a fail service that we've created here to indicate and to represent what can happen when things aren't going perfectly according to plan so we have uh three endpoints here an okay endpoint as you might imagine which means things are okay right a timeout and a retry end point that shows kind of what happens when things aren't perfectly okay and we're going to go through those examples here momentarily now i've got this service running or i will have this service running on port 9090 so it will be listening for requests coming in and i'm actually just going to scoot that off to the side here and then what i want to do is is change where we're accessing here for our cart cart points sync your url say that three times fast so i'm going to point to localhost our service our fail service 9090 and we're going to just go to the okay endpoint and what i'd like to do first is just test to make sure everything is fine everything is indeed okay so i will run those we have both of them running and i'm going to go ahead and push an order or actually a representation of of the the points that we're getting so we have a payload that came through and we're getting 
back a complete signal so everything is fine everything is okay so let's see if we can fix that at this point so what could happen is that we we might have the case where we would time out coming back so what i want to do and i'm just going to expand this again so we can see this a little bit better so if we have the case where we have a timeout what we want to do is set a timeout here let's say that we're going to allow 10 seconds for a timeout before we just give up on the the entire interaction and and just pack it in for the day right and i'm going to change this to our timeout endpoint and our fail service and we'll see what happens at this point now i do want to actually show one thing because things can get a little bit messy here so i'm going to go ahead and show this uh perhaps the messy version first so i'm going to go ahead and restart this well actually the messy version second the absolutely clean everything is happy version first so we have a 10 second timeout if we go over and look at our timeout endpoint which we're uh well we will get as soon as i send this we can see that we have an eight second delay so we've got our timeout payload we've we've received this payload and we're going to wait eight seconds before we before we send something back but we have a 10 second timeout so all is well right and and indeed all is well we get back a complete completion indicator so that's great but what happens if we have let's say a five second timeout so let's go ahead and restart this now and of course this is where things can potentially get messy and this gives perhaps more information we necessarily need we may want to handle things a little more elegantly but this gives you kind of that raw exposed look inside at what's going on now we have a and we've received our payload here on their fail service coming back we get this rather large trace and if we go up here we can see that we've cancelled uh here so we've got our indication that that we just gave up 
right we waited five seconds didn't get what we needed and we just gave up we cancelled the request and that's okay you know but we can actually go ahead and tidy this up just a bit if we insert an onerrorresume here what i'd like to do is take that error and then we'll just do some things with it let's take a look at it so error and we'll just take that and actually once again i'm going to expand that and we'll do this and get our localized message and then we'll just return a mono empty so that you know that should make things just a bit neater for us in terms of our logging and we'll go ahead and restart that and we'll send our latest update here oh well of course we should probably wait for that to happen wait for everything to come up and then there it is we're waiting five seconds it comes back and this is a little bit cleaner so it just tells us hey we didn't observe any item or terminal signal within that five seconds that we specified so yeah we're just going to go ahead and continue and do kind of a graceful degradation if you will so now that's what happens if we need to do a timeout now what happens if and there are some cases that's very valid right in cases where we know that if something isn't there when we expect it to be there it's likely not to be there even if we wait a bit longer but there are going to be times where it makes a lot of sense to have something like a retry so we're going to maybe examine what happens when we retry so we'll retry when and of course we need to indicate what happens if we don't just want to send a bunch of requests back to back to back because if something is not available at this point in time if we wait three milliseconds it's probably not going to be available then so we need to give it a little bit of time for things to heal or to come back or to become available so we're going to back off i'm going to indicate a back off in
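the timeout-then-degrade-gracefully pattern just shown with reactor's timeout and onErrorResume can be sketched in plain java with CompletableFuture; the names slowCall and callWithTimeout are illustrative assumptions, not the demo's code:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Sketch of a timeout with a graceful fallback instead of a raw stack trace.
public class TimeoutDemo {

    // stands in for the fail service's /timeout endpoint, delaying its reply
    static String slowCall(long delayMillis) {
        try {
            Thread.sleep(delayMillis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "payload";
    }

    // give up after timeoutMillis and degrade to a quiet fallback value
    public static String callWithTimeout(long delayMillis, long timeoutMillis) {
        return CompletableFuture
                .supplyAsync(() -> slowCall(delayMillis))
                .orTimeout(timeoutMillis, TimeUnit.MILLISECONDS)
                .exceptionally(e -> "gave up")  // the onErrorResume analogue
                .join();
    }

    public static void main(String[] args) {
        System.out.println(callWithTimeout(10, 500)); // fast enough: payload arrives
        System.out.println(callWithTimeout(500, 10)); // too slow: fallback instead
    }
}
```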
this case i want to give five attempts right and i'm just going to back off a little bit each time a little bit more each time so we're going to increase our back off here by one second per shot and there we go and then i do also need to change this to instead of going to the timeout endpoint because i want to demonstrate retries at this point we'll go to the retry end point for our fail service and we'll restart that and then once it comes back up and i will give it plenty of time to come back up this time then we'll see what happens when we when we make a request and i should actually show really quickly what happens for our retry endpoint we will accept requests and then of course the first few times in this case if the counter is under five times so the first four times we're just going to send back a bad request response and then after that any subsequent requests we get will return a success so there we go so let's go ahead and uh pull this up and submit this so we see that we've got a request we failed once we failed twice we failed three times and we can see each time that we're getting a little bit more of a delay to give things time to recover and to to do things elegantly gracefully and of course after that fourth time we we make that fifth request and then it's a successful retry so that works out very nicely and we can see here that we have errors until we get a completion signal so that's great that works really really well now just to switch gears here for a moment what i want to do is show another way of operating because there are times that you want to to limit what comes out of requesting applications and not just accommodate on the back end what to do when things go wrong or on the front end in terms of okay retry this many times or just give up after a certain amount of time there are going to be times where you'll want to to limit the amount of requests the number of requests that are being issued from a consumer from a an upstream service so 
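the retry-with-growing-backoff behavior just demonstrated with reactor's retryWhen can be sketched in plain java; callWithRetry and the simulated flaky endpoint are illustrative, not the demo's actual code:

```java
import java.time.Duration;
import java.util.function.Supplier;

// Sketch of retry with linear backoff: up to maxAttempts tries, waiting one
// extra "step" before each new attempt (step, 2*step, 3*step, ...).
public class RetryDemo {

    public static <T> T callWithRetry(Supplier<T> call, int maxAttempts, Duration step) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    try {
                        // back off a little more each time before retrying
                        Thread.sleep(step.toMillis() * attempt);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw new IllegalStateException("interrupted during backoff", ie);
                    }
                }
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) {
        int[] counter = {0};
        // like the demo's /retry endpoint: fail the first four calls, then succeed
        String result = callWithRetry(() -> {
            if (++counter[0] < 5) throw new IllegalStateException("bad request");
            return "success";
        }, 5, Duration.ofMillis(1));
        System.out.println(result + " after " + counter[0] + " attempts");
    }
}
```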
what i want to do at this point is just to configure a rate limiter so we'll do that rate limiter and we'll call this cleverly enough rate limiter and i'm just going to define a rate limiter with a few different parameters i'm going to set this up we'll just call this our data flow rate limiter and then i'm going to use rate limiter config to establish a custom rate limiter config and we will rate limit if i can type and we're going to limit the number of requests going out to 5 per period and to define our period we'll define our limit refresh period and this is the amount of time that we would specify to fence off that number of requests so in this case for each second we want no more than five requests to be allowed and then we want to define what happens in terms of a timeout duration and clearly i have gotten out of my there we go there we go so our timeout duration and we don't want to have things kind of hung up forever so i'm just going to establish this for let's say 25 milliseconds and then let's build this and clean that up so now we have a rate limiter that we can press into service in order to use that rate limiter we actually need to send multiple requests so i'm going to just well i'll just edit this that's fine so i'm going to save all and instead of a single order what i'm going to do is establish a flux dot range and we'll just say 0 to 10 right and i'll map each of those and we'll take the value that comes out of each of those and we'll just map that to an order perfect and that is pretty close what i do want to do though is to set up an on error resume so that i can take that and when that error comes back so when we exceed our rate limit we want to do something with that so i'm just going to once again set this up and we'll just print this out so we can see kind of what's going on we'll log this so get localized message and then we'll return once again oh come on intellij return mono dot empty that's kind
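the rate limiter configuration just described (five permits per one-second refresh period) can be sketched in plain java as a simple permit counter; this is a toy stand-in for resilience4j's RateLimiter, and the class name is an illustrative assumption:

```java
// Toy sketch of a fixed-window rate limiter: limitForPeriod permits are
// available per refresh period; calls beyond that are rejected until the
// window refreshes, like resilience4j's RequestNotPermitted behavior.
public class SimpleRateLimiter {

    private final int limitForPeriod;
    private final long refreshPeriodNanos;
    private int permits;
    private long periodStart;

    public SimpleRateLimiter(int limitForPeriod, long refreshPeriodNanos) {
        this.limitForPeriod = limitForPeriod;
        this.refreshPeriodNanos = refreshPeriodNanos;
        this.permits = limitForPeriod;
        this.periodStart = System.nanoTime();
    }

    public synchronized boolean acquirePermission() {
        long now = System.nanoTime();
        if (now - periodStart >= refreshPeriodNanos) {
            // a new period has started: refill the permits
            periodStart = now;
            permits = limitForPeriod;
        }
        if (permits > 0) {
            permits--;
            return true;
        }
        return false; // over the limit for this period
    }

    public static void main(String[] args) {
        // five permits per second, ten back-to-back requests, like the demo
        SimpleRateLimiter limiter =
                new SimpleRateLimiter(5, java.time.Duration.ofSeconds(1).toNanos());
        int allowed = 0, rejected = 0;
        for (int i = 0; i < 10; i++) {
            if (limiter.acquirePermission()) allowed++; else rejected++;
        }
        System.out.println(allowed + " allowed, " + rejected + " rejected");
    }
}
```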
of a nice way to wrap things up yeah so that uh that looks pretty clean and at this point what i want to do is just go down here and um we'll just clean this up and let's do a uh transform deferred and we'll take our rate limiter operator right and we'll set this up to use our rate limiter that we've defined now the other thing that i need to do or can do i'm just going to go ahead and go to the okay endpoint because yeah you know that's uh we don't want to mix things up a bit i think it's clear to see what happens when you don't mix concepts so you can see very clearly what's happening when it's happening and i'm going to go ahead and similarly go ahead and clear that up so that now oh where did i miss here oh yes well that uh that's helpful so go ahead and restart that and once everything is running we'll go ahead and push that update yeah so we see that we got through exactly as we expected we limited the calls to five five got through and then we see that our rate limiter doesn't permit further calls and that's exactly again what you want to see when you limit the requests going out from a particular upstream service so with that that's pretty much all for me at this point i will go ahead and hand this off to mario thanks so much and here we go [Music] do [Music] well the way that pod death works in kubernetes is that once a pod is scheduled on a node it lives and dies with that node and so if that node dies the pod dies with it and what happens is we have higher level concepts than pod these things called replication sets replica sets that um that create a new version of that pot so the answer is yes if your pod dies in real life you die but you immediately get cloned with all of your memories so you don't even notice right i think so that's what's really happens when you when your pod dies the code is a far sight more robust but we still don't have real coffee in the database the coffee catalog should of course originated its coffee from some external 
configuration in this segment mario gray will look at ways to do just that with kubernetes configmaps and spring cloud kubernetes hi mario here i'm a developer advocate for spring at tanzu nothing seems more boring to me than a coffee shop without coffee what kind of fly-by-night shop has no coffee i need caffeine here in this section i will show you how to source coffee beans not from an organic farm but from configuration i will demonstrate how to use spring cloud kubernetes config to read load and reload application properties from config maps then we can create coffee beans and then we can code some more let's go application properties so we have application properties that we can actually incorporate into our application runtime state and in that case we have coffee sumatra we have breakfast blend we have blend of the day we're going to read this as an environment variable and then we're going to push that into our database so let's go ahead and add it okay we're going to add a new class this class is cafe initializer and it is a component and it is also going to use required args constructor and a log4j logger we're going to need an environment then we're going to need a repository we have a coffee repository then we're going to write an event listener we'll listen to application ready events and it will execute method run me and run me does something like coffees equals env dot get property that property is called cart dot coffees and we will take that and say val coffee stream and we can create a flux of the coffee names we'll split and then we'll filter it we'll say if it's not empty we want to keep it and then we can turn that into an actual coffee bean okay so we have that now we want to do something to our database let's initialize our database by first deleting everything and then adding the coffees let's then add all the coffees so we can
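the split-filter-trim handling of the cart.coffees property just described can be sketched in plain java; the class and method names are illustrative, and the real demo does this over a reactive flux:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch of turning the comma-separated cart.coffees property value
// into a clean list of coffee names.
public class CoffeeListParser {

    public static List<String> parse(String property) {
        return Arrays.stream(property.split(","))
                .map(String::trim)          // drop stray whitespace around each name
                .filter(s -> !s.isEmpty())  // keep only non-empty entries
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(parse("sumatra, breakfast blend , blend of the day,"));
    }
}
```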
call delete all then many and save all what we're going to do is save that coffee stream then we can monitor the output using subscribe okay so that's going to get us a nice state a nice application that knows what coffees are in application properties so that we can serve coffee from our command line or curl or a web service endpoint so we have the first few adding coffee which is sumatra breakfast blend and blend of the day what i would like to do is say dot trim there we are so i like to do that okay so let's check that out from curl localhost 8080 cart coffees we have application.properties now but we wanted to load it from the cloud and the best way to do that is to add a resource to our dependency tree so our application properties runs fine but if we want to pull new properties we always have to restart the application and that's probably not the best thing to do so let's change that a little bit so that we're not only changing the application state but our application is also made aware of property changes so let's go to the pom.xml and do a dependency addition and we're going to add org springframework cloud so we're going to add not that of course it thinks i want that but okay we'll add the kubernetes fabric8 config dependency and that's going to make spring able to select a config map and pull its contents into our property sources now i can read that as an environment property okay so after adding it i am going to need something like a bootstrap property let's go to bootstrap properties and say spring application name we'll call it cart then we can say spring cloud kubernetes config namespace bk i want to configure reload then i want to configure the mode so that's the event and this is the configuration source so whatever it finds in this config map will be used as a configuration property all right so the next
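the bootstrap settings and the config map just described might look roughly like the following; the property keys follow spring cloud kubernetes conventions and the config map name and coffee list are illustrative, this is a sketch rather than the demo's exact files:

```properties
# bootstrap.properties - sketch of the settings narrated above
spring.application.name=cart
spring.cloud.kubernetes.config.namespace=bk
spring.cloud.kubernetes.reload.enabled=true
spring.cloud.kubernetes.reload.mode=event
```

```yaml
# sketch of a config map the application could read; only the
# cart.coffees key matters to the cafe initializer
apiVersion: v1
kind: ConfigMap
metadata:
  name: cart
  namespace: bk
data:
  application.properties: |
    cart.coffees=the shakes cloudy blend, sumatra, breakfast blend, blend of the day
```

applying an updated config map with kubectl then triggers the reload described in the narration without restarting the application.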
thing we want to do is we know that we have this pom file here that pom file has our kubernetes fabric8 config dependency so if we load this right now if we hit play we should see the config map so we have the shakes cloudy blend sumatra breakfast blend and blend of the day if i went to the terminal and said curl localhost i'll have the shakes cloudy blend sumatra and so on and so forth let's add one more we have one more coffee here this is going to be called the procraftinator let me apply it and then let's check the endpoint the procraftinator okay great so we know now that application state is maintained as a steady state but it is also able to take on new properties as they're fed into config maps so now that we have config maps completely taken care of what's next [Music] so no kubernetes is generally not the right abstraction for developers but kubernetes was built to be built upon it's much easier to build that right abstraction for a developer based on the primitives that are provided by kubernetes and a key insight here is that there is no one system that's going to solve everything for all developers our world is specialized to the point where there is no one-size-fits-all solution so the goal with kubernetes was to provide enough so that you could provide the rest of that experience in a way that really fits the needs of the developers that you're targeting and if that higher level abstraction doesn't work in some situations you have a much more even slope off the beaten path versus having a cliff into the ocean which we see with a lot of other systems that are really aimed at developers so the idea is that kubernetes provides a great baseline but it's not going to get you there all by itself we've got a cart service and a point service there are some calculations to be performed once an order is placed before updating the point service in this segment david
turanski will look at stream processing with spring cloud data flow on kubernetes hello my name is david turanski i'm a spring developer currently working on spring cloud data flow and its related products i also spend time with our customers as a spring advisory architect to help them get the most out of spring i'm happy to be here today to talk about spring cloud data flow and to show a brief demo and boy do i love coffee so let's get started cloud data flow is a runtime environment that provides tools and services for a rich development experience when you're composing complex topologies for streaming and batch data pipelines these are topologies of distributed spring boot applications that use spring cloud stream or spring cloud task and spring batch or a mixture of both data flow allows you to create these topologies with a graphical editor among various other ways and manages the deployment and undeployment of these distributed applications as a unit which is very convenient if you're doing a lot of this kind of thing so the best way to get started with spring cloud data flow is to go to dataflow.spring.io and you can find anything related to data flow here it's a great way to get started for example we can look at how to install on a kubernetes cluster using a helm chart which is what we've done here on our cluster we've already installed data flow using the helm chart that is published on bitnami and you could follow these instructions for the open source version and do what we've done so here we see data flow running on our cluster it consists of a few components the data flow server itself which is the main point of contact for data flow users we also have a skipper server which data flow uses to delegate certain tasks related to stream deployment but which is not generally used by end users directly we've also deployed the rabbitmq message broker which will be used by our boot applications to communicate when
they're running in the stream and we also have a sql database mariadb in this case which the data flow components use to keep track of the state of deployments stream definitions task definitions and batch and task executions and things like that so we also have the cart service and the point service deployed and to complete the application we need a bit that's going to calculate the points so if we post an order to the cart service we want to get that order's details and compute the rewards points and then post that to the point service so to do that we're going to build a stream in spring cloud data flow so let's take a look at the data flow dashboard the applications view is what you land on and we can see we've already pre-registered all of the out of the box applications that are provided by the data flow team that can be used to perform many tasks when you're building stream pipelines so we have a number of sinks sources and processors which are components used to compose streaming data pipelines and there's a lot of integration with common open source products we see redis rsocket s3 for sources and sinks we can read from s3 or write to s3 we can use sftp rabbit we have an http request processor which is a processor that takes its input calls out to an external web service and forwards the result on to the next component so lots of things like that and we also have registered our custom applications in this case our points calculator sink so we'll take a look at that points calculator sink in a second we can create the stream by composing these things using a graphical editor with drag and drop so we're going to use the http source here to expose an http endpoint to the cart service and we're going to use the points calculator sink and before we complete this let's take a look at the source code of our sink application we started by going to start.spring.io and creating a points calculator sink artifact using java and
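the stream being composed here, an http source feeding the custom points calculator sink, can also be expressed in data flow's pipe-style stream dsl rather than the graphical editor; the application names mirror the demo, the exact registered names may differ:

```
http | points-calculator
```

deploying a stream defined this way produces the same pair of pods the dashboard deployment does, one for the source and one for the sink, connected through the rabbitmq binder.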
spring boot and maven and the dependencies here are spring cloud stream the rabbitmq binder and the spring boot actuator the spring boot actuator is important when you're deploying to kubernetes because it supports the liveness and readiness probes that we're going to need to determine that our application is ready and has been deployed so we go ahead and create that project and then we can write our custom business logic and here we have a spring boot application and we see that we're importing an http request function configuration this is something that comes from the out of the box stream applications that we looked at i mentioned the http request processor that processor is backed by a function and we can also reuse these components at the function level so every one of these stream applications that we saw listed in data flow also has a function that does the implementation itself and these functions can be used and composed when you're building custom applications so we can use the function directly as we've done here we have registered two functions actually one is a function that takes a purchase item and returns a points object and then we have a consumer of the points and we're going to compose these functions so that the output of calculate points will be used as the input for post points and we're going to compose them declaratively we'll see how to do that in a second but let's just look at the code here so we get some json in that we're representing as a purchase type it's got a username and an amount and we're basically just going to create a points object that uses that username and then take the amount and multiply it by a hundred in this case to calculate the points so no big deal there when we post the points we're going to use the http request function as a dependency and that comes from the configuration so we know we have that in our application context and the http request function is reactive
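the calculate points function just described, map a purchase to a points object by multiplying the amount by a hundred, can be sketched as a plain java.util.function.Function; the class and field names here are assumptions based on the narration, not the demo's exact source:

```java
import java.util.function.Function;

// Sketch of the demo's first function: a purchase comes in as json with a
// username and an amount, and we produce a points object worth amount x 100.
// The second function in the demo, postPoints, is a consumer composed after this.
public class PointsFunctions {
    record Purchase(String username, int amount) {}
    record Points(String username, int points) {}

    static final Function<Purchase, Points> calculatePoints =
            purchase -> new Points(purchase.username(), purchase.amount() * 100);

    public static void main(String[] args) {
        Points p = calculatePoints.apply(new Purchase("josh", 4));
        System.out.println(p.username() + " earned " + p.points());
    }
}
```

four sumatras at a hundred points each gives the 400 points seen later in the demo.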
and it requires a flux of messages and in this case the message payload is going to be of type points so we have a lambda here that's just going to apply the function directly and we're going to block on that sorry josh we should have been end-to-end reactive here i know you wrote the book on that but just for simplicity we'll do it synchronously in this case so to compose a function declaratively we use a spring cloud function definition that has a sort of mini dsl it says take calculate points and then take the output of that and use it as the input to post points that composition of a function plus a consumer acts as a consumer and spring cloud stream is going to use that function definition to bind to a consumer to treat it as a sink and so we've already built the spring boot application as a docker image using the build-image goal of the spring boot maven plugin and so we've got a docker image and we've registered that in data flow as a custom application here the http request function requires an endpoint which is given by this property http request dot url expression and we could have put this in our application definition as well but since we've already deployed the docker image and we don't want to rebuild it we can just set it here in the stream definition using the stream dsl which we see here and so the first part of the url is the point service ip address and the endpoint is points and then it takes a username which is going to be dynamic in this case so we get that from the message payload that it accepts and we're going to take the username field from the message payload so that's pretty much all there is to it we create the stream and it's asking for a stream name so we'll call it points and create the stream and we see that the data flow server has accepted our request sorry it's created the stream now we need to deploy it and we're going to deploy this orchestration as a
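the declarative composition just narrated can be sketched as a property fragment; the function definition key is spring cloud function's standard one, while the url-expression key and its spel value are assumptions reconstructed from the narration, and the ip address is a hypothetical placeholder:

```properties
# compose the function and the consumer so together they act as a sink
spring.cloud.function.definition=calculatePoints|postPoints

# sketch of the http request function's target url from the narration:
# point service address, /points endpoint, username taken from the payload
# (property name and expression syntax are assumptions, not verified)
http.request.url-expression='http://10.0.0.1/points/' + payload.username
```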
unit we have our source and our sink as two separate applications and we have the opportunity to add more properties we can configure the memory cpu and disk for each of the pods we can set application level properties like we have here and then deploy this stream but in this case we're just going to take what we have and deploy it and the data flow server has accepted our deployment request and we can see that we're creating a couple of pods for the http source and the points calculator sink data flow prepends the name of the stream to these so that we see they belong to the same stream and also a unique deployment id so let's wait for that to finish deploying and then we can run the rest of the demo now the stream has been successfully deployed and we can try our demo and see if everything's working end to end so here we see that we have the http and the points calculator pods running and ready to go data flow prepends the name of the stream which is points in this case and a unique deployment identifier and so now we want to post an order to the cart service and i really love sumatra so i'm going to buy four of them and when we do that we get an okay which is good and we should have 400 points now so let me buy five more and now we have 900 points so everything seems to be working thanks that concludes the demo one of the things we worked hard to do with kubernetes is really make sure that we were careful about breaking things down i think we like to talk about it as the unix philosophy this idea that you have a bunch of small composable pieces that you can put together to do larger things and so you can take a pod and you can use it to run a server where the goal of these higher level things this would be a replica set and a deployment is to keep this thing up and running no matter what meanwhile there are folks that are taking
that same abstraction of a pod and building systems for running stateful workloads things like databases and so we have abstractions like stateful sets you can also do say a database migration while you're doing a deployment by running a pod to do that migration but having it only run until it says that it succeeded similarly if you're doing data centric data flow types of things you may want to have a pool of workers and you can have some other controller talk to the kubernetes api to dynamically size the number of workers you have based on the amount of data that you're trying to process similarly there are folks through projects like kubeflow looking to adapt kubernetes so that it works well for machine learning applications reusing some of these same primitives but with a different context and different type of application in mind [Music] so one of the things that i like to say is that kubernetes is a platform for building platforms and that really comes down to a couple of things number one is these core lego block primitives that you can put together to do all sorts of different things you can use the same pod for doing serving for doing data analysis for doing one-shot jobs that same abstraction can be used in different ways but that's just not enough i think one interesting thing happened as we built kubernetes all of a sudden as it saw success people wanted to bring all these new features and we didn't want the project itself to be a gatekeeper for what features could be part of kubernetes and what features couldn't so we started adding and this was brendan burns who's a good friend and one of the other co-founders of kubernetes he created this thing called a third-party resource that eventually became a custom resource definition and this is a way that you can extend kubernetes so that you're adding new primitives that act just like the built-in
primitives and so more than any other cloud system out there kubernetes is built to be extended you can actually be part of building your own flavor of kubernetes or use any of the specialized systems that are targeted at the types of problems that you're trying to solve so a key aspect of kubernetes is this idea of declarative programming you tell the system what you want and it automatically makes it happen and that's a much more friendly easy sleep-better-at-night way to deal with these types of systems i think a great example here is that as we look to hopefully someday the self-driving car world actually working when you get into your self-driving car you're not going to tell it oh take a left at the next corner or turn right here you're going to tell it take me to the grocery store or take me home and that's what we want out of infrastructure you tell it what you want and it figures out the optimal way to get there and so the key way that kubernetes does this is through this idea of reconciliation you have desired state and every time you look at a kubernetes yaml or object you'll see that there's spec and status spec is what you want status is the current state of the world and so what happens is that spec and status is paired with a program that's running a reconciliation loop and it wakes up takes a look at the spec takes a look at the real world and tries to make some forward progress it reminds me of when i went hiking in the grand canyon with my kids a couple of years ago we got to the bottom and by the way when you go to the grand canyon they say down is optional up is mandatory so we got to the bottom we were climbing our way back out our desired state was to be back at the top of the south rim eating ice cream but we weren't going to get there in one step and so the idea there is relentless forward progress every time you take a step you just want to get closer to your
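the spec versus status loop just described can be sketched as a toy reconciler: observe the current state, take one step toward the desired state, repeat until they match. this is an illustration of the idea, not the kubernetes controller machinery itself:

```java
// Toy reconciliation loop: spec is what you want, status is what the world
// looks like, and each pass makes relentless forward progress toward spec.
public class Reconciler {
    // one reconcile pass: move the current state one step toward the desired state
    static int reconcileOnce(int current, int desired) {
        return current + Integer.signum(desired - current);
    }

    public static void main(String[] args) {
        int desired = 3; // spec: e.g. desired replicas
        int current = 0; // status: what currently exists
        while (current != desired) {
            current = reconcileOnce(current, desired);
            System.out.println("status: " + current + "/" + desired);
        }
        System.out.println("reconciled");
    }
}
```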
goal and that's what kubernetes via desired state and reconciliation helps you do when it comes to your infrastructure [Music] so far we've looked at a business application running on kubernetes one of the most promising aspects of kubernetes however is that it's eminently programmable you can even extend it using spring as tiffany jernigan will show in this next segment hi everyone i'm tiffany jernigan i am a developer advocate here at vmware so let's talk a bit more about kubernetes so kubernetes has a vast api with a lot of extension points one of the api components is a controller so what are controllers the kubernetes docs provide the best description of what a kubernetes controller is so i'm not going to try making one up entirely on my own in robotics and automation a control loop is a non-terminating loop that regulates the state of a system so say you want to make a pour over coffee and you need some boiling water one example of a control loop is the thermostat in the electric kettle if you're using one of those so when you set the temperature you're telling the thermostat about your desired state the actual water temperature is the current state the thermostat acts to bring the current state closer to the desired state then you can use it to make your pour over so in kubernetes controllers are control loops that watch the state of your cluster then they make or request changes where needed so each controller tries to move the current cluster state closer to the desired state so now let's start with the demo we're going to start at josh long's second favorite place on the internet start dot spring dot io so this is the spring initializr so instead of having to write a bunch of the stuff ourselves it will give us some of the code and the pom.xml et cetera on its own so i'm going to give it a group name and i'm going to call it booternetes and the artifact or what the name will also be i am going to call kubernetes controller so
basically that's all we need right now so then i can just generate it so i'm just going to save it here as a zip file so now if we look i can see that so i can just unzip the file so now if we go into the kubernetes controller we can use intellij to open it up and create a project so if i do idea and then do it on the pom.xml cool so now we can dive into it so the first thing that we are going to do is modify the pom.xml that was created we need to add a dependency in there for the kubernetes java client spring integration so i'm going to copy and paste that in here so we're going to be using version 12.0.0 of course you could use the parameters and then add that here but i'm just going to put it like this by default so then next if we do a right click we'll need to reload the project so now let's look at the code okay so here's the code that we got from start.spring.io so as you can see it's just the basic outline of what we need so the first thing that i'm going to create is a shared informer so to start off what are informers informers basically see what objects are there and watch their desired state and their actual state if you don't have one you could constantly query the api but that would result in a lot of redundant calls so the shared informer is basically a cached version of the informer to be faster easier to share among informers and more memory efficient so we'll want to have a shared index informer for both the node and for the pod at least in this case it depends on what you're trying to keep track of so basically what the lister is going to be doing is getting the current nodes and pods that it can find using the informer that we were just creating all right so now i'm going to create the reconciler i'm going to call it reconciler and then we will give it the lister for the node so we can call that the node lister and then i'm also going to
give it the pod lister as well cool okay so then we're going to use a lambda and i'm going to take the request i'm going to use the namespace bk for everything that's going to be running so i have our namespace bk and then so now i want to know what node it is so for our node this is where we use the node lister and i can get what it actually is from the request by getting the name okay so now i'll just print that out so node and then we're going to get the metadata and the name okay so now the next thing we need to deal with is the pod lister so basically for this node or whichever the nodes are i want to know what pods are there so this code isn't going to be like oh do you have a pod that went down or something that came up it's going to be very basic and it's just going to list the nodes and the pods as the controller calls the reconciler so we have our pod lister and then we need to have the namespace that we just had earlier we have list stream and then we are going to get the name of the pod and then for each one of those we're just going to print out what the pod's name is and then lastly we are just going to return a new result okay cool so now we have that reconciler again basically all this one is doing you can have it do a lot more stuff but this one will just basically tell you what the nodes and the pods are every time now we need to have the controller so basically the controller is a combination of everything that we just created above it's where everything comes together you may have heard the phrase where the rubber meets the road so now let's create that so i'll have another bean and this will be a controller so it's going to take in the shared index informers that we created above the shared informer factory and a reconciler so with this we're going to return a controller builder it's going to take in a
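the reconciler's job as just described, resolve the node named in the request, then list the pods on it, can be sketched in plain java with maps and lists; in the real controller both lookups go through the shared informers' listers, and the node and pod names here are stand-ins:

```java
import java.util.List;
import java.util.Map;

// Plain-java sketch of what the demo's reconciler does on each call:
// look up the requested node, then print the pods found on it.
public class ReconcileSketch {
    static List<String> podsOn(Map<String, List<String>> podsByNode, String node) {
        return podsByNode.getOrDefault(node, List.of());
    }

    public static void main(String[] args) {
        // stand-in for what the listers would return from the informer caches
        Map<String, List<String>> podsByNode =
                Map.of("docker-desktop", List.of("cart-0", "busybox"));

        String requestedNode = "docker-desktop"; // request.getName() in the demo
        System.out.println("node " + requestedNode);
        podsOn(podsByNode, requestedNode).forEach(pod -> System.out.println("pod " + pod));
    }
}
```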
default builder so that is our shared informer factory we're going to do a watch on the node that we have next we're going to see whether the pod informer and the node informer have synced we're going to take in the reconciler give it a name so we can call it booternetes controller and then build cool so now we have the main meat chunks of the application so the last thing that we need here is the command line runner basically it just kicks everything off when the application starts so i am just going to copy and paste that one as well so here you can see that it just starts all the registered informers and it runs the controller so now that we have that let's try running it so if we open up the terminal here so right now i'm running kubernetes locally just to be able to test this so what we can do is just do a maven clean package spring-boot run and this will take a few minutes so right now we can see that it's calling the node docker-desktop okay so now if we open up the terminal we can just make sure that we're in the right namespace so i want to be in the namespace bk i have a tool called kubens and so then i can see what i'm in so right now i'm in bk cool and i have k aliased to kubectl so if i do a k get pod we can see that there's something there so that's why it's just listing the node so if i want to create a pod i can just do a k run so if i want to have busybox for instance so now we can see that that's created and if we wait a little bit we should see that it will end up showing up here with our controller and with that we have a working controller there are some obvious next steps we could containerize this controller with spring boot's built-in support for buildpacks we could even turn this application into a spring native application which uses buildpacks to produce super lightweight ultra fast architecture specific binaries thanks for sticking around with me
for this demo and i hope you have a good rest of your day [Music] one of the interesting things about systems like kubernetes is that traditionally when you run a server the set of apis that you have to deal with are the ones provided by your underlying operating system things around file systems and networking and all that with systems like kubernetes and i think there are other classes of systems that fall into this you have a whole other set of apis that you can start to interact with so you have a richer set of building blocks for building your distributed systems for building your applications and this can be as simple as something like a health check in kubernetes you can tell kubernetes that hey every five minutes i want you to run this command in my context or hit me at this particular http path and if you don't get a success then you know something's wrong and you should start the whole self-healing process that's a new interface between the application and the underlying system that fundamentally leads to more reliability and that's a simple place to start but you can go all the way to creating your own controllers where you can orchestrate and actually turn your application from being a single process to something that works with the system to make sure that you're running the right things at the right time to solve the problem that you're working on and this comes in in things like data flow where you need to make sure that you have a scalable dynamic set of workers to deal with the incoming stream of data that dynamism is what kubernetes was built for and so you can embrace that by accessing this whole other set of apis from your workloads from your programs we've got a working system but its http apis aren't secured compressed or even accessible to cors clients to address these kinds of concerns chris sterling will look at spring cloud gateway for kubernetes my name is
chris sterling i'm the senior product line manager for api management within vmware tanzu and my focus really is around how we operationalize spring cloud gateway the api portal and the whole api management experience and so what we're going to show you today is how we've done that on kubernetes creating a spring cloud gateway operator an opinionated version of the gateway itself that even has some additional filters and predicates and then on top of that show you our api portal and how it's able to display the auto-generated apis that we provide all right today i'm going to show you how to install spring cloud gateway for kubernetes which is our commercial version of spring cloud gateway and here we're actually using an early beta version that's coming out very soon if i run the install script this should install the spring cloud gateway operator which is a kubernetes native operator deployed into the spring cloud gateway namespace so we'll go to that namespace right now and show you what the operator looks like when it's on the cluster so the operator here actually deploys a pod and it has a service on it now the service is going to be interesting because we're actually going to expose an open api endpoint for all of the api routes that are going to be dynamically updated on the actual gateway instances that we're going to create later but for now just know that the operator itself is running and waiting for custom resources to be applied to the cluster in yaml files so we're going to look at those now let's go to the gateway in the gateway we actually have a gateway yaml file and we also have an ingress file and in order for ingress to work we need to have contour installed so i'm going to install contour onto the cluster project contour is an ingress controller that you can use with kubernetes it's open source and it also happens that much of the work is done here at vmware all right
so now we have the ingress controller installed and what we want to do is go to our bk namespace we'll do a get all and you'll notice there's a bunch of resources already deployed there are services like cart service and point service which we're going to utilize very quickly here and expose through a gateway so our gateway just to show you for a moment here our gateway is defined fairly simply we actually have a spring cloud gateway kind custom resource and we've given it a name of booternetes-gw that's what it's going to create on the cluster now we're using nip.io which is a dynamic dns service so that we can reach it from the outside world just notice i had an actual problem with my url i need to make it http and through this gateway we are now going to be able to apply the gateway to the cluster in the bk namespace so if i do a k get all piped to grep for booternetes what we should see now is a pod a service two services actually and a stateful set the stateful set is the deployment it's managing the gateway instance pods so now we have one pod because we're using the default of a single node you can actually set this count so why don't we play around with it right now count two all right oops all right so now we've added the count we're gonna apply it again check it out and you'll notice that we have a zero of two now as well and you'll start to see multiple instances come up we actually have one running at this point in time and there should be another one coming up as well all right and the service is running on a cluster ip at this point in time let's go ahead and create an ingress so just to show you our ingress file so you know how contour works we've named it booternetes ingress we're saying in the bk namespace we want to deploy this ingress and we want to give it the url from nip.io that's going to allow it to redirect
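a gateway custom resource along the lines just described might look roughly like this; the api group and field names are assumptions about the commercial operator's schema, only the kind, name, and count come from the narration:

```yaml
# sketch of the gateway custom resource from the demo
# (apiVersion and spec field names are assumptions, not verified)
apiVersion: tanzu.vmware.com/v1
kind: SpringCloudGateway
metadata:
  name: booternetes-gw
  namespace: bk
spec:
  count: 2   # number of gateway instance pods managed by the stateful set
```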
any traffic coming in on the ip address with that particular name to be redirected to this gateway instance and in this case the service name is booternetes gateway or gw and it runs on port 80. so what we're gonna do right now is k apply the ingress okay if i do a k get ingress you'll notice it has our ip address and it also has our new host name that we're using let's see how our booternetes is doing so we actually have an instance running so our pod is there and let's see if our gateway is running so we're going to use the management port you'll notice that the main apis are available on port 80 but on port 8090 which is not exposed to any ingress you can do a port forward on the pod and then go to a browser and when i go to the browser i can see the health of that particular pod its status is actually up so we're healthy let's go ahead uh oh actually i'll show you something else so currently the gateway's routes actuator endpoint is not showing any routes defined on the gateway at all so we're going to dynamically update that for the cart service so let's go back let's go into the cart service actually i can just show you here the cart service actually needs a route config so the kind of that custom resource is spring cloud gateway route config and we're going to call it the cart service route config it's going to be pointed at the cart service which is the name of the service on the cluster in the bk namespace so there it is and on there we're going to have a couple of different api paths one of them is api cart coffees and then we also have api cart orders now by default we automatically strip off the slash api when we route the traffic through the gateway to the cart service itself so the cart service is only going to see slash cart slash coffees and slash cart slash orders so that's all done by default the orders is a post
The orders endpoint is a POST, so that we can add some orders later, and you'll also notice that you're able to add a data model to this and define it: in this case, when you create an order you say which coffee, username, and quantity you want, and all of that is provided in the route configuration file. OK, and we're also going to deploy the point service, and the point service is going to have its own route config, and in there we have /api/points with a username. So in this case I'm going to see how many coffee points Josh has in his account. And you'll notice that both of these are in their own repos: the point service route config is in the point service repo, and the cart service route config is in the cart service repo, so each one of these can have its own life cycle and be able to evolve its API separately. All right, let's go back over and apply some route configs; in this case we're going to do the cart service route config. The other thing that we need to do, though, is map it onto a gateway instance, so I'll show you what a mapping file looks like. Really what it is saying is: OK, you now have a route configuration file that's defined for a service, you have a gateway that's been defined as well, and we'll now put these API routes onto this gateway. That's all that we're seeing here: the gateway ref is here and the route config ref is here, and we're just saying "now map them on," and that's going to actually define the routes on the gateway. All right, now we're going to apply that mapping file to the bk namespace, and that's going to map the cart service route config over onto the booternetes gw gateway instance. Now that we have the route config and the mapping applied to the bk namespace for the cart service, we should be able to see some routes defined (yes, they are) on the gateway. OK, and let's go ahead and do the same thing for the point service.
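A mapping file of the kind described above ties a route config to a gateway instance; a minimal sketch, with hypothetical resource names, might look like this:

```yaml
# Hypothetical sketch of the mapping resource described above.
apiVersion: "tanzu.vmware.com/v1"
kind: SpringCloudGatewayMapping
metadata:
  name: cart-service-mapping        # name is an assumption
spec:
  gatewayRef:
    name: booternetes-gateway       # the gateway instance (name assumed)
  routeConfigRef:
    name: cart-service-route-config # the route config applied earlier
```

Applying this into the namespace (e.g. with kubectl apply) is what actually defines the cart service's API routes on the gateway.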
We'll do the route config and then the mapping as well. That should allow you to see, in the routes actuator endpoint, the API points endpoint too, so now that is available on the gateway as well. All right, so now I'm going to do a few curl commands and test this all out, because we have everything installed. Let's see here. Our first command: let's see what the list of coffees is. Oh wow, we got a lot of them, the procaffeinator, al cappuccino, a lot of great different things that we can order here, so that's great. And sumatra is one of them; I really like sumatra. So what we're going to do next is send in an order. Well, I think Josh needs some coffee, and let's check out how many points Josh has. Oh, Josh has 400 points already; he's a heavy coffee drinker. I think he needs some more coffee, so let's go ahead and order a sumatra, because sumatra is the best coffee of all. And now that we've ordered some coffee, let's see if we got any more points for Josh. Oh, 800 points now all of a sudden, so Josh is a veteran of this coffee sumatra thing that we're doing here. All right, Josh, great job, drink your coffee. Now that we have all of this information, we've used the APIs through the Spring Cloud Gateway for Kubernetes instance and everything is working nicely. What would be really cool is, if I were an API consumer, maybe I'd want to see what the APIs look like in a cool portal. So I'm going to go ahead and install that right now. We actually have an api-portal namespace; let's go there. You'll notice that there is nothing in there at this point in time, so let's go install the API portal, which just got released a couple weeks ago. Pretty much the same as what we did over on the Spring Cloud Gateway when we installed the operator: now we're going to install the API portal server. It's going to create a pod and a
service that we can put an ingress around, so that'll be our next step: to expose that service outward using nip.io. And voila, the API portal has been installed. Let's do kubectl get all. Oh, there is our API portal server, and it's currently running, so let's go ahead and apply the ingress. Just to show you what our ingress looks like, let's go into the gateway directory and cat the api-portal contour yaml. Pretty simple: we're going to put the API portal at the nip.io address. We apply that into the api-portal namespace, we do a get ingress, and voila, another ingress is created. So now we can go over... well, actually, I'd better copy that really quickly... go to our portal and see if it has all the information we need. Oh, it's using the defaults. We need to point it at our Spring Cloud Gateway instance now, so we're going to do a fairly simple command where we set an environment variable on the API portal. Let's go ahead and do that. There we go. What we're doing here is setting the environment on the API portal server deployment: we're setting the API portal source URLs to the OpenAPI endpoint that's on the Spring Cloud Gateway operator. Once we've done that, it should restart the pod with the new information about what sources it should be pulling from. So let's go ahead and do that. Oh, actually, while we're doing that, let me show you how cool it is that we have this OpenAPI endpoint. All right, if you go to a browser, you'll see that OpenAPI endpoint right here. You'll notice that from the route configuration files that were dynamically added to the gateway, we automagically generate this OpenAPI version 3 specification documentation for your APIs, and based upon the types of filters and predicates that you add, we'll even add in some additional information for you.
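Pointing the portal at the gateway's OpenAPI endpoint, as described above, amounts to setting one environment variable on the portal deployment. The deployment name, variable name, and URL below are assumptions sketched from the talk, not the exact demo command:

```yaml
# Hypothetical fragment of the api-portal-server Deployment after setting
# the env var, e.g. via something like:
#   kubectl set env deployment/api-portal-server \
#     API_PORTAL_SOURCE_URLS=<operator openapi url> -n api-portal
spec:
  template:
    spec:
      containers:
        - name: api-portal-server
          env:
            - name: API_PORTAL_SOURCE_URLS
              # the operator's aggregated OpenAPI endpoint (illustrative URL)
              value: "http://scg-operator.spring-cloud-gateway.svc.cluster.local/openapi"
```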
For example, let's say I had a rate limit on one of the endpoints here, the coffees endpoint on the cart: it then shows you that the response code could be 429 at times, if you were to hit that rate limit, and it tells you about a header, X-Retry-In, with the milliseconds you should wait before you try again. OK, so we auto-generate that OpenAPI documentation. Ooh, there are the booternetes APIs all of a sudden showing up. All right, so now I can see if I can list my coffees. Let's go ahead and try it out. We executed... oh, there are all my coffees. The jitterbug is one of my favorites, so great, maybe Josh wants to have one of those later. So with that, we've shown you around Spring Cloud Gateway for Kubernetes: we applied some route configurations for our cart service and point service, we mapped those onto our gateway instance, and then we showed you how those show up automagically as OpenAPI documentation in the API portal. Thank you very much. A lot of times people will ask me, what is the future of Kubernetes? The truth of it is that as a community, and actually as a vendor, our goal is to make Kubernetes as boring as possible. Good infrastructure is boring: you don't want to think about it, you want to focus on your problem, the thing that's actually going to move your business, move what you're doing, forward, and so you really want Kubernetes to fade into the background. Our goal with Kubernetes is to actually make it disappear and let you focus on getting to your work, so that you can do what you need to do. So I think Kubernetes is great, and it's been the keystone for this entire ecosystem that you'll see through things like the Cloud Native Computing Foundation. If you go to the CNCF site, you'll see this landscape page, and it'll be a sea of logos of all these projects that are adjacent to Kubernetes. This stuff is awesome, and it's also horrifying, right? In my mind, I like to call this beautiful chaos, because this is the sign of innovation. There's a ton of
folks doing a ton of really interesting things, but as a user, as a consumer of this, it's incredibly intimidating, and so you feel like you have to know everything before you do anything. Our goal at VMware is to help, to really be a gentle introduction into this, so that Tanzu provides a great way to get started and to build your skills, and as you start hitting problems that go beyond what's built into our offerings, we want you to have access and the right grounding so that you can take advantage of this incredibly thriving ecosystem of other vendors, of partners, of open source projects, and really get the most out of the cloud native computing landscape. And now, to help us wrap things up, it's Nate Schutta, who looks ever upward to the opportunities implied by platforms like Kubernetes. All right, it isn't easy to architect cloud native applications; there are a lot of moving parts. Distributed applications require a fair amount of plumbing: we're going to need monitoring and circuit breakers, contracts, gateways, streams; we're going to need to externalize our configuration; I'll probably want some functions, service discovery, load balancing, documentation; the list goes on and on. We can't afford to reinvent the wheel on every single project. At the end of the day, our customers want features and functionality, and that's where we should be focusing our energy and efforts. Now, there's any number of approaches we could take to make this happen. We could just spin up our own hardware; we did that for quite a while in this industry. We could use some automation to make it easier to spin up that hardware. We could use containers to abstract away even more of those bits and pieces. We could use a platform that hides even the containers from us, and we could go all the way to serverless, where I don't have to deal with any of that: I just write a little bit of code and away we go. Now, the further down this stack I go, the more flexibility I have and the fewer
standards, and I can do whatever I want; if I need a very specific version of something, then I need to get closer and closer to bare metal. And of course, the higher up this hierarchy I go, the less complexity there ultimately is and the more operational efficiency we get. The trade-off is that I don't have as many options: I don't have every language under the sun, and I have restrictions on, perhaps, how long something can execute. But it's all about the trade-offs that we need to think about as engineers. In our ideal world, we want to push as many workloads as high up the stack as possible; we want to work with these higher-level abstractions. That has been one of the biggest shifts I've seen in my career: we have these bigger blocks to work with today, we have higher-level abstractions. Anyone who's been in software long enough has probably heard someone at least paraphrase this famous Alan Kay quote: people who are really serious about software should make their own hardware. Now, we've come a long way since then. Obviously there are exceptions that prove that rule; there are some instances where, yes, you probably still should build your own hardware, but they're the exceptions. We've covered a lot of ground today, everything from gateways to teaching our platform new tricks: we've baked in some resiliency, we've externalized some configuration, and we've talked about one of my all-time favorite subjects, coffee. But it's important to take a step back and ask yourself why this matters for you. Why would you want yet another layer of abstraction? My good friend Sam Newman had this great tweet a couple years ago where he said: haven't we just made things worse? We have all these layers that we now need to maintain and patch. And that prompted a great response from Josh, who said: well, that's why we want these platforms, so that we can just focus on the app and let the platform deal with all these other things. One of the most dangerous phrases you hear in
organizations is "that's how we've always done it." That just won't work today. Whether it's convenient or not, business cycles have sped up; we have to adapt. We need to adopt the always-be-changing mindset; we need to deliver code in days or weeks, not months, not years. I remember the first SpringOne I attended as an employee: a gentleman from Scotiabank got up and talked about their journey from the traditional quarterly release cycle to doing thousands of releases. Now, to be very clear, that does not happen overnight. This isn't something we just decide to do on a random Tuesday and by Thursday everything's good. It took a huge change in culture and a lot of engineering effort and discipline, but it can be done. So how do we make these changes? If you ask a software engineer how to improve something, you can pretty much guarantee their answer is going to be "make it look more like software engineering," and that's certainly what we've seen with DevOps: let's make operations look more like software engineering. Now, that of course means we as developers become the first line of production support, and when you're the one getting paged at three in the morning, you ultimately are going to write more reliable software. Obviously I'm oversimplifying quite a bit, but by making this a more repeatable, automatable process, we've been able to release more often, which has given us yet another nested feedback loop. As the agile process teaches us, I can demonstrate progress and make adjustments based on what my customers are actually seeing, and that's an excellent feedback loop which ultimately leads to better software. But it's not the only feedback loop: I as a developer can work with my operators to reduce friction in the deployment process, and my operators can work with us as developers to make sure we're baking in fault tolerance and reliability. This also encourages us to work in smaller batch sizes, which ultimately gives us less risk and makes it much easier and simpler for us to run experiments, try
things out, and do A/B testing, and allows us to be much more responsive to our business partners and changes in the business environment. Now, this does lead to increased developer velocity, which is fantastic. It also leads to higher-quality production environments, which ultimately leads to better software, which gives us more business value, and that's really what this is all about. We need to get code to production as quickly as possible, with high quality and reliability. One of the things I would love to see for our industry is to move beyond measuring ops on uptime, devs on velocity, and testers on number of tests completed, and instead focus on how quickly we can get features built and put into production. It's ultimately about us delivering business value; it's "we as a team own this." It makes me think of one of my all-time favorite books, The Grapes of Wrath, which for me, anyway, was all about this sort of social change from the "I" to the "we," and that's what we need to embrace in software. This is a collaborative game; we need to work together to produce better software. Now, at the end of the day, infrastructure is different than it used to be. There's been a massive shift in our industry away from these homegrown servers, this bespoke, artisanal approach, which is fine for coffee, not so great when it comes to having reliable, repeatable infrastructure. It was painful. Now things have changed, and this is a good win for us. Servers have become a commodity item; we've used off-the-shelf software and chips to replace customized things. What this has led to is ultimately a democratization of infrastructure: what in the past would have taken many months and countless emails and meetings can now happen in a matter of seconds. That's a huge, huge win for us. The other side of that coin is that it's very, very easy for me to spin something up, forget about it, and forget to turn it off, only to see my cloud bill get bigger and bigger every month. This quote always makes me laugh:
"we dropped our cloud cost by 61 percent by simply going in and turning off things we weren't using." What this means for us as developers is that we have more responsibility. These issues were always there; we just didn't have to think about them in the past, because we could trust our operators to handle them for us. That has led to a paradox of choice for us, and if you're not familiar with that idea, please go hunt down the TED talk or the book that it's based on. We have to understand that this democratization demands more of all of us, and so, to paraphrase one of the founding fathers of the United States, a well-informed developer is a prerequisite to successful cloud deployments. The other side of this that we have to think about is: what do we want our developers focused on? I was talking to a colleague of mine, and he made this point: with this approach, some companies discover that "my developers now need to be certified in their application framework of choice, their cloud provider, as well as their container orchestrator." Is that really what we want them spending all their time and energy on? The moral of the story is that we need to be prepared and aware of that, and ultimately remember that as developers, maybe we should be careful what we wish for. So yes, we have more control; we also have more accountability. Let us never forget Uncle Ben's wise words: with great power comes great responsibility. We can shape our platform, we can continue to move up the abstraction hierarchy, but never forget: it's about us getting features and functionality into production in a reliable, repeatable manner. Thank you, good luck, and back to you, Josh. Thanks so much for watching; we hope you got something out of this and that you had fun. We sure did. We want to thank you for coming, and we encourage you to please follow us on Twitter. We've also put all of the code, including a working Kubernetes pipeline for all the different modules, on GitHub, so be sure to check that out as well. To the code cave, Robin!
Info
Channel: SpringDeveloper
Views: 8,400
Rating: 4.9722223 out of 5
Keywords: Web Development (Interest), spring, pivotal, Web Application (Industry) Web Application Framework (Software Genre), Java (Programming Language), Spring Framework, Software Developer (Project Role), Java (Software), Weblogic, IBM WebSphere Application Server (Software), IBM WebSphere (Software), WildFly (Software), JBoss (Venture Funded Company), cloud foundry, spring boot, spring cloud
Id: wu38Fm56wew
Channel Id: undefined
Length: 113min 41sec (6821 seconds)
Published: Thu May 13 2021