Develop and test your AWS-powered Spring Boot application locally by Anca Ghenade @ Spring I/O 2023

Captions
Thank you all for coming. I know it's late; we've all had two very exciting days and I bet we're all tired by now, which is why I appreciate you even more for coming to the closing keynote of track two. I was going to make a joke about saving the best for last, but it wouldn't be fair to the other speakers, so I'll leave it at that. By now you probably know from the agenda that we're going to be talking about developing and testing your AWS-powered Spring Boot application locally.

Before we jump into anything, I want to tell you a little bit about myself. My name is Anca and I'm a DevRel engineer at LocalStack. If you ever want to get in touch with me, it's anca.ghenade at localstack.cloud, and my Twitter handle is the same as my GitHub account. You've probably seen me around here in the past two days, but I want to give you a little background on my experience. I've been a Java developer for about seven-ish years. I started off in software development teams using Java EE, building enterprise applications for larger companies, and soon after we transitioned to Spring Boot. We also had a bunch of Quarkus projects, but Spring Boot is really the thing that I liked, and that's why I'm here today.

Before joining LocalStack I had little experience with what it means to develop applications using cloud services such as AWS, so I had to ramp up and learn fast. This talk also documents my learning journey and how I dealt with building AWS-powered applications as a beginner. When I joined the company I asked myself: how does one go about developing applications using AWS services? Throughout my learning journey I discovered a sort of pattern, and it goes something like this.

You have your software ninja who is tasked with developing a new web application on the AWS cloud. This is an exciting time; it could be the new Netflix application, who knows. People are researching how to store their data, how to parse it, workflows; it's exciting in the beginning. But once you dive in and start developing your code (and where else would you do that than on your local machine?), you soon find out that there are a lot of dependencies on resources in the cloud, which makes things hard. You cope with it and try to push forward, but at a later point you realize it's not so easy anymore, because the development and testing loop is getting extremely slow and tedious. Every local change needs to be built, packaged, and uploaded to the cloud for testing, and then you need to see if it works. God forbid you're a beginner and you introduce some bugs, which you would need to fix in maybe two or three iterations. The frustration is growing at this point.

Now the software ninja has a red build on one of their feature branches, but they cannot effectively test or debug their code in a CI/CD pipeline, because you can't attach a debugger; you basically have to rely on logs, try things out, and hope for the best. The whole team is using git flow for development, which means there is one CI build per feature branch, hence an explosion of different environments required for development. If you have an infrastructure team that has to deal with that, it's getting even more painful, because at some point you'll have a bunch of grumpy people you're afraid to interact with, because they need to help you all the time.
And lastly, the ninja's manager realizes that the resources they put into developing these AWS applications are not producing the value they want. The value for money is bad, because there's a spike in costs from all the environments being spun up, so that's another problem you might have to deal with. All in all, the developer experience kind of sucks. Your hands are tied, you might depend on other teams for infrastructure, and in certain companies you probably don't get access to your AWS dashboard right away; you might have to reach a certain level of seniority or have special access privileges.

So what about testing your AWS-powered applications; is that getting any better? This is the serious slide, where you can see the context of testability of cloud app deployments represented as a pyramid. At the top, where the shape is very narrow, you have mocking, which basically means you mock out your cloud APIs. You still need to put in the work to define the behavior, and even if you have some library that helps you with that, it will only get you so far. You put in a lot of work for not a lot of coverage, so this is not the most efficient thing you can do.
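To illustrate why mocking sits in the "lots of work, little coverage" corner: with a plain mock of an AWS SDK client, every behavior has to be hand-specified, and nothing checks it against the real service. A minimal sketch of my own, assuming Mockito, JUnit 5, and the AWS SDK for Java v2; none of these names come from the talk:

    import static org.mockito.Mockito.*;

    import org.junit.jupiter.api.Test;
    import software.amazon.awssdk.core.sync.RequestBody;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;
    import software.amazon.awssdk.services.s3.model.PutObjectResponse;

    class MockedS3Test {

        @Test
        void uploadAgainstHandWrittenStub() {
            // Every interaction has to be stubbed by hand...
            S3Client s3 = mock(S3Client.class);
            when(s3.putObject(any(PutObjectRequest.class), any(RequestBody.class)))
                    .thenReturn(PutObjectResponse.builder().eTag("fake-etag").build());

            // ...and nothing verifies that real S3 (bucket policies,
            // notifications, error responses) behaves the same way.
            s3.putObject(
                    PutObjectRequest.builder().bucket("pictures").key("cat.jpg").build(),
                    RequestBody.fromString("not really an image"));
        }
    }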
Next up you have service emulation, which means replacing individual services with local versions. If you're planning to use services like Postgres or MySQL, that's fine, but what if you intend to use DynamoDB, Aurora, or Neptune? Those are AWS products that are meant to be managed and run on the AWS cloud, so at a certain point you will run out of luck. The third slice of the pyramid is cloud emulation, where you have a full cloud emulation with service integration; spoiler alert, this is where LocalStack comes in. I haven't introduced it yet, but my t-shirt is probably giving it away. Your last option is a staging environment. This obviously offers the highest level of fidelity, because it's an actual cloud environment, but it will not scale to every feature branch; you don't want to spin up all your resources multiple times for every feature you need to test.

On top of that, you might get additional pain points if you work in environments that are very concerned about things like security, regulatory compliance, data sovereignty, and risk management; these are usually banking or healthcare environments. These are areas where you solve exciting problems, but again, your hands are tied. As a developer you might want to master certain services at some point, give your career a sense of direction, build experience, and get better at all of these things, but these environments might make you go through approval processes just to be allowed to spin up even the simplest service.

So this is where LocalStack, the third slice of the pyramid, comes in. If I had to define it in a few words, it would be a fully functional local cloud stack, or in two words, a cloud emulator, born out of the exact concerns I just mentioned. And yes, this is an actual depiction of our CTO starting LocalStack when he was at Atlassian. Everybody asks what LocalStack is, but nobody asks what it can do for you.

LocalStack enables users to have a very efficient development and testing loop for their cloud applications. It ships as a Docker image, which makes it very easy to install and start up; in this day and age Docker is pretty much a given in software development, except for those strict environments I just mentioned, but there's a workaround for that too, and we're actively helping companies with it. At the moment LocalStack supports around ninety-ish services, and the number is constantly growing; I think last night I counted 91 services on our service coverage page. They vary in field of expertise, let's say: from compute, such as Lambda, which is actually one of our most popular services (and we're pretty proud of that, because the fidelity is very high, close to 100%), plus EC2, ECS, EKS; to various databases, again DynamoDB, which we're proud of as well; messaging services such as SQS; managed Kafka streams and Kinesis; but you can also get some more sophisticated or exotic APIs, such as Athena. LocalStack also enables CI integration and advanced collaboration features. There's a feature called Cloud Pods, which basically lets you take a snapshot of the state of your LocalStack instance and share it with your colleagues for debugging purposes, or so that somebody can pick up where you left off. All in all, it's redefining the way cloud applications are developed across the life cycle.

Obviously I'm not here just to do a sales pitch about LocalStack; there is also a demo of running your Spring Boot application on AWS versus LocalStack. There's a quote I saw, I think on our site, before joining the company: "your application won't even know the difference". I thought that was very exciting, it's a good motto to have, and it's what I'm here to show you. This is what we call a 10,000-foot view of LocalStack. You can see different environments (I cut off the continuous integration stage, because that's not what we're going to be doing today), and you can see your application migrating through different stages: from LocalStack on your developer machine on the left-hand side, it goes to a CI pipeline, where you can also use LocalStack, and then you can safely deploy it to AWS. Don't go directly to production, please; have a staging environment. And don't use LocalStack in production: it's a development and testing tool.

About my sample application, a little business logic up front. I didn't mention in the beginning that I used to work for a logistics and transportation company, so it only made sense to use a shipment entity. Each of these shipment entities has a sender and a receiver, and each participant in a shipment has an address. The shipment has a well-defined weight, because you need that to calculate for your plane or ship, obviously, but we need to add the size as a picture: for some reason the size of the package you're sending cannot be determined, so we will be using the International System banana for scale. I think you're all familiar with that; how else would you know, right?
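To make the entity concrete, a shipment could be modeled roughly like this; the field names and types are my own illustration, not necessarily those of the demo repository:

    // A shipment: sender and receiver (each with an address), a well-defined
    // weight, and a picture that establishes the size, banana-for-scale style.
    record Address(String street, String city, String country) {}

    record Participant(String name, Address address) {}

    record Shipment(
            String id,          // key of the item in the DynamoDB table
            Participant sender,
            Participant receiver,
            double weight,      // needed to plan for the plane or ship
            String imageKey) {} // S3 object key of the uploaded picture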
Let's have a quick look at the architecture of the application. I used a React app for the front end; it displays our information, basically a list of shipments, and it helps us upload pictures, because that's what we want to do. The back end is handled by a Spring Boot app, which contains the endpoints to communicate with the React app and also the services to interact with all the AWS resources. Updating and creating entities is only possible by calling the REST endpoints directly; you cannot do that in the front end, unfortunately, as I was a little discouraged from implementing a full form for editing. Our entities are stored in a DynamoDB database and the pictures are stored in an S3 bucket; that's pretty basic. We have configured a notification configuration on the S3 bucket that triggers a Lambda (a Maven project) which validates and processes the pictures. Validation means checking the extension and making sure it's a picture and not a text file. Once a picture is validated it gets a watermark; if it's a non-compliant file, it is replaced with a placeholder carrying a subtle message letting the user know they need to pick a different file. From the Lambda function, a message with the shipment ID is sent to an SNS topic. If you're not familiar with the concept: a topic receives a message and fans it out to a bunch of subscribers. In this case we have only one subscriber, a simple queue on SQS, and it comes full circle to our Spring Boot application, which listens to the queue and receives the messages. Once it receives a message, it uses server-sent events to let the front-end application know that it's time to refresh and pull the edited picture, or the placeholder in that case.

So let's jump in and have a look at the application in the IDE. I have these separate tabs, each pointing to a specific folder: one for the front end, one for the Terraform folder, one for the back end, and one for the Lambda function. Let's quickly look at the Spring Boot backend application. This here is the Lambda validator; we have our bucket name defined here, and the important part is the configuration: how we receive the endpoints for our different clients from two different sources. We will be leveraging one of the core features of the Spring framework, which is profiles. If you're not familiar with them: you can configure your beans to behave in different ways based on the environment they will live or perform in, which could be dev, test, staging, production, whatever you need. In this case we have the production environment using the real AWS services, and the dev environment using LocalStack. We have our configurations defined in application YAML files; I'm pretty sure that if you're a Spring developer, you're familiar with this. Let's start with the production configuration file: we have our credentials defined as environment variables, which Spring Boot will pick up, and of course very basic endpoints for DynamoDB, S3, and SQS. (Sorry, a moment while I sort out the presentation view; can you see? OK.) So anyway, you have your credentials defined here, and your endpoints.
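As a sketch of what that profile-driven wiring can look like: this is my own minimal illustration, assuming the AWS SDK for Java v2, and the property name aws.s3.endpoint and the region are placeholders rather than the demo project's actual values:

    import java.net.URI;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.Profile;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.s3.S3Client;

    @Configuration
    public class S3ClientConfig {

        // prod: default endpoint resolution; credentials are picked up from
        // the environment variables defined in the production profile
        @Bean
        @Profile("prod")
        public S3Client s3ClientProd() {
            return S3Client.builder()
                    .region(Region.EU_CENTRAL_1)
                    .build();
        }

        // dev: the same client type, but pointed at LocalStack; the endpoint
        // comes from application-dev.yaml (e.g. http://localhost:4566)
        @Bean
        @Profile("dev")
        public S3Client s3ClientDev(@Value("${aws.s3.endpoint}") String endpoint) {
            return S3Client.builder()
                    .region(Region.EU_CENTRAL_1)
                    .endpointOverride(URI.create(endpoint))
                    .forcePathStyle(true) // path-style addressing works best locally
                    .build();
        }
    }

The rest of the application only ever asks for an S3Client bean, which is why, as we'll see, no code change is needed to switch between AWS and LocalStack.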
In the configuration file for the development environment you have the same endpoints, this time pointing at LocalStack; we will get to that in a second. On top of that there are some general settings: the port it runs on (8081), some limits on the size of the file you can upload, and the logging, but that's not really relevant at this point.

Let's run mvn clean package on our Lambda function; the Maven Shade plugin creates an uber-jar that packages all the dependencies into it, and we can see it in the target folder, right here. We will be using Terraform as our infrastructure provisioner. Are you using Terraform? How many of you? OK, so I'm pretty sure you're familiar with it. If you're not: Terraform is an infrastructure-as-code tool by HashiCorp where you define the resources you want AWS to spin up in a declarative manner, all in a configuration file. You can see we define the provider, in this case AWS, and there's a generic region definition, an S3 bucket, DynamoDB, and so on.

Let's start with terraform init in our Terraform folder, because this might be slow: at home it takes ten seconds, but I realized that with the hotel Wi-Fi it might take up to a minute, so it's good we started with it. We can look at the Terraform configuration file while we wait. You can see the definition of the DynamoDB table, and then the population of the table, because I'm providing some entities as sample data that I don't want to add manually. We define a bucket where the Lambda zip will live, and the source-code location where it will be picked up from; some attributes for the Lambda function; the trigger for S3 (we also need to give S3 permission to call the Lambda function, and there are a lot of permissions and roles involved); the topic definition; and the queue.

Our init is done. We'll run terraform plan, which basically creates a plan for the resources; usually it checks against the current state of the services, and if there is a state it applies the changes. In this case there is none, so it will just spin all of these up. Then we do terraform apply with -auto-approve, because there's no point in waiting to confirm it. This will take another minute, so we can start our front-end application with npm start; it starts on localhost port 3000. I'm running everything locally because it's easier for demonstration purposes; normally you would have your Spring Boot application in a container, but at this point it doesn't really matter, since the configuration is still provided to all the beans in a similar manner. I guess I'm taking off a layer of complexity here.

Oh no, I'm getting a network error here. OK, let's try terraform destroy. Have you ever seen this before? "Unable to validate the following destination configurations", hmm. Let's try again. Yes, this can happen, and it happened; it was working perfectly fine half an hour ago. I guess I didn't pray hard enough to the demo gods. The trigger definition now, it's the bucket ID... Terraform, what? That was a little strange, because it did not complain the first time around. Let's try another apply. This is because I changed the theme of the IDE, I knew it; I tried it on black, and now... not cool.
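For context, the wiring behind that destination-configuration error is the pair of Terraform resources connecting the bucket to the function: the notification configuration and the permission that lets S3 invoke the Lambda. A rough sketch of what that typically looks like; the resource names here are invented for illustration and need not match the demo repository:

    # Allow the S3 service to invoke the validator Lambda.
    resource "aws_lambda_permission" "allow_bucket" {
      statement_id  = "AllowExecutionFromS3"
      action        = "lambda:InvokeFunction"
      function_name = aws_lambda_function.validator.function_name
      principal     = "s3.amazonaws.com"
      source_arn    = aws_s3_bucket.pictures.arn
    }

    # Trigger the Lambda whenever a new object lands in the bucket.
    resource "aws_s3_bucket_notification" "on_upload" {
      bucket = aws_s3_bucket.pictures.id

      lambda_function {
        lambda_function_arn = aws_lambda_function.validator.arn
        events              = ["s3:ObjectCreated:*"]
      }

      # Without the permission in place first, the destination check fails.
      depends_on = [aws_lambda_permission.allow_bucket]
    }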
It's fine, it's fine. I didn't prepare the dog-surrounded-by-flames meme, I really... oh look, 19! [Applause] 19 resources added; that was lucky. Stuff like this happens, right? So you need an environment where you can try all of these things without having to wait a long time for AWS; this also reflects one of the issues developers have.

OK, so the front end is up; let's quickly start the back end. We run mvn spring-boot:run and tell it to use the production profile, because we want to use the real AWS services. That didn't set us back too much. Now our Spring Boot app has started, and you can see a list of shipments; each one has an ID, a sender, a receiver, and the weight, and now we need to add a picture to define the size. We have to wait a few seconds while it's being processed, and then the React app will be prompted to refresh. Let's check the logs... nothing is happening... OK, you can see here that the message has been received with the shipment ID. That means our application has refreshed on its own, and you can see there's a cat the size of three bananas, with a watermark saying it's been approved for banana-for-scale. Let's add a few more pictures and see what happens. We'll add a package, and you can see that working with real AWS services can be slow. Now we have two pictures; let's try a wrong format. I prepared a text file here, and as you can see, the subtle message is there: the user knows their file was wrong and they need to replace it. Now we can delete some entities we don't need anymore, and that was pretty much it for the front-end functionality part.
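The refresh flow we just watched (an SQS message in, a server-sent event out to the React app) could be sketched along these lines in the backend. This is a minimal illustration, assuming Spring Cloud AWS provides the @SqsListener annotation; the endpoint path, class name, and queue name are my own placeholders:

    import java.io.IOException;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import io.awspring.cloud.sqs.annotation.SqsListener;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

    @RestController
    public class ShipmentUpdatesController {

        private final List<SseEmitter> emitters = new CopyOnWriteArrayList<>();

        // The React app subscribes here and waits for refresh prompts.
        @GetMapping("/updates")
        public SseEmitter subscribe() {
            SseEmitter emitter = new SseEmitter(Long.MAX_VALUE);
            emitter.onCompletion(() -> emitters.remove(emitter));
            emitters.add(emitter);
            return emitter;
        }

        // Fired when the Lambda's SNS message fans out to the SQS queue;
        // the payload is the ID of the shipment whose picture was processed.
        @SqsListener("update-shipment-picture-queue")
        public void onPictureProcessed(String shipmentId) {
            for (SseEmitter emitter : emitters) {
                try {
                    emitter.send(shipmentId); // tell the front end to refresh
                } catch (IOException e) {
                    emitters.remove(emitter); // client went away
                }
            }
        }
    }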
Now let's try to run all of this on LocalStack and see how that works out. We go to the Terraform folder and do a terraform destroy, just to make sure there's no confusion and everything has been cleaned out. What I really want to show you is that we're going to run the exact same Terraform configuration file against LocalStack, so there's not a lot of need for new configuration. We will, however, pass an environment variable to let it know that this is the dev environment, since the Lambda function needs to configure its clients accordingly. This should take a few more seconds; elevator music. By the way, we won't be using Terraform directly with LocalStack; we'll use a command-line tool called tflocal, which is basically a lightweight wrapper around Terraform: it uses Terraform and just configures the correct endpoints.

OK, 17 resources have been destroyed. This is odd, there were 19; you see what I mean, it's kind of strange. Let's clean this up, getting rid of all the files that keep the state, and do a tflocal init in the same folder; you can see we're using the same file here. This might take another minute or so, so in the meantime we can do a short recap with the slides. What have we learned so far? We've been using bean configuration, taking advantage of the @Configuration annotation for configuration classes. The AWS SDK for Java helps us interact with the AWS resources, and that won't change; we won't make any code changes to switch to LocalStack. Spring profiles are basically doing the heavy lifting here, helping us configure our beans according to these two environments. And of course, running the same Terraform configuration file helps us maintain consistency across all of these environments.

Let's see what the status is. OK, tflocal plan... oh, before I forget, I need to start LocalStack. I'll start it using Docker Compose; we have this very basic configuration, you know, the classic port 4566, and the DEBUG flag for more detailed logs. Let's start this up, otherwise there's nowhere to apply our infrastructure changes to.
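That Compose setup is essentially the minimal one from the LocalStack docs; roughly:

    services:
      localstack:
        image: localstack/localstack
        ports:
          - "4566:4566"            # the classic edge port all services share
        environment:
          - DEBUG=1                # more detailed logs, as used in the demo
        volumes:
          # lets LocalStack spawn the separate Lambda execution containers
          - "/var/run/docker.sock:/var/run/docker.sock"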
You can see here we're passing ENV=dev, so it knows the environment is development. I keep forgetting: do we add the variable in the plan phase or in the apply phase? Do you guys know? Oh yeah, well, I wanted to bring this up. So it's just the plan that needs it; see, we learned something. You pass it in the planning phase, and the apply just picks up the plan, so you don't need to do it twice. And if we check the LocalStack logs, we can see that stuff is happening, stuff is being created here: all the same infrastructure. The jar is the same, and it will be picked up from the same location where we said it would be. Just a bit longer... and see, we have the exact same 19 resources being created on LocalStack.

Oops, I forgot to turn this off: the application is polling the queue, and since the queue is gone now, it's giving some errors here, so let's start it for the development environment. It seems that everything is fine, and we can go to the same front-end application; you can see we have all the same entities we had the first time, using the real AWS services. Now we can add more pictures again. The first run might take a bit until the Lambda container spins up; you can see we now have a banana on a scale, for scale. We can also check the Docker containers: since v2 there's a new Lambda provider that uses a separate Docker container for the execution, so once that one is up, it will be way faster than what we saw before using AWS. We can delete some of these, just for fun, and we're left with just one entity here. And since this is easy to clean up, you just need to turn off your Docker container and that's it: you don't have these resources anymore, so you don't have to fear unexpected costs.

Slides! OK, we already did the recap while we were waiting; it is what it is. So, switching to LocalStack: I hope you realize there are real advantages, and that you don't have to bend over backwards to accommodate it. It will be faster, and it will let you sleep at night knowing you can leave your resources running; the internet is full of horror stories, especially on Reddit, of people who were charged a lot of money for resources they forgot about.

As for Terraform, we want to meet developers where they're at, and that's why we have a lot of integrations. You can see Testcontainers there as well; they have been very popular at this conference. With the language SDKs, I feel this graphic doesn't do it justice, because there are language SDKs for Go, Rust, Unity, PHP, JavaScript; anywhere you have SDKs, you can use LocalStack. We have wrappers for the AWS CLI, for Terraform, for any major infrastructure provisioner.

If you're still wondering why you would switch to LocalStack, let me reiterate the reasons. Most applications are way more complex than what I just showed you, and you saw it: it failed even with the, I don't know, six or seven services I have. Creating resources is time-consuming (speaking of which, OK, we're still on time). Development cycles are slow and cost a lot of developer time, which you could spend doing something else. It's tedious to clean up; I don't know why we destroyed only 17 resources after the first run, but I will figure that out on my own. And of course, the last thing we generally think of is that resources are costly, but that's not our main concern: we want to enhance the developer experience, so that you actually enjoy working on your AWS applications.

I did some benchmarking in my hotel room. Like I said, at home I think AWS was under one minute, but in this case you can see there's a clear distinction. You might say that's not a lot, but this was just a handful of services. We actually ran a test in January (oops, this is probably a little too small, so I'll describe it): we ran Managed Streaming for Kafka for local development. There was a large Terraform file, and it took 28 minutes to create the resources on AWS, and then only 24 seconds to do it on LocalStack. So there's a clear difference there, and I assume you don't want to wait almost half an hour only to realize there's a mistake in there and you need to rename or change something.

You've probably seen AtomicJar with Testcontainers taking over the conference this year; it feels like they hijacked the theme, but I would say we were very good at infiltrating ourselves, so hopefully you will remember LocalStack. If you would like to know more about parity, I think there are a few minutes left, so very quickly: the engineers at LocalStack came up with something called the AWS Server Framework, which runs weekly to check for changes in the AWS APIs. It makes heavy use of the botocore Python library, which is what the AWS CLI (you know, the command-line tool for communicating with AWS) is built on. With every run, server-side stubs are generated, which automatically produces a pull request that is then manually approved. The thing is, these endpoints will be generated even if the service isn't implemented: if you try to access them, you will get a "service not implemented" error, but they will be there, so parity will be very good. On top of that, we use something called snapshot testing: we run a set of tests against AWS, collect the responses in JSON format, and do some cleaning up, replacing certain generated values and account numbers with placeholders. Then we run the same tests against LocalStack, do the same cleaning up, and compare the two results, which need to match at all times.
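To picture that cleaning-up step, here is a toy normalization pass of my own, not LocalStack's actual implementation: generated values such as account IDs and UUIDs are swapped for placeholders before the two snapshots are compared:

    // Replace environment-specific values with placeholders so that a
    // response captured on AWS can be diffed against one from LocalStack.
    public final class SnapshotNormalizer {

        static String normalize(String json) {
            return json
                    // 12-digit AWS account IDs
                    .replaceAll("\\b\\d{12}\\b", "<account-id>")
                    // generated UUIDs (request IDs, resource suffixes, ...)
                    .replaceAll(
                        "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}",
                        "<uuid>");
        }

        public static void main(String[] args) {
            String aws   = "{\"TopicArn\":\"arn:aws:sns:eu-central-1:123456789012:updates\"}";
            String local = "{\"TopicArn\":\"arn:aws:sns:eu-central-1:000000000000:updates\"}";
            // After normalization the two snapshots should be identical.
            System.out.println(normalize(aws).equals(normalize(local))); // true
        }
    }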
On top of that, there's also a lot of metric collection to track test coverage: if you run the same tests over and over again, you obviously won't have good coverage, so you need to enhance them constantly, and that's what we're doing.

So, that was my experience. This meme was very popular when I created this presentation a few months ago, and I hope I've convinced you to try it out. This was me, signing off. Thank you for coming, I know it's late. Thank you. [Applause]
Info
Channel: Spring I/O
Views: 2,725
Id: gwYzS5DRDcI
Length: 47min 35sec (2855 seconds)
Published: Wed Jun 14 2023