Spring Boot on Amazon Web Services with Spring Cloud AWS

Captions
Hi there, welcome back. I hope you had a nice break and got some coffee or a snack if you needed it. Up next we have "Spring Boot on Amazon Web Services with Spring Cloud AWS", a really cool community project. We're ready for you, take it away, Maciej and Matej.

Hello everyone, and thank you for coming to our talk. We will do our best to give you some hints, in a beginner-friendly way, on how to use Spring Boot on Amazon Web Services. After a whole day of learning Spring, I believe you have a good feeling for what Spring is, how to use it, and what the programming model is. "What is Spring?" is actually an interesting question, not really as easy to answer as it may seem, because Spring grew from a relatively small and focused framework into a large project and a whole ecosystem. Considering the huge number of integrations it provides, you can definitely look at Spring as an integration framework, because it lets you glue different technologies together and build an application on top of them. The idea behind providing these integrations is that integrations are hard: they require expertise and a lot of boilerplate to write, and this of course leaves room for mistakes. Spring provides a consistent programming model for using all of these technologies, and this means that whichever integration you pick up, you will find similar patterns and similar annotations, and it will be relatively easy to start working with it.

The technology and integrations we are focusing on today is AWS, the largest public cloud on earth. AWS offers a long list of services you can integrate with. Some of them are open source software, like databases or messaging queues, where AWS takes care of provisioning, backups, and security patches, so basically all or most of the maintenance. The other services are proprietary AWS services that are available only on AWS, and they are of course also not open source. Ideally you would like to consume these services in a familiar way, similar to how other integrations with
Spring work. And that's exactly what Spring Cloud AWS is all about. In this talk we will show you how you can use Spring Cloud AWS in common scenarios that you will very likely implement, but also what to do when you find out that Spring Cloud AWS is not enough.

My name is Maciej Walkowiak. I'm an independent consultant, a freelancer with a passion for Java, Spring, Kotlin, and AWS. For quite some time I've been working with Sentry by day and leading the Spring Cloud AWS project by night. Sentry is a great error-reporting platform with a very nice Spring Boot integration. It has nothing really to do with the talk, but in case you haven't seen it, I recommend checking it out. In my spare time I run the YouTube channel Spring Academy. I wish I had more of this spare time to post more videos, but quite many people found the channel and videos useful and subscribed to it, so maybe you will find it useful too.

Hello, my name is Matej Nedić and I'm a software engineer in a small Croatian company called Ingemark. I'm a core team member on the Spring Cloud AWS project, and I have contributed to projects such as SDKMAN and Spring Boot. For some time now I have been heavily invested in Kubernetes, and I like to say that I'm a YAML ninja. My other areas of expertise are AWS and Spring.

So let's talk about Spring Cloud AWS. Before we get into specific use cases, I think it's important to share a little bit of context on when and how Spring Cloud AWS was created, because in the past year it has become a little bit louder around the project, but it isn't actually a new project in the Spring Cloud umbrella. The first commit happened in February 2011, over 10 years ago, and this means that the project predates Spring Cloud itself and even Spring Boot. It has been developed primarily by the two gentlemen you see in the photos, Agim and Alain, under the name Elasticspring, and I must say this is a very clever name, especially considering how
difficult it is for developers to pick really any names. It's important to mention that Spring Cloud AWS is one of the so-called community modules, and this means that the project is not maintained by the Spring team working at VMware, but by external contributors. Over the years the original project creators moved on to new challenges, which is totally understandable, because open source takes a lot of effort and time, and as a result the project temporarily switched to maintenance mode. In April 2020 the Spring team made the decision to remove Spring Cloud AWS and Spring Cloud GCP from the release train, and also from the Spring Cloud GitHub organization, which I believe was a good decision, because it wasn't very easy to distinguish which projects are maintained by the core Spring team and which are maintained by external contributors. As a result, Spring Cloud AWS was handed over to us. When I say us, I mean me, Matej, who is speaking today with me, and Eddú Meléndez. Eddú is a star in the Spring ecosystem; it's really hard to find a Spring project, or even a popular Java project on GitHub, where he didn't make a contribution. Since last year we have moved to a new organization, awspring, where all the current and future development of Spring Cloud AWS is happening. The old repository under the Spring Cloud organization is still there, but there we only contribute very important bugfixes or security patches for the Spring Cloud AWS 2.2 version, which is now in maintenance mode. We have closed over 100 issues and PRs and released a new version that is compatible with Spring Boot 2.4 and 2.5. Our main goal was to improve the developer experience of using Spring Cloud AWS, so to make using Spring Cloud AWS similar to using Spring Boot or other Spring Cloud projects, but also to speed up the startup time, because before, when you added Spring Cloud AWS to a Spring Boot application, the startup time under the default configuration was extended by even a few seconds. And also
we wanted, of course, to fix as many bugs as we possibly could. Thanks to Eddú we've added a basic integration with AWS Cognito. And while we spent a significant amount of our spare time on it, making so many changes was possible only thanks to the community and the Spring Cloud lead, Spencer Gibb, who helped us a lot with moving to the new GitHub organization.

So let's see now how Spring Cloud AWS can help you develop Spring Boot applications on AWS, starting from the most common thing a developer does: connecting to and using a relational database. When you are on AWS, there are two ways you can set up a database server: one is by installing it on one or more virtual machines (EC2 instances), or you can use a dedicated managed service from Amazon, RDS, which stands for Relational Database Service. I imagine some of you can already guess which option we recommend. There are a number of things you need to think about when you are making this choice. The database, and the operating system on which it's running, have to be kept up to date; you have to configure backups and all the security measures; and you also need to think about scaling and high availability, meaning what happens if the machine where the database server is running goes down, or what happens when there is a problem with the network. On EC2 you need to take care of all of these things by yourself, and this requires a lot of expertise and time. On RDS, on the other hand, everything is automated and taken care of for you, and, as you can expect, this is not free: it does cost more than if you would just run the database on an EC2 instance, but very likely it doesn't cost more than if you would have to do all this maintenance by yourself. On RDS you can create, with just a few clicks, one of the most popular database servers, including open source options like MySQL, Postgres, or MariaDB, and commercial Oracle and Microsoft SQL Server. The one database that you will not find anywhere else is Amazon Aurora. Aurora is a MySQL- or Postgres-compatible
database, and you can choose which compatibility mode you want when you are creating the database, but, as Amazon claims, it's up to 10 times faster than the original equivalent. And since it is compatible with MySQL and Postgres, you can use the same tools, the same JDBC drivers, and the same SQL language constructs with Aurora as you would with normal MySQL or Postgres. There's also one interesting thing about Aurora: it can work in a serverless mode, where you just pay for what you use, and that's an option that is definitely worth considering.

So let's get back to our scenario, but this time with RDS. Each RDS database can have one or more read replicas. In the background, RDS asynchronously replicates all the changes that happen on the primary database to the read replica. The read replica can then be used by, let's say, a business intelligence team, a data warehouse, or any other client that only needs to read the data and can also accept that it may not be 100% up to date with the primary node. This happens without affecting the performance of the primary node, and as a result it also doesn't affect the performance of the application. Read replicas can also be used in a failover scenario: with the primary node down, you can promote a read replica to be the primary database. The use case that is most interesting for us today is how you can use read replicas to scale the database beyond the capacity of a single node. Since we have a continuously replicated database, why not use it for reading data? This way, all the operations that use a connection to the primary node will be write operations, and all the read operations can go to a read replica. In fact, there can be more than just one read replica; there can be many of them, and with Aurora you can even configure auto-scaling, so that read replicas are spun up only during higher load and are shut down after the load goes away. As you can imagine, this is not a very easy thing to implement on the
application side. There are many things that you have to think about, like creating data sources, creating connection pools, deciding which data source to use, and then making it somehow work with Spring Boot. So if that's your use case, you should of course use Spring Cloud AWS, because it lets you do it in a simple and declarative way. Without Spring Cloud AWS, using the traditional Spring Boot configuration for JDBC, this is how you would normally connect to RDS, and you would connect here only to the primary node, without the application even being aware of a read replica. Once you add the Spring Cloud AWS JDBC starter, you can use the following configuration instead, and there are a few things worth noticing. First of all, what is this dash in the middle of the screen? That's a legacy from the pre-Spring Boot times: Spring Cloud AWS from the beginning supported more than one database, which is indeed a nice option in some cases, but since Spring Boot's auto-configuration in many cases relies on there being a single data source, you need to be aware that once you configure multiple databases here, things may not work as they usually do. Even if you specify a single database, the dash has to stay. Another thing that we don't specify here is the JDBC URL, because instead we provide a database instance identifier: the database server name that we give to AWS when we create it. The URL is then retrieved on application startup, and you might be wondering why; that's because there is more than one URL when you use read replicas. The one line at the bottom, read-replica-support: true, is all you need to start using read replicas for read operations. On application startup, Spring Cloud AWS fetches the URLs for all the read replicas and creates data sources for them. So now you may be wondering how it actually works: does it mean that all SQL queries are sent to a read replica and all write operations to the primary node? Not exactly.
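The configuration described above might look roughly like the following sketch. The property names follow the Spring Cloud AWS 2.x style with the list dash mentioned in the talk, and the identifier and credentials are placeholders; check the documentation of the exact version you use.

```yaml
cloud:
  aws:
    rds:
      instances:
        # the dash: the configuration is a list, because more than one
        # database has always been supported
        - db-instance-identifier: spring-one-db   # name given to AWS, not a JDBC URL
          username: admin
          password: example-password
          # routes read-only transactions to read replicas
          read-replica-support: true
```

On startup, Spring Cloud AWS resolves the JDBC URLs of the primary node and all replicas from this identifier.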
All transactions that contain only read operations are sent to a read replica, and all transactions that contain at least a single write operation are sent to the primary node. This behavior can be controlled using the @Transactional annotation and its readOnly flag: if readOnly is set to true, a connection to one of the read replicas will be used, and if it's false, or not specified, the transaction will go to the primary node. Under the hood, Spring Cloud AWS creates connection pools and data sources for the primary node and each of the replicas, and it also creates one more data source bean, the primary bean, a routing data source, which delegates to the chosen database. You can find more about this pattern in Vlad Mihalcea's article linked on the slide. Last but not least, I think it's worth mentioning that if you don't use replicas, it is perfectly fine not to use Spring Cloud AWS to connect to RDS. You should use Spring Cloud AWS, or in fact any other dependency, only if you really need to. And before you decide to use the Spring Cloud AWS JDBC integration, you should know two things, because they can be deal breakers. Currently Spring Cloud AWS supports only the Tomcat connection pool; that's a legacy from the pre-Spring Boot times, and it means that you cannot use, let's say, Hikari. Also, we don't provide support for Aurora, and the reason is that the RDS support was developed primarily before Aurora was even available in AWS, and Aurora uses different RDS APIs, which means we have to do quite a lot of work to get it working in Spring Cloud AWS. The good thing is that we plan to add support for Aurora anyway, because we believe it is a very important feature, in 3.0, the next big version that we plan to release. We also plan to add IAM authentication, which means passwordless authentication, so that you will not even have to specify the password, and also support for RDS Proxy, a very handy tool from AWS for supporting failover scenarios.
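Going back to the read-replica routing described a moment ago, here is a minimal sketch of how the readOnly flag steers the routing data source. The class and method names are illustrative, not from the talk:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Illustrative service: the readOnly flag decides which data source the
// Spring Cloud AWS routing data source delegates to.
@Service
public class TicketService {

    // read-only transaction: routed to one of the read replicas
    @Transactional(readOnly = true)
    public long countTickets() {
        // ... query via JdbcTemplate or a repository ...
        return 0;
    }

    // read-write transaction (the default): routed to the primary node
    @Transactional
    public void createTicket(String name) {
        // ... insert via JdbcTemplate or a repository ...
    }
}
```

Note that the routing is decided per transaction, not per statement: a single write inside a transaction sends the whole transaction to the primary.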
Maybe we will also add Secrets Manager JDBC authentication, but only maybe, because it's not yet clear whether the Secrets Manager JDBC integration project is still maintained or already deprecated.

Let's now go to another use case. Many, if not most, of the applications we develop nowadays either follow a microservice architecture or are at least some sort of distributed system. In a distributed system, components need to communicate with each other, and one common way to do it is to use messaging. If you have never worked with messaging, let me quickly explain what it means in practice. The simplest and very common way to implement communication between two services is to use regular HTTP calls: when service A needs service B to do some work, or even just to notify it, it sends an HTTP request. This brings a couple of issues: the services are tightly coupled; both service A and service B need to be available at the same time; service A can easily overload service B with requests; and when service B becomes overloaded, it propagates back to service A, which becomes slow because it's waiting for responses. This then brings the need for circuit breakers, which we will not discuss today, but essentially it complicates our life. So the question is, what is the alternative? There are a couple of them, and one of them is to put a queue between the services: instead of making an HTTP call that waits for a response, just send a message to a queue and let service B consume the message whenever it's online and has the capacity for it. This of course does not necessarily make our infrastructure simpler; often it's quite the opposite, because we now need a messaging broker that the services can talk to. There are a number of messaging brokers available as managed services on AWS, from open source ActiveMQ, which is compatible with JMS; since November last year there is also RabbitMQ;
there is Kafka; and there are proprietary solutions like Kinesis, or the last two, which are the most interesting for us: SQS and SNS. SQS stands for Simple Queue Service and SNS for Simple Notification Service. Both can be used independently, but when they are used together, they offer the functionality you would need for a complete messaging solution. The great benefit of using both of these services is that they are incredibly cheap and easy to maintain, because there are no servers to manage, no instance types to choose from, and no need to monitor CPU or memory; they are so-called serverless. Before using SQS, I think it's important to understand its architecture, because it may be different from what you are familiar with. First of all, it's a poll-based system over HTTP: consumers need to actively make HTTP requests to SQS to retrieve messages. Each queue has a property, visibility timeout, which sets the default visibility timeout for each message sent to the queue. The visibility timeout defines for how long a message becomes invisible to other consumers once it is retrieved by a consumer. In step one we can see that component 1 is sending a message with a visibility timeout of 40 seconds. Then component 2 retrieves this message and the clock starts ticking. If it manages to process the message within this time, it has to issue another HTTP call to SQS to delete the message from the queue. If it doesn't manage, so the processing takes longer than the visibility timeout, the message becomes visible again to other consumers, and another consumer will likely start processing it again. To avoid this when processing takes longer, component 2 should extend the visibility timeout for this particular message. Does it sound easy? I think it doesn't, and it shouldn't, because it's not really easy. There are so many things you have to orchestrate here, and it also has to be done in a resource-efficient way, from both the performance and the cost point of view.
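The receive/delete/extend dance just described can be illustrated with a toy, in-memory model. This is not the SQS API, just plain Java mimicking the visibility-timeout rules; time is advanced manually with tick() instead of a real clock.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Toy in-memory model of SQS visibility-timeout semantics (illustration only).
public class ToyQueue {
    private final List<String> messages = new ArrayList<>();
    private final Map<String, Long> invisibleUntil = new HashMap<>();
    private final long visibilityTimeout;
    private long now = 0;

    public ToyQueue(long visibilityTimeoutSeconds) {
        this.visibilityTimeout = visibilityTimeoutSeconds;
    }

    public void send(String body) {
        messages.add(body);
    }

    // Returns the first visible message and makes it invisible to other
    // consumers until the visibility timeout elapses.
    public Optional<String> receive() {
        for (String m : messages) {
            if (invisibleUntil.getOrDefault(m, Long.MIN_VALUE) <= now) {
                invisibleUntil.put(m, now + visibilityTimeout);
                return Optional.of(m);
            }
        }
        return Optional.empty();
    }

    // A consumer that finished processing must delete the message explicitly,
    // otherwise the message reappears after the timeout.
    public void delete(String body) {
        messages.remove(body);
        invisibleUntil.remove(body);
    }

    // A slow consumer can push its deadline further into the future.
    public void extendVisibility(String body, long seconds) {
        invisibleUntil.merge(body, seconds, Long::sum);
    }

    // Advance the simulated clock.
    public void tick(long seconds) {
        now += seconds;
    }
}
```

With a 40-second timeout, a message received at time 0 is invisible until time 40; if it is not deleted by then, a second receive() hands it out again.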
Spring Cloud AWS does it all in an effective way, and it also gives you a programming model that is familiar to Spring developers. So let me show you how to send and retrieve messages with SQS. To send a message, we just need to inject a QueueMessagingTemplate, which follows the template pattern, similar to Spring's JdbcTemplate or RestTemplate, and call one of the methods for sending messages. The most common one that you will probably use is convertAndSend, because it not only sends a message to a queue but also takes care of the object serialization. And that's quite simple, right? Retrieving messages is actually even simpler, because all the complexity that I mentioned before is done by the framework silently behind the scenes. To retrieve messages from a queue, it's enough to annotate a bean method with the @SqsListener annotation, and the message payloads will be automatically deserialized to objects using Jackson. That's actually configurable: in case you don't use JSON, you can configure different message converters, similar to Spring MVC or any other messaging integration in Spring. You can retrieve header parameters using @Header annotations. By default, messages are acknowledged when the processing of the message finishes with success, meaning that if there was no exception during processing, the message will be automatically deleted from the queue. That's fine for many, maybe even most, use cases, but sometimes we want to delete the message conditionally; this is called acknowledging the message. So if our use case is not that straightforward, we can also add an Acknowledgment parameter to the method and then call the acknowledge method if some condition is met, and this, behind the scenes, deletes the message from the queue. There is a similar approach for extending the message visibility.
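The sending, listening, and manual-acknowledgment code just described might look roughly like this in Spring Cloud AWS 2.x. The package names, queue names, and Ticket type here are assumptions for illustration; check them against the version you actually use.

```java
import io.awspring.cloud.messaging.core.QueueMessagingTemplate;
import io.awspring.cloud.messaging.listener.Acknowledgment;
import io.awspring.cloud.messaging.listener.SqsMessageDeletionPolicy;
import io.awspring.cloud.messaging.listener.annotation.SqsListener;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.stereotype.Component;

@Component
class TicketMessaging {

    private final QueueMessagingTemplate template;

    TicketMessaging(QueueMessagingTemplate template) {
        this.template = template;
    }

    // Sending: convertAndSend serializes the payload (Jackson by default).
    void publish(Ticket ticket) {
        template.convertAndSend("spring-one", ticket);
    }

    // Receiving: the payload is deserialized to Ticket; on a normal return
    // the message is deleted from the queue automatically.
    @SqsListener("spring-one")
    void onTicket(Ticket ticket, @Header("SenderId") String senderId) {
        // process the ticket
    }

    // Conditional acknowledgment: the framework never deletes the message
    // itself; we delete it only when our condition is met.
    @SqsListener(value = "spring-one-manual", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
    void onTicketManual(Ticket ticket, Acknowledgment ack) {
        if (shouldProcess(ticket)) {
            ack.acknowledge(); // deletes the message behind the scenes
        }
    }

    private boolean shouldProcess(Ticket ticket) { return true; }
}

class Ticket { }
```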
In case processing of a message takes longer, we can also inject a Visibility parameter and call the extend method, passing the number of seconds by which the visibility of the message should be extended. Because the Spring Cloud AWS integration is an implementation of Spring Messaging, we can also use nice features like annotating a method with the @SendTo annotation. The code you see right now on the screen will retrieve a message from the spring-one queue, and the result of this method, in this case an instance of Ticket, will be serialized and sent to a tickets queue, and this is all without any extra boilerplate code. While the previous examples showed relatively simple use cases with a single queue and a single message consumer, the more common scenario is that there are multiple message consumers, and often the producer is not even aware of the consumers. This is where SNS is handy. On the AWS side, you can configure SQS queues to subscribe to an SNS topic. In this case, service A needs to publish the message to SNS instead of directly to an SQS queue, and all services that are interested in this message must have a dedicated queue from which they retrieve messages using the SQS integration. The alternative way to use SNS is to configure it to notify HTTP endpoints whenever a message is sent to a topic, and we have an integration for that too. Sending messages with SNS is just as simple as with SQS: instead of QueueMessagingTemplate you just use NotificationMessagingTemplate and call the sendNotification method. Retrieving SNS notifications in this case is just handling HTTP calls, so it can be done with plain Spring MVC; Spring Cloud AWS adds some convenience annotations and parameter resolvers to make it easier to handle endpoint subscription and to parse messages, but if you have worked with Spring MVC, you should really feel at home. What can you expect in Spring Cloud AWS 3.0 in regard to messaging?
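The @SendTo flow and the SNS publishing side might look like this, again as a Spring Cloud AWS 2.x-style sketch; the queue, topic, and type names are illustrative, not from the talk.

```java
import io.awspring.cloud.messaging.core.NotificationMessagingTemplate;
import io.awspring.cloud.messaging.listener.annotation.SqsListener;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.stereotype.Component;

@Component
class TicketPipeline {

    private final NotificationMessagingTemplate sns;

    TicketPipeline(NotificationMessagingTemplate sns) {
        this.sns = sns;
    }

    // Consume from the spring-one queue; the returned Ticket is serialized
    // and sent on to the tickets queue, with no extra boilerplate.
    @SqsListener("spring-one")
    @SendTo("tickets")
    Ticket handle(Order order) {
        return new Ticket(order);
    }

    // Publishing to an SNS topic instead of a queue: every subscribed SQS
    // queue (or HTTP endpoint) receives the notification.
    void broadcast(Ticket ticket) {
        sns.sendNotification("ticket-events", ticket, "ticket-created");
    }
}

class Order { }
class Ticket { Ticket(Order order) { } }
```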
First of all, we want to improve performance. I mentioned that Spring Cloud AWS processes messages in a resource-efficient way, and this is true only to some degree. The problematic part of the implementation is that processing a batch of messages takes as long as the slowest message in it, and the next batch is retrieved only after the complete batch has been processed. This means that if we retrieve 10 messages from SQS, and nine of them take 10 milliseconds to process but the last one takes one second, all nine threads will wait for the tenth thread to finish before Spring Cloud AWS makes a call for another batch, and I think we can all agree that this is suboptimal. Another point on our wish list, nice to have but not really critical, is support for Reactive Streams and also Spring Cloud Stream SQS. In fact, a great part of the work for Spring Cloud Stream SQS has already been done quite a while ago; what we need to do is bring it to the Spring Cloud AWS project and do quite a bit of polishing, so we will see how it goes.

Welcome to storing configuration with Secrets Manager and Parameter Store. Storing configuration is extremely important and can be a huge security risk if not done right; that is why in Spring Cloud AWS we have an integration for fetching your secrets and parameters from security-related AWS services. But before we get to what Parameter Store and Secrets Manager are and how our integration for them works, let's look at a classical architecture you would have for fetching parameters. For example, let's say you have services A, B, and C. These services would call an instance of Spring Cloud Config; Spring Cloud Config would then fetch properties from a file system, a database, Git, Vault, and much more. It can be done periodically, which means Spring Cloud Config will fetch the data periodically and, when a service calls it, it will be able to automatically serve this
data; or it can be done on demand, so when the service calls Spring Cloud Config, it will go and fetch the data for it. After that, your services get their properties and they will be up and running. Not only is it easier to manage configuration this way, because it is centralized, but from a security aspect it is much easier to secure one place. Spring Cloud Config also supports securing the calls themselves, but I won't talk about that in greater detail; it is just something you should know. With this architecture you can focus on your business logic, and you don't have to worry about storing and fetching properties and secrets, which is great. For more info, please check the link below, which is the documentation of Spring Cloud Config; it is a fantastic project and I recommend it. But what if your storage for properties goes down? Imagine your client calls you and says: I need the application to call a different URL, because I changed the URL for the API, and you have two hours to adjust to this change. But at that moment, the rack with your file system in your data center goes down, or, for example, your EC2-hosted GitLab in your availability zone goes down. What then? Well, you have to wait for your backup to be restored. This can take time, you can't do anything, your client will be mad, customers will be disappointed; it is bad for your business and it is costing you money. Let's see how AWS has your back. AWS Parameter Store is used to securely store and manage your parameters externally. This is a fully managed service, which means no server provisioning is required; you don't have to worry about scaling, updates, and so on. Not only that, but you don't have to worry about it going down, since it's highly available. If you don't know what highly available means, it means that Parameter Store is spread across multiple Availability Zones. If you don't know what an AZ is, it is a data center inside AWS; each AWS region holds multiple data centers, and if one goes down, the others
will stay up, because they are spread across different locations. Parameter Store is a perfect way for you to centralize your parameters and to separate your code from your configuration. Here we have service A and service B calling Parameter Store and fetching data from it. Using the Spring Cloud AWS Parameter Store integration removes the need for you to write code; you would otherwise have to write a lot of code to get these parameters. One thing to notice here as well is that you have one less point of failure, since your services are calling Parameter Store directly; this means you don't have a Spring Cloud Config server, which may fail (though it probably won't). As an example of how the integration works, I will first show you how this looks in AWS. This is the AWS console for Parameter Store, and as you can see we have one parameter here, which can be found under /config/spring/message. You can think of it as a file system: /config and /spring act like folders or directories, and message is your data, your file. Why am I talking about this? Because it will make it easier for you to understand the integration. Let's check how the integration works. Making calls to Parameter Store is extremely easy. First, in your application properties or YAML you put spring.config.import; after that, you have to use the aws-parameterstore prefix. Now comes the most important part: the Parameter Store integration loads parameters by path, which means it will load all parameters under the path, /config/spring/ in our example. We have our message there, and if you had something like a country code, a URL, or, let's say, a refresh rate, they would be loaded as well. But if you don't have any parameters on this path, loading will fail at runtime. This is why we have the optional prefix: you just put it before the aws-parameterstore prefix, and then parameters, whether found or not, will not fail your application.
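The import just described, sketched in YAML: the first path follows the /config/spring/message example from the console, and the second, optional path is a hypothetical one added for illustration.

```yaml
spring:
  config:
    import:
      # required: startup fails if no parameters exist under this path
      - aws-parameterstore:/config/spring/
      # optional: missing parameters will not fail the application
      - optional:aws-parameterstore:/config/other/
```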
To best showcase how you can use optional and required parameters, I am showing you the YAML code below; as you can see, it has both optional and required parameters. What is great about the Spring Cloud AWS Parameter Store integration is how parameters are brought into your application: you simply inject them with @Value and that is it. After that you can use the value to create your custom beans or some custom logic; basically, it can be used for anything you like. You can also use Parameter Store by referencing it in your application properties: here we have the refresh rate for the Spring Cloud Config server. Do you remember how I was talking about fetching data? This is how you can set the refresh policy for Git. OK, we have Parameter Store, but where do we store secrets? Well, there's no better place than AWS Secrets Manager. You can store your database URLs, passwords, usernames, and other secrets. Same as Parameter Store, this is a fully managed service, which means no updates, no provisioning servers; you don't have to worry about scalability when calls increase, and it is highly available. But before we check how Secrets Manager works, let's see an example architecture. Here we have service A calling Secrets Manager and using the retrieved credentials to work with RDS; this way your RDS credentials are safe and sound inside Secrets Manager. One more thing I want to show you is how to combine Parameter Store and Secrets Manager, because I believe this is the best of both worlds. Imagine you have an API that you need to call, and you want to create a WebClient bean for it. The URL can live in Parameter Store, because it is something public, everyone can know it, so store it in Parameter Store; it's also cheaper. For the credentials you want to use Secrets Manager, because no one but you should know them; that's why you keep them in Secrets Manager, where they are encrypted. Now let's see how the Secrets Manager integration works. Loading secrets is similar to the
Parameter Store integration, but it has its differences. Since in Secrets Manager you store your secrets under a key, and they are JSONs, you cannot load them by path; you have to use a key. In our case, /secret/db/pro/url is our key, and it will give us the URL for, let's say, our RDS, our JDBC connection. So remember: Parameter Store loads by path and Secrets Manager loads by key. Usage is the same: you can use it with @Value, or you can reference it in your application properties. Let's now talk about the differences between Parameter Store and Secrets Manager. Parameter Store supports types such as String, StringList, and SecureString. StringList is just strings separated by commas; String I don't have to explain, you probably know that; and SecureString is something that was used before, because Parameter Store was introduced before Secrets Manager, so you had to store your secrets inside Parameter Store, and that's why SecureString is still here. Parameter Store can hold up to four kilobytes of data per parameter, cannot be referenced across accounts, and you cannot rotate your secrets with it, but it is cheaper: you pay only for API calls, you don't pay for storage. On the other hand, Secrets Manager can be referenced across accounts, it has automatic secret rotation, and it is more expensive, because you pay for storage as well; it can store more characters, up to 10 kilobytes; it has a built-in password generator; and, something really good, it supports cross-region secret replication, which is something you might want. Now let's look at what is coming. We are looking to add secret-rotation support, but first let me explain what secret rotation is. AWS can rotate your secrets for you, which means it can replace the credentials inside of, for example, RDS for you, but this change needs to be propagated to your application, since the application won't be able to access RDS with the old credentials. This is a really good security feature.
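The key-based import for Secrets Manager can be sketched like this; the key is the /secret/db/pro/url example from the talk, and the second entry shows how the two services can be combined, as described above.

```yaml
spring:
  config:
    import:
      # Secrets Manager loads by key, not by path; the JSON fields of this
      # secret become individual properties
      - aws-secretsmanager:/secret/db/pro/url
      # Parameter Store (by path) can be combined with Secrets Manager (by key)
      - aws-parameterstore:/config/spring/
```

Injection then works the same way as with Parameter Store: @Value, or references from application properties.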
which we would be more than happy to support, and we are planning to add somewhere in 3.6. Not only that: currently, if you go to the Parameter Store console and change a parameter (for example, we have a message parameter there and you change it), the change is not propagated to your application either, which means that when a new version comes you cannot have it refreshed. We want to support both secret rotation and parameter or secret changes: we want to refresh your beans for you, so there is no downtime and your application context gets refreshed.

There are a few other supported services in Spring Cloud AWS that we will not have time to discuss today. There is support for S3 for storing and loading files; fetching EC2 metadata, in case you are running your application on an EC2 instance; we also support sending Micrometer metrics to CloudWatch; there is a Spring Cache implementation for ElastiCache with Redis and Memcached; sending emails with SES; and also resolving resource ids from a CloudFormation stack. But this of course does not cover even half of the list of services, so the question is: what should you do if Spring Cloud AWS does not provide the integration you need? First, you may want to look at these Spring projects. In case you are using Kinesis, probably the best way to use Kinesis from a Spring application is the Spring Cloud Stream binder for Kinesis. In case you are building integration middleware on AWS, very likely you will find Spring Integration for AWS handy. If you use Spring Cloud Config Server, you may know that it supports S3 buckets, so you can store your properties in S3, and there is also support for Secrets Manager coming. There is also the Spring Data DynamoDB project, though there are many forks of it, and before you start using it you have to check which fork is the one that is currently maintained, so pay attention to that. And of course, you can always use just the AWS SDK. There
are two SDK versions available, v1 and v2. v1 is the one used by Spring Cloud AWS under the hood and, in fact, by most third-party projects older than two or three years. v2 is relatively new and offers some interesting features. First of all, it is future proof: it is compatible with GraalVM native image, and it supports non-blocking IO, which means you can very easily use it in a Spring reactive stack with Spring WebFlux. Because it uses different namespaces, it does not conflict with SDK v1, which means you can use both SDKs in a single application, for example the Spring Cloud AWS integration for SQS and SDK v2 just for DynamoDB. The one thing you need to remember, in case you are migrating from SDK v1 to v2, is that there are still a couple of features missing in v2, like the cost-effective buffering SQS client that we use in Spring Cloud AWS.

And last but not least, the question that hopefully everyone who uses AWS asks is: how can we test it? What is the testing strategy when you use AWS services? Unfortunately, AWS does not provide equivalent services that you could run on your localhost for testing purposes, except one, for DynamoDB. So you have three options. One is to just not write integration tests and only write unit tests, but of course this is suboptimal. The other option is to use the real AWS services for running integration tests; this option is not bad, but it is an expensive one, because it means that before you run your tests you have to spin up the whole infrastructure, and when the tests are over you have to destroy all of it, so not only is it expensive, it also takes time. Option number three is to use LocalStack, a third-party AWS simulator made exactly for this purpose, and LocalStack is the option we will be focusing on today. LocalStack is distributed as a Docker image, so you can write a docker-compose file where you
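The mixed v1/v2 setup mentioned above could look like this in a Maven build (a sketch; versions are omitted, the 2.x messaging starter pulls in SDK v1 transitively, and the v2 DynamoDB client lives under a separate group id, so the two do not conflict):

```xml
<!-- Spring Cloud AWS 2.x SQS support (uses AWS SDK v1 under the hood) -->
<dependency>
    <groupId>io.awspring.cloud</groupId>
    <artifactId>spring-cloud-starter-aws-messaging</artifactId>
</dependency>

<!-- AWS SDK v2 DynamoDB client; different namespace, so no clash with v1 -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>dynamodb</artifactId>
</dependency>
```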
can define which services from AWS you need. In this case we define that we are using LocalStack and that we need SQS and S3, we expose port 4566, and that's pretty much it. Once you run docker compose up, you will see in the console that you have these services running on your localhost, and you can use them even with the AWS CLI: you can run regular commands, you just have to specify the endpoint URL at the end, so that the CLI doesn't hit the real AWS but instead hits LocalStack. Of course, if you create a queue on LocalStack, for example, the response will give you a queue URL pointing to a queue in LocalStack. The way you then use it in a Spring Boot application is by specifying the endpoint property. We added this endpoint property to all integrations in Spring Cloud AWS 2.3 exactly for the purpose of making it easier to test Spring Boot applications, so you can just point it at localhost:4566. And it is no coincidence that the ports for both of these integrations are the same: since recently, I think, LocalStack exposes all the services on the same port, and you are good to go.

This option is really nice especially when you don't want to run an integration test but want to run the application locally and play with it, experimenting with the API. But if you want to write an integration test, there is the necessity of that one extra step beforehand, calling docker compose up, and what we would ideally like is that you just run the test and everything that the test needs is created and provisioned while, or before, the test is running. Since LocalStack is distributed as a Docker image, we can use Testcontainers for that. Testcontainers has a dedicated module for LocalStack that offers a LocalStackContainer, and we can use it in an even more friendly way than with docker compose, because it provides a nice API: we just create a LocalStackContainer and then, with withServices, list the services
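The docker-compose setup described here can be sketched roughly like this; the image tag and compose file version are assumptions, while the services and the port are the ones from the talk:

```yaml
version: "3.8"
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"        # single edge port for all emulated services
    environment:
      - SERVICES=sqs,s3    # only start the services the application needs
```

After `docker compose up`, the regular CLI works against it, e.g. `aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name my-queue`, and a Spring Boot application can be pointed at the same endpoint via the per-service endpoint property.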
that we need. Then we use @DynamicPropertySource from Spring Framework: instead of putting these properties into application properties, we can set them right here, so we set the cloud.aws.sqs.endpoint property and retrieve from LocalStack what the endpoint for SQS is.

LocalStack is distributed in two versions, one free and one pro, and of course it also doesn't cover all the services available on AWS. There is a list of all the services available in the free version; it contains the services you will probably use most often, but if you need something different, for example Cognito, a very popular service that there is a big chance you will need, then you have to buy the pro version, and there is a list which, I think, is the complete list from the pro edition. If you are working with Spring Boot on AWS, definitely go to localstack.cloud to learn more about LocalStack.

So what is the future of Spring Cloud AWS? We are working on version 3.0. The main theme of this release is to migrate completely to AWS SDK v2, and since the SDK is at the very bottom of the whole stack, this means we have to pretty much rewrite the whole project. It is a huge effort, and it goes much slower than we planned and than the community expected. In addition to that, we plan to add support for DynamoDB and Cloud Map; for the Cloud Map integration there is actually already a PR, so maybe we will release it in 2.4. We also plan to drop support for some services, because we don't think they bring much value. An example is ElastiCache, and that's because Spring Boot's support for Redis is already more than enough, and the Spring Cloud AWS support for ElastiCache, implemented before Spring Boot existed, does not really add much on top of it. We are also thinking about dropping CloudFormation support, because, whatever the exact percentage really is, many projects use Terraform
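Going back to the Testcontainers approach just described, a minimal sketch could look like the following; the test class name and the exact endpoint property are assumptions for a 2.x-era application, and the Testcontainers and Spring Boot test dependencies need to be on the classpath:

```java
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import static org.testcontainers.containers.localstack.LocalStackContainer.Service.SQS;

@SpringBootTest
@Testcontainers
class QueueListenerIntegrationTest {

    // Start LocalStack with only the services this test needs
    @Container
    static LocalStackContainer localStack =
            new LocalStackContainer(DockerImageName.parse("localstack/localstack"))
                    .withServices(SQS);

    // Point Spring Cloud AWS at the container instead of real AWS
    @DynamicPropertySource
    static void awsProperties(DynamicPropertyRegistry registry) {
        registry.add("cloud.aws.sqs.endpoint",
                () -> localStack.getEndpointOverride(SQS).toString());
    }

    @Test
    void contextLoads() {
        // real tests would send and receive messages against the LocalStack queue
    }
}
```

The container is started before the context is refreshed, so the endpoint supplier resolves to whatever host port Docker mapped for the LocalStack edge port.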
instead of CloudFormation, or some other way to provision services, and CloudFormation support unfortunately costs us time. But we will not drop anything without consulting the community. We want to continue improving startup times and performance in general, and since AWS SDK v2 is GraalVM native image compatible, we would like to make Spring Cloud AWS native image compatible too.

It would be a shame if, after Matej and I talked about all these services, we didn't mention the Spring Cloud AWS samples. In the samples you can check how to use these integrations and have them up and running in a couple of minutes, so let's check where you can find them. The samples can be found in the spring-cloud-aws-samples module of our main GitHub project. Currently there are samples for four integrations: SQS, SNS, Parameter Store and Secrets Manager, and as you can see a few are missing, like S3, RDS and Simple Email Service; we plan to add them in coming releases. Once you are finished with the samples, or if you want to learn more about Spring Cloud AWS, go to awspring.io and take a look at our reference documentation. And if you want a really deep dive into the Spring and AWS ecosystem, I recommend checking out the Stratospheric book, where Björn, Philip and Tom explain in depth how to develop and deploy Spring Boot applications on AWS, including authentication with Cognito and provisioning infrastructure with AWS CDK. Thank you for staying with us. I hope you learned something new and found this talk useful. If you have any questions, please join us in a minute in the questions room.

[Host] Thank you for watching that presentation, and thank you to our wonderful presenters, Matej and Maciej. Check out the Spring Academy channel on YouTube; once he said that, I remembered I've watched a couple of videos and gotten some help from him, so make sure to check out his channel, it will be helpful to you. Thank you all for
hanging out with us these last two days, learning a lot about Spring. You're welcome to still use the self-paced labs; they're probably still up. We thank all the attendees and all the presenters, everybody who worked so hard on providing SpringOne, and the Spring community that just continues to grow and thrive. Thank you all so much, and we appreciate your attention and love. Thank you so much, take care.
Info
Channel: SpringDeveloper
Views: 1,915
Keywords: Core Framework
Id: cgMjPBmBkyE
Length: 51min 24sec (3084 seconds)
Published: Wed Sep 22 2021