Spring Cloud Function: Where We Were, Where We Are, and Where We’re Going

Captions
All right, welcome back everyone. We have another great session here on the architecture track with two Marks, one from Solace and one from AWS, and my colleague and good friend Oleg, and they're going to be telling us all about Spring Cloud Function, where we are currently, and giving us a sneak peek into the future as well. So I'll turn it over to them.

Hello everybody, my name is Oleg Zhurakousky and I am project lead for Spring Cloud Function and Spring Cloud Stream. Quickly about myself: I've been with the Spring organization since 2008 and have done a lot of different things, primarily as one of the core engineers on the Spring Integration project, but as I just said, currently I'm the lead engineer for Spring Cloud Function and Spring Cloud Stream. The other presenters, Mark Sailes and Marc DiPasquale, will introduce themselves in their slots later on during this presentation. We have a lot of content to go through, so I'll begin right away in order not to waste any time.

Here's a quick agenda. As you can see, there are three parts. I'll go first: I'll quickly introduce Spring Cloud Function and the whole paradigm of functional development, then segue into the AWS Lambda integration and specifically talk about the routing function. You may be surprised why, out of many features, I picked the routing function to talk about, but there is actually a connection with AWS Lambda, and I'll talk about that as well. Then I will hand it off to Mark Sailes from the AWS Lambda team, who will talk to you about a lot of interesting things related to Spring Cloud Function and AWS Lambda. And then the closing band will be brought to you by our friends from Solace, with Marc DiPasquale on lead vocals, who will talk about some very interesting streaming use cases using Spring Cloud Function on the Solace platform with another framework that I mentioned earlier, Spring Cloud Stream.
So, Java functions. Let's talk about the scope quickly. What are Java functions? They're the three core strategies that we got from Java 8: Supplier, Function, and Consumer. When I talk about them, I usually like to explain these four aspects: simplicity, portability, extensibility, and consistency. Simplicity comes through clarity of expression: no matter what we do, if you think about it, we either consume, produce, or do both, and these three interfaces, these three strategies, allow you to do exactly that. Portability: there could be many different implementations of functions, but at the end of the day it is still either a Function, a Supplier, or a Consumer, so any system that supports Java and Java functions will be able to port and interact with your implementations. Extensibility: as I already said, any interface can be extended and you can have custom implementations, but at the end of the day it is still a Function, a Supplier, a Consumer. And that brings the final argument, consistency, which is agreement with what has already been established: very simple, well understood, widely supported strategies that don't come from Spring — they come from the platform itself, from Java, and we should all be familiar with them by now.

The other interesting thing I talk about is the two core tenets when it comes to Java functions: the contract and the pattern. The contract defines a very strict set of rules, and the pattern greatly simplifies communication when it comes to interpreting business requirements, because you simply have to align a particular requirement with either Function, Supplier, or Consumer. For example: "receive and process debit and report new balance." This requirement, although trivial, is clearly a Function, because it talks about some input and some output.
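The three strategies Oleg describes are plain JDK types, so the "consume, produce, or do both" idea can be sketched with nothing but `java.util.function` (the string values here are made up for illustration):

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

public class CoreStrategies {
    public static void main(String[] args) {
        // Supplier: produces data, takes no input
        Supplier<String> source = () -> "hello";

        // Function: consumes input and produces output
        Function<String, String> uppercase = s -> s.toUpperCase();

        // Consumer: consumes input, produces nothing
        Consumer<String> sink = s -> System.out.println("received: " + s);

        // Wire the three together: produce, transform, consume
        sink.accept(uppercase.apply(source.get())); // prints "received: HELLO"
    }
}
```

Any of the three can be registered as a bean and Spring Cloud Function will take care of binding it to an execution context.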
But also notice how easy it is to get sidetracked by words like "receive" and "send." Are we receiving via HTTP, for example, or messaging, or Kafka, or something like that? What about reporting — are we going to be sending something somewhere? We get sidetracked by these parts of the requirement, but the reality is that those are non-functional requirements, boilerplate things, and that's where Spring Cloud Function and other frameworks come in. But think about the function itself: would the debit processing be different if the input comes from HTTP, or from Kafka, or from somewhere else? Obviously it's a rhetorical question, and since I can't interact with a live audience at the moment, I'll answer it myself: it's a resounding no. The implementation should be absolutely the same; what should change is how you trigger, how you activate the function. And this is where we're getting into the realm of Spring Cloud Function. For example, as you can see from this slide, we can have many different execution contexts for the same function: it could be executed as part of AWS Lambda, become an HTTP endpoint, or act as a message handler, and many more. These three are just to keep the slide pretty; there are gazillions of different ways to create an execution context for a particular function.

And this is where we get into Spring Cloud Function. What is Spring Cloud Function? It is a Spring Boot based framework, and the Spring Boot base is the important part, because right now everything is kind of Spring Boot, and saying that something is Spring Boot based sets a lot of expectations. For example, it means that there's going to be some auto-configuration.
That auto-configuration allows the framework to make certain assumptions; there are certain opinions that are going to be applied, while you can always introduce your own changes. So it is a Spring Boot based framework to promote implementation of business logic via Java functions. For those of you who are familiar with Spring and Spring MVC: back in the day we talked about how you could create a POJO method, throw a few annotations on it, and all of a sudden this method became an HTTP endpoint, because the framework took care of the rest, while you were able to do POJO testing without ever caring that it could be invoked via HTTP. We embraced a very similar paradigm here.

So Spring Cloud Function, like I said, promotes implementation of business logic via Java functions. It gives a uniform and portable programming model; it integrates with other platforms, including serverless such as AWS Lambda and streaming platforms such as Solace, RabbitMQ, Kafka, and many others; and, as I already mentioned, it gives you consistency and testing capability.

Some of the core features — by the way, as I said, this is going to be very condensed, because we have a lot of content to present. The Spring Cloud Function part will take about 10 minutes; however, if you want to get into more detail, the resources slide displayed at the very end of this presentation has a link to my presentation from last year's SpringOne, where I had a whole hour to talk about Spring Cloud Function, and that one goes into a lot more detail with a lot more samples.

Anyway, the most popular core feature of Spring Cloud Function is function composition. Like Rod Johnson once said, there is no such thing as a complex problem, there is an array of simple problems — you just have to connect them. That's essentially what we're doing here: instead of writing one big piece of code, you can write small pieces in isolation, test them individually, and then, via a pipe, connect them together and compose them into a single function. Transparent type conversion: you don't have to invoke a function with the exact type and worry about how to create that type; we take care of it for you, and that part of the framework is also extensible, so if you don't have the appropriate message converter you can create your own. Reactive support. POJO functions — in other words, the duck typing concept: if something looks like a function, a functional-interface kind of thing, then it is treated as a function. Function arity, which is functions with multiple inputs and outputs. Function routing, something we'll talk about today. Deployment of packaged functions, various platform adapters, and many more.

Anyway, this is what it looks like. You basically create a bean which is a function — you see a couple of examples here — it just registers as a bean, and that's it; we'll take care of the rest. These are very trivial samples, but technically we don't care what's inside the function; we only care about the signature, because that's how the function gets wrapped within the framework.

So, a quick demo. I'm running a little short on time, so I'm going to skip one part of it. Quickly, here's the code: a function configuration with three functions. When you work with Spring Boot, it's always your configuration class and your pom file, because that's where the dependencies are defined — and dependencies are not just libraries, they bring auto-configurations. So what I'm going to do is uncomment these dependencies, and by doing so I'm basically saying I want to be able to invoke this function as a REST endpoint and also as a message handler.
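The pipe-style composition in the demo that follows ("uppercase|reverse") has a plain-JDK analogue in `Function.andThen`; this sketch mirrors the demo functions, though Spring Cloud Function adds catalog lookup and type conversion on top of the same idea:

```java
import java.util.function.Function;

public class ComposeDemo {
    public static void main(String[] args) {
        Function<String, String> uppercase = s -> s.toUpperCase();
        Function<String, String> reverse =
                s -> new StringBuilder(s).reverse().toString();

        // Plain-JDK equivalent of the "uppercase|reverse" pipe definition
        Function<String, String> composed = uppercase.andThen(reverse);

        System.out.println(composed.apply("foo")); // prints "OOF"
    }
}
```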
For example, using the Rabbit binder — that's all I did. So I'm going to start it, and I'm going to go into my Rabbit console. I can send a message, and you can see it's been reversed — and here I actually used function composition, as you can see. I'm going to remove this because I'm going to run another demo. You just saw how I sent a message to Rabbit and all of a sudden invoked the function, but what's interesting is that I can also — oh, what happened, I stopped it — as you can see, I invoked uppercase pipe reverse, and now the same function became a REST endpoint. In other words, the function is the same, but I provided two different invocation models: two different applications, two different execution models.

So now I'm going to do the unthinkable: I'm going to execute cdk deploy. Mark Sailes will talk more about what the CDK is and so on, but basically we're switching quickly to AWS, because I only have about two minutes left to talk. This will go through building and deployment all the way to AWS, which we're going to see in a second, but I'm going to switch back to the PowerPoint and quickly talk to you about AWS.

So one of the integration points, as I mentioned earlier, is integration with serverless platforms such as AWS Lambda. The goals of the Spring Cloud Function and AWS Lambda integration are the ability to extend the simple functional programming model to AWS Lambda — so instead of writing a RequestHandler or RequestStreamHandler and various other handlers, for those of you familiar with AWS, you only have to worry about implementing a function — the ability to integrate with AWS Lambda APIs and tools, which is what I'm doing right now with the CDK, and the ability to simplify management and maintenance of functions deployed as AWS Lambda functions, with features such as the routing function. So why the routing function? Because it has application everywhere, in streaming and in HTTP, but why is it so important in the context of AWS? Because the AWS community requested it. Why? For those of you who are familiar with AWS: when you define a Lambda, you can expose it via an API Gateway. But what if you have 10 functions, 10 lambdas, 20 lambdas — can you imagine managing 10, 20, 30 API Gateways? It's better to manage one, but have some way of saying "I want you to invoke this function or that function," and that's effectively what the routing function does. It's a function that simply routes to other functions; it acts as a gateway, a firewall, a single point of maintenance, dynamic composition, and so on.

So I have about one minute — and perfect, everything got deployed, and now we are in Lambda. Here we go, here's my Lambda that was deployed, the routing gateway. I can invoke it directly or go through the API Gateway. I already have everything preset here, so I'm just going to execute another curl request. In this case I'm posting to the API Gateway and specifically saying I want you to invoke this function definition, uppercase pipe reverse. This is basically a cold start — Mark Sailes will talk a little about cold and warm starts — so we're at the cold start point, and we're done. As you can see, it uppercased and reversed the "foo." So thank you very much, guys; I'm going to stop sharing, and like I said, we're going to hang out in the other room after this for more questions. But now I'm going to introduce Mark Sailes, my friend from the AWS Lambda team. Mark, take it away.

Hi everyone, and thank you for that great whistle-stop tour through Spring Cloud Function.
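Spring Cloud Function's actual `RoutingFunction` resolves its targets from the framework's function catalog; purely as an illustration of the idea Oleg demos — one entry point dispatching on a definition string like the `spring.cloud.function.definition` value in his curl request — here is a minimal plain-Java sketch (the catalog contents and names are hypothetical):

```java
import java.util.Map;
import java.util.function.Function;

public class RoutingSketch {
    // Hypothetical registry standing in for the framework's function catalog
    static final Map<String, Function<String, String>> CATALOG = Map.of(
            "uppercase", s -> s.toUpperCase(),
            "reverse", s -> new StringBuilder(s).reverse().toString());

    // Route a payload by a definition such as "uppercase|reverse"
    static String route(String definition, String payload) {
        Function<String, String> fn = Function.identity();
        for (String name : definition.split("\\|")) {
            fn = fn.andThen(CATALOG.get(name)); // compose named functions in order
        }
        return fn.apply(payload);
    }

    public static void main(String[] args) {
        System.out.println(route("uppercase|reverse", "foo")); // prints "OOF"
    }
}
```

The payoff is exactly what Oleg describes: one deployed gateway, many functions, with the target chosen per request.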
I think it's really interesting to see the similarities between that project and Lambda. Just a quick introduction of who I am: my name is Mark Sailes, I'm a specialist solutions architect at AWS. I've been there about three years now, and recently I've been trying to dive deep into Lambda and serverless and really connect with the community, so it's been great working with the people at Spring.

For those who are not familiar with AWS Lambda, what is it? Well, it's a compute service which allows you to run your code without having to manage any servers at all. You upload your code to AWS, and we run your code in response to events — very much in line with what Oleg's just been talking about. Lambda can be configured to work with many different event sources: as he was talking about earlier, API Gateway, but also messaging systems that you're probably familiar with, such as Kafka, RabbitMQ, and ActiveMQ, our own messaging systems like SQS and SNS, and a whole variety of other systems as well. Once a Lambda function receives an event, it's just like running code anywhere: you can make connections to databases, you can make external calls, you're not really limited in what you can do. And although we're talking about Java today, you can run a number of different languages, and even bring your own languages via something called custom runtimes, which we'll be looking at a little later when we come to talk about Spring Native.

One thing to understand about Lambda is that it has two components to pricing: one is the number of invocations that you make, and the other is the duration of the Lambda function execution. You can determine how much memory is allocated to each Lambda function, and the price depends on how long the execution lasts for a given memory allocation. So the faster your Lambda function runs, the less you pay, which is great. The core tenets for Lambda are shown here.
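Mark's two-part pricing model (invocations plus memory-weighted duration, billed in GB-seconds) can be turned into a back-of-the-envelope estimate. The per-GB-second rate below is a hypothetical figure used only for illustration — check the AWS pricing page for your region's actual numbers:

```java
public class LambdaCostSketch {
    public static void main(String[] args) {
        // Hypothetical per-GB-second rate, for illustration only
        double pricePerGbSecond = 0.0000166667;

        double memoryGb = 256.0 / 1024.0;  // 256 MB allocation
        double durationSeconds = 0.120;    // 120 ms billed duration
        long invocations = 1_000_000;

        // Compute cost = GB allocated x seconds run x invocations x rate
        double cost = memoryGb * durationSeconds * invocations * pricePerGbSecond;
        System.out.printf("compute cost for 1M invocations: $%.2f%n", cost);
    }
}
```

With these assumed numbers the compute portion comes to roughly fifty cents per million invocations, which is why shaving duration (for example with Spring Native, as Mark shows later) translates directly into savings.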
I think they're really interesting to talk about, because a lot of them relate well to each other. We're always trying to produce the best platform for people to go to market the fastest, and people can achieve that by using our platform, because so many of these architectural non-functionals are already pre-baked into the system: the system is already load balanced across multiple geographic locations, it automatically scales up and down with usage, and billing is purely on usage, so you're only ever paying for what you use — you're never billed for any idle time. That fits in quite well with the total cost of ownership, and total cost of ownership in turn works quite well with security: we're patching the system on your behalf and installing security patches. For me this was quite evident when I was a customer, because I remember using AWS Lambda when a vulnerability called Spectre became known to everyone, and AWS Lambda was actually patched before the announcements were made. So as a developer I could just carry on working, whereas other people around me were wondering what to do and how to patch their systems. Security is really important for people using Lambda, and I'd just like to dive into that a little more, because Lambda is a system that's used by hundreds of thousands of customers invoking trillions of requests every month. How do we keep all these customers separate? How do we make sure that your code doesn't affect other customers, and that their code doesn't affect yours? What we do is take a bare-metal server and add our own micro VM, which comes from an open source project we started called Firecracker. It's freely available on GitHub — you can go check it out — and it uses a lot of open source technology plus some of our own contributions to make a really minimal virtual machine which starts very fast and is built specifically for running Lambda and scale-out systems like Lambda.
In that micro VM we add additional sandboxing, so that the processes can only issue certain commands to the kernel, and within that we add your code. We have the runtime, which is Java in this case, and we allow the customer to access some of the disk space, but that is a completely isolated environment compared to another customer, who is in a completely different VM, isolated there. Okay, so that shows what an execution environment is, and the next thing I'd like to talk about is scalability, because one of the great things about Lambda is that you can use it without having to worry about scalability at all.

I mentioned that Lambda doesn't have any servers — so how does it work? When you make a request, what happens behind the scenes is that an execution environment is created. On the previous slide I showed that an EC2 server hosts micro VMs; one of those is going to be created. We're going to find a host with some spare capacity, make a new execution environment, download your code onto that environment, boot up Java, load your code, and then execute your function — and that all happens within seconds. Each execution environment will only handle one invocation at a time, which makes the programming model a lot simpler: you don't have to worry about open file handles and things like that. As requests come in, you're going to see that we actually need to make more execution environments, and this means we're going to have a number of what we call cold starts, where execution environments are prepared ready for use. But there'll be a point where a number of concurrent execution environments are available, and incoming requests can be handled by execution environments that are already prepared — this is called a warm start.
Now, as developers, we get hit by a lot of cold starts, because we're making changes like Oleg did — changing and uploading our code and then testing it. The vast majority of customers will mostly hit warm starts, because the system will be up and running, and as requests come in they will be shared across a number of execution environments which already exist. So as developers we get a poor story, but that's how it is.

And how can Lambda scale? Well, Lambda can add execution environments up to a number of different limits, and I'm going to talk about those limits now so that we can understand the scale at which we can operate. By default, AWS accounts in each region can have a concurrency of 1,000. That isn't 1,000 requests a second: if your requests take less than a second to execute, your requests per second can be much higher. Say you had a thousand concurrency and 100 milliseconds per request — that's 10,000 requests per second. The other thing to think about is what we call burst capacity, which is how many execution environments you can add in a short period of time. Different regions are different sizes, which means we have different burst capacity in different regions: in Ireland, where I typically run my tests from, I can burst up to 3,000 execution environments in a minute, if my account limit allows it. Obviously these limits can be changed with a support ticket, so if you're finding that your workload is using more than a thousand, which is the default account limit, raise a support ticket and that can be adjusted for you.
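The concurrency arithmetic Mark walks through — 1,000 concurrent environments at 100 ms per request giving 10,000 requests per second — is just sustained throughput = concurrency / average duration; a tiny sketch:

```java
public class ThroughputSketch {
    public static void main(String[] args) {
        int concurrencyLimit = 1000;  // default per-region account concurrency
        int avgDurationMillis = 100;  // average request duration

        // Sustained throughput = concurrency / duration-in-seconds
        int requestsPerSecond = concurrencyLimit * 1000 / avgDurationMillis;
        System.out.println(requestsPerSecond + " requests/second"); // prints "10000 requests/second"
    }
}
```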
Okay, so obviously in this sort of architecture cold starts are really important: every time we hit a cold start, we're adding latency to the system, and that's affecting our customer's journey and their likelihood to continue that journey. So what exactly happens in a cold start, and what can I do to optimize for it? Well, your code is downloaded from S3, our storage service, where it's stored; a new Firecracker micro VM is started; the JVM is started; your application is loaded; and your function is invoked. There are a lot of different stages there which you can look into optimizing your code for, and that's a whole talk in itself. This is where I would normally talk about those things, but I'm actually going to be talking about Spring Native instead, so I'd invite you to have a look at a talk by one of my colleagues from Germany — I think it's a really good, very in-depth session about how to optimize Java applications. The alternative is that we can use Spring Native, and I'm going to show how the performance is quite different to how it was before.

Okay, let me switch over — check my time, yeah, we're good — and swap over to IntelliJ and share that instead. Oleg was kind enough to supply me with some code: he gave me a Spring Cloud Function which does some uppercasing — this is the implementation — and what I've done is use a project that he mentioned in his talk, the CDK. CDK stands for Cloud Development Kit, and it's a way of expressing AWS infrastructure as programming code. It's available in a number of different languages, and obviously I'm working in Java here, so this is a Java variable which describes a Lambda function. When Oleg typed cdk deploy, what was happening was that this application ran — it's a Maven project — produced descriptions of the infrastructure he wanted, and then executed them on his behalf into his AWS account using his credentials. One of the neat things about this is that I can tell it to find the code on my classpath or on my local machine, or I can tell it to use a Docker container to do the build — and because I want to be a good citizen in my development team, I'm going to use a Docker container to do my builds, so that multiple people can share the same container. This section of the code describes which Docker container I want to use; I map my local .m2 directory so the builds don't take as long; and at the top here is a list of arguments that I want the Docker container to execute. The most interesting bit is this: we're doing a Maven clean package, and as you've probably seen from other talks in this conference, we're adding this new profile called native, which tells Spring Cloud Function to package using the native functionality. For those not aware, it uses GraalVM's native-image tool behind the scenes to produce a native binary of our Java code, and this has the advantages of a much smaller footprint — which is great for Lambda, being such a constrained execution environment — and a really, really fast startup time. Anybody who's used Spring Native will know that one of the downsides is that compilation takes a bit longer, so in this demo I won't be able to show the compilation, because it takes a few minutes, but I've prepared one already, and I'm just going to show you a quick demo of that.

If I switch over to my console screen: now we're in the AWS console, I have a Lambda function called uppercase, and I can explore it as I normally would. In this case I have a memory setting of 256 megabytes, which is quite low if you've ever used Lambda with Java, and unlike Oleg I'm going to use the console to inject an event. This event is going to say "hello mark," and we're doing uppercasing, so if everything goes well I should get a response that says "hello mark" in uppercase. This is the first time I've invoked this function, so I have a cold start — a cold start in the console shows up as the init duration, so if you ever see an init duration, that's a cold start. Now, my cold start was less than 500 milliseconds; if I was using Spring Cloud Function without Spring Native, I would probably expect to see two, three, four seconds, maybe.
So we've really taken a good chunk of the time off here. Now, after the cold start, I can execute the function again — oh, the beauty of live demos. There's obviously been a problem here, so let me force another cold start and see if I can show you that again. That's timed out... this should be my cold start, less than 500 milliseconds, and then my warm start — there you go: no cold start, and a much faster warm start duration. So there you go, that's my demo. Thank you very much, back to Oleg.

Thank you, Mark, that was extremely interesting and useful, and of course the demo gods had their say. Anyway, I hope you guys enjoyed AWS and Lambda and all the interesting things about them, but right now we're going to switch context completely and go into the world of streaming with Solace as the messaging platform, and I'm turning it over to Marc DiPasquale.

Hey, hopefully everybody can hear me now, and you should see my screen. I'm Marc DiPasquale and I'm a developer advocate at Solace. For those who are not familiar, Solace is a technology provider offering a platform that enables organizations to design, implement, and operate event-driven architectures. Along those lines, I'm going to talk about how to use Spring Cloud Function — of course, that's the topic of this talk — with Spring Cloud Stream to create the kind of event-driven microservices that we see a lot of customers using today. As part of that, Oleg mentioned earlier that Spring Cloud Functions can be written once and then invoked and activated in many different ways, and we're going to look at using Spring Cloud Stream to implement the streaming part of this.

So I'll jump in and give a quick intro to Spring Cloud Stream. If you're not familiar with it, Spring Cloud Stream is a Spring project that provides a framework for creating event-driven microservices that can be used with pluggable messaging systems. At the bottom of the image here you see Kafka, Rabbit, Amazon Kinesis, and of course Solace PubSub+; there are a few others as well that you can check out on the project page. All of those are pluggable via binders: you don't have to worry about using their specific APIs, you just learn how to use the Spring Cloud Stream framework, and the binder handles the communication with the broker itself. The project is based on Spring Boot, and it uses the foundation laid over the years by Spring Integration and Spring Messaging. The code, as we'll see, is now written with Spring Cloud Function as of version 3, and there are three different application types. On my screen you see the source application type — these apps publish events into your event broker of choice. On the right side of the screen you'll see the sink application type — these apps are receivers, or consumers, of those events. And in the middle we have the processor applications, which are a combination of the two: they consume events, apply your business logic, and publish (usually related) events.

So that's the quick intro to Spring Cloud Stream. As I mentioned before, it is an abstraction framework for the messaging, so you don't have to learn the messaging APIs themselves; you just have to learn the communication models you want to use. The framework supports three different communication models. One is persistent publish-subscribe, which allows your subscribers to be independent from each other and receive all of the events in a specific stream, in order. The second is consumer groups, which allow you to fan out and load balance events across multiple consumers participating in a given group. And the third is stateful partitioning support, a model that allows for in-order processing, for consistency and performance.
that model allows for in-order processing for consistency and performance and then to bring it back to spring cloud function we write the the code for spring cloud stream now using spring cloud function and so all those functions that oleg mentioned earlier your java util function supplier function and consumer that were introduced back in java 8 map to those spring cloud stream application types so you have your supplier which is your sender or your source of events you have your function which implements the processor pattern and you have your consumer that is your of course receives messages and processes them right and so on the screen here we can see an example of what a spring cloud stream app using spring cloud function might look like it probably looks pretty familiar because you've now seen something similar from both mark and oleg but we just have a simple uppercase function and and that uppercase function is receiving a a string and outputting a string and you know nothing here looks like any sort of messaging code and so i just wanted to quickly uh demonstrate how this might work so i have that that same function here in my ide and as oleg mentioned earlier in order to you know kind of invoke your spring cloud function via spring cloud stream in your palm you just need to include the right dependencies so remember this is a spring boot app so spring boot kind of knows to check for the dependencies and see what's there and do the auto configuration for you so it sees that i have spring cloud stream included in my palm and the spring cloud starter stream solace which says hey the solas binder is available to check and that binder is what knows how to use the solace messaging apis to send and receive messages so if i run my app i'm going to run it as a spring boot app i'm just using spring tool suite here to run it this will go ahead and start up i can see my application is run and i'm going to go ahead and invoke my app and so the what's going to happen 
here is that I'm going to send a message in using a Solace tool, and it will arrive at our application via the uppercase-in-0 topic. That is the default topic configuration for this function; you can of course configure it to be whatever you like, but we're going to stick with the default for the demo. The function will then take the payload of that message, treat it as a String, uppercase that String, and return it from the function, and Spring Cloud Stream, in conjunction with Spring Cloud Function, puts that into the message payload and sends it back out onto the uppercase-out-0 topic. Just to show that, I have this Try Me! tab in the Solace PubSub+ Manager, which allows me to send and receive test messages with Solace. I'm going to go ahead and connect a subscriber and a publisher: I'll listen on uppercase-out-0, go ahead and subscribe there, and I'm going to publish on uppercase-in-0. I have a local Solace broker running (it's a Docker container), and by default Spring Cloud Stream knows how to connect to it via the Solace binder, so I don't have to configure any connection information. We'll say "hello spring one", and I'm going to go ahead and publish that, and I can see that my subscriber did indeed receive the fully uppercased message on the subscribing side; and in our application we can see that it did indeed process that message as well. So that is the very simple introduction to Spring Cloud Stream. I will say, if you want to learn more, I highly recommend you check out Oleg's talk from last year; it will be in the resources slide at the end of this talk, and we'll supply the deck in the Slack channel. Also, the Spring Cloud Stream reference guide is really the documentation for the project; I highly recommend going in there if you have any questions, as it has very detailed information on how to use all the different features of Spring Cloud Stream. But for the rest of the talk I want to cover a feature that's kind of near and dear to my heart, working at Solace and with a lot of organizations implementing event-driven architectures, and that is how these organizations are using dynamic destinations, or topics, and how Spring Cloud Stream allows you to do that as well. We refer to these as dynamic destinations in the framework; for many brokers, those destinations are topics. You might say: Mark, why is this near and dear to your heart? The reason is that dynamic destinations really allow for improved decoupling. One of the main reasons organizations adopt an EDA, an event-driven architecture, is that it promises a more decoupled system that allows for easier scaling, easier extensibility, and so on. But just adopting an EDA doesn't guarantee this: you need a proper design that doesn't tightly couple your event-driven apps just like their synchronous predecessors, and dynamic destinations can go a long way toward providing a flexible coupling point. What do I mean by that? Well, producers publish to topics, and consumers subscribe to topics to receive the events being exchanged; in an event-driven system, these topics are really the coupling point of the EDA. If we can make this point flexible, then we allow that coupling to become as loose as possible. As an example, you can use a hierarchical structure allowing for different levels. In Solace, MQTT, IBM, and a lot of the messaging systems out there, you'll use a forward slash to delimit these levels, and once you do that, each level can be a variable or an enumeration derived from the data itself. This allows you to use the topic name as the channel, and it allows you to really describe the contents of the messages. The example here at the bottom is an Internet of
Things (IoT) connected-vehicles example, and we have six different levels in our topic. The first level is really which app it is, in this case the bus tracking app; the second is the type of event or message exchanged over that channel, in this case a GPS update; and then we have the bus number, the route information, the latitude, and the longitude. This allows your topic to be very specific about what the message is, which is great: publishers can publish something that describes exactly what the contents of the message are. The publishers publish to that topic, and they don't care what the consumers are doing. Then, thinking about it from our IoT connected-vehicles perspective, the consumers can look at the stream of events, which might be an infinite stream, and say: okay, which events do I want? Do I want all of those events, some of those events, et cetera? In this case, subscriber number one might say: you know what, I only want the events for bus route number 10. Subscriber number two might say: I want to consume all the events that are near the airport. And you might then have subscriber number three, which might also be a consumer group of applications, that says: I want to consume that entire stream, maybe consume it all now and do analytics on it. So by using the hierarchical topic structure, all of those things can happen without the publisher having to change: each of those subscribers can choose the subset of the stream that they want to consume without having to say, hey, Mr. Publisher, can you publish to this other topic, or change the partitions you're publishing to. Instead, these consumers can look at that well-defined topic hierarchy and use wildcards to decide exactly what they want to consume. In the middle of the screen here we can see that if you want to consume all data from all the buses on a particular route, say route 95, you can use wildcards at specific levels and then specify route 95 at the route level. Our second example was: I want to receive all the messages near the airport. Well, you can define a bounding box, and then in your topic subscription specify which latitude and longitude ranges you want, using wildcards to represent even part of a level. This allows the consumers to decide, at a fine-grained level, exactly which events they want, and then the broker does the filtering for them. I want to point out that one of the benefits of this is that each of those applications only gets the events it subscribes to; they don't get all of the events and then throw 90% of them on the floor because they're not interested. They only get the 10% of the stream that they want, or the 100%, or the 45%. I wanted to show a quick example of this, actually. Fitting with our bus example, one of my colleagues created a real-time connected-buses demo: it's a demo of buses driving around Singapore. All the buses, and we've got, I don't know, a couple hundred of them, are publishing GPS updates on a topic hierarchy, and I have this web app that's a subscriber of those events. Right now I can see it's receiving 100 to 175 messages a second. But what if I wanted only buses on route 10? Then I can change my subscription and say I only want to subscribe to route 10, and in real time the broker says: okay, you're only interested in events on route 10.
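To make the wildcard matching described above concrete, here is a minimal, self-contained sketch. This is not the Solace API, just an illustration of simplified Solace-style matching rules: `*` matches one whole level (or, as a suffix like `1.2*`, a prefix of one level), and a trailing `>` matches one or more remaining levels. The topic layout follows the six-level bus example from the talk; the app name `bustrack` and the sample values are made up for illustration.

```java
public class TopicMatcher {
    // Returns true if the topic matches the subscription pattern.
    // Simplified Solace-style wildcard rules:
    //   "*"   matches exactly one level; "1.2*" matches a level starting with "1.2"
    //   ">"   as the final pattern level matches one or more remaining levels
    static boolean matches(String pattern, String topic) {
        String[] p = pattern.split("/");
        String[] t = topic.split("/");
        for (int i = 0; i < p.length; i++) {
            if (p[i].equals(">") && i == p.length - 1) {
                return t.length > i; // at least one more topic level must exist
            }
            if (i >= t.length) return false;
            if (p[i].endsWith("*")) {
                String prefix = p[i].substring(0, p[i].length() - 1);
                if (!t[i].startsWith(prefix)) return false;
            } else if (!p[i].equals(t[i])) {
                return false;
            }
        }
        // Without a trailing ">", the level counts must line up exactly.
        return p.length == t.length;
    }

    public static void main(String[] args) {
        // app/eventType/busNumber/route/latitude/longitude
        String gps = "bustrack/gps/bus1234/route10/1.290/103.851";

        // Subscriber 1: only route 10, any bus, any location
        System.out.println(matches("bustrack/gps/*/route10/>", gps)); // true
        // Subscriber 3: the entire stream
        System.out.println(matches("bustrack/>", gps));               // true
        // Bounding box: prefix wildcards on part of the lat/lon levels
        System.out.println(matches("bustrack/gps/*/*/1.2*/103.8*", gps)); // true
        // A different route does not match
        System.out.println(matches("bustrack/gps/*/route95/>", gps));     // false
    }
}
```

The broker-side filtering the talk demonstrates is exactly this kind of per-subscription match, applied by the broker so each consumer receives only the subset of the stream it asked for.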
I'll zoom in a little bit, and now I'm only receiving buses on that route. If I click on one of these buses (oh, probably a bad example, behind that one), I can see the topic string here: there's a route 10 in there, and that's what I'm filtering on. I can see that route 10 goes from the west side of the country across to the east side, and I see only those buses. Whereas our other consumer (let me go ahead and bring all the events back in), our other consumer might only be interested in one area of the country, so he or she, or the application, could say: you know what, I'm only interested in events over here by the airport; let me create my bounding box and subscribe to only the events there. Now I can see the buses on the rest of the map start to disappear, because I don't care about those events: the broker is only sending me the events for the buses in my area, based on what I said I want to subscribe to. I can see my received message rate is now down to 20 messages a second, so I'm not receiving those other 120 messages and throwing them away; I'm only receiving the events that I am actually interested in. So you can see how that well-defined topic hierarchy really allowed the consumers to filter on exactly what they wanted, and obviously I didn't have to go to each bus, upgrade its software, and tell it to publish to a different topic. All right, so now that I've shown you a working example (obviously that map was not using Spring, but in the Solace event broker, and other event brokers, there are protocol translations and the like, so I can consume those events in Spring Cloud Stream, or publish events in Spring Cloud Stream and consume them in an MQTT app like the one I just showed), how exactly do I publish to dynamic topics using Spring Cloud Stream? There are multiple ways the framework supports this. The first is using StreamBridge, and StreamBridge, as you can see on the screen here, allows the developer to work with just the POJO, the plain old Java object. You'll notice that in the function here I'm not dealing with the Message object at all. You can inject the StreamBridge object, and then in your Spring Cloud Function you can use streamBridge.send: you specify the topic you'd like to send to and then the payload of the message you'd like to send. This allows me to have a method, such as the getMyTopicUsingLogic method here, to apply my business logic and determine the dynamic destination I want to publish to. So in our example from a second ago, the bus would check its GPS location and say: this is my latitude, this is my longitude, this is my bus number; it would build that topic and then publish to it. Something to call out here: when you use StreamBridge, Spring manages each topic as its own Spring Integration channel, which is useful for metrics you can then query via JMX and so on, and Spring Cloud Stream will also cache a configurable number of these channels. I think the default is 10 or 20, but you can configure it if you know you're sending to the same five places, or the same 50 channels. So that's the first option. Option two is using a few special headers. The first of these headers allows the dynamic publishing to be handled at the framework level itself, and that is the spring.cloud.stream.sendto.destination header. The second allows the dynamic destination resolution to be handled by the binder, which takes away a little bit of the framework overhead; on that second one you definitely want to check and make sure your binder supports it (the Solace binder is an example of one that does). If you go this route, you are now actually dealing with the Message object, because we are adding a message header, so again you can define the topic, or the destination, that you want to publish to using your business logic, and define your payload however you want; but when you actually go to send the message, you return a Message object, and on that Message object you set the header. In this case I'm using the BinderHeaders.TARGET_DESTINATION header, but you could also use spring.cloud.stream.sendto.destination. By doing that, every time you execute this function your business logic could be different and you could be publishing to a different topic, which allows you to publish to dynamic destinations; this is becoming more and more popular, and I actually just had a question on it from a customer about 30 minutes before my talk. All right, how much time is left? I've got a few minutes. This application here will be available afterwards in the GitHub repo, and I actually implemented these options in that same class, so you can comment and uncomment them and test them out. I've got the StreamBridge option here and the target-destination option; I'll just show one of them real quick, since there's only about a minute left before I hand back to Oleg. Essentially, this is going to consume a message on the function's default -in-0 topic, then execute my getMyTopicUsingLogic method, which is simply going to publish on a "hello/spring/one" topic plus a counter, which I think starts at one, or starts at zero. So let's check it out. If I go back to my Try Me! tab and clear my messages, I want to subscribe; I'll just subscribe to everything real quick... actually, I don't want to subscribe to everything, I want to subscribe to hello/spring/one. All right, and if I publish "hello spring one", I see that my "hello spring one" message was processed by my function, and it's dynamically adding the
zero on the end. If I change this to my second message, it's now published onto hello/spring/one/1, and I'm subscribing using the greater-than here, which is a wildcard in Solace, so as I keep sending, I can see the topic change and I'm subscribed to all of them. That is how you send to dynamic destinations using Spring Cloud Stream. So, a quick takeaway from this talk as a whole: Spring Cloud Stream allows you to invoke and activate your Spring Cloud Functions for streaming use cases, and provides functionality, like publishing to dynamic destinations, that lets you solve a lot of these real-world problems. If you want to learn more, we have office hours at the Solace booth until 1 p.m., but obviously we have our Q&A session right after this that we'll be jumping into. So, back to you, Oleg.

Thank you, Mark. This was actually very cool, because as much time as I've spent with you, I learned something and I got excited about something. It's interesting to see that at the end of the day the functions remain the same, and Spring Cloud Stream is the same, but the topology of topic names and queue names that Solace provides allows for completely different use cases, yet still using the same frameworks you use for other use cases, and obviously still using the same functional implementations, which have no knowledge of your topology and nothing to do with the specifics of Solace, or AWS Lambda, or anything else of that nature. Actually, we're doing very well on time; we still have 45 seconds, so what I'd like to do is simply show this slide with all the resources relevant to this presentation. There is a link to the GitHub repo, which is also included in the Slack channel for this talk as a pinned message. I'm not going to describe every single link; you can just click on them and figure out what each one does, but it's all relevant to today's presentation. Finally, here is who we are and how to get in touch with us; the link contains the PDF of this presentation, which has a slide for each speaker with our information. And with that, I would like to thank Mark and Mark. Thank you!

Wow, what a great presentation there by Mark, Mark, and Oleg. I certainly learned a lot listening to that, even though I'm part of the Spring Cloud team; it's hard to keep track of everything. There were some great demonstrations of Spring Cloud Function, and I really enjoyed listening to that presentation. Great job, guys. Everyone, grab another coffee and get ready for some more content coming right up.
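As a recap of the dynamic-destination section of the talk, the two publishing options can be sketched as follows. Note that this is a broker-free illustration, not Spring code: the `Sender` interface and `Msg` record below are hypothetical stand-ins for Spring Cloud Stream's `StreamBridge` and Spring's `Message` types, so the sketch runs on its own. The `spring.cloud.stream.sendto.destination` header name is the real framework header mentioned in the talk; the `bustrack/...` topic layout and sample values are made up to mirror the bus example.

```java
import java.util.HashMap;
import java.util.Map;

public class DynamicDestinations {
    // Hypothetical stand-in for Spring Cloud Stream's StreamBridge.
    interface Sender { void send(String destination, Object payload); }

    // Hypothetical stand-in for org.springframework.messaging.Message.
    record Msg(Object payload, Map<String, Object> headers) {}

    // Business logic deriving the topic from the data itself,
    // following the talk's hierarchy: app/eventType/busNo/route/lat/lon.
    static String topicFor(String busNo, String route, double lat, double lon) {
        return "bustrack/gps/" + busNo + "/" + route + "/" + lat + "/" + lon;
    }

    // Option 1, StreamBridge style: imperatively send to a computed destination.
    static void publishViaBridge(Sender bridge, String busNo, String route,
                                 double lat, double lon, String payload) {
        bridge.send(topicFor(busNo, route, lat, lon), payload);
    }

    // Option 2, header style: return a message carrying the target destination
    // in the spring.cloud.stream.sendto.destination header, and let the
    // framework (or the binder, with a binder-specific header) route it.
    static Msg publishViaHeader(String busNo, String route,
                                double lat, double lon, String payload) {
        Map<String, Object> headers = new HashMap<>();
        headers.put("spring.cloud.stream.sendto.destination",
                    topicFor(busNo, route, lat, lon));
        return new Msg(payload, headers);
    }

    public static void main(String[] args) {
        // A recording Sender shows what option 1 would publish, and where.
        Map<String, Object> sent = new HashMap<>();
        publishViaBridge(sent::put, "bus1234", "route10", 1.29, 103.85, "gps-update");
        System.out.println(sent);

        // Option 2: the destination travels as a header on the returned message.
        Msg m = publishViaHeader("bus1234", "route10", 1.29, 103.85, "gps-update");
        System.out.println(m.headers().get("spring.cloud.stream.sendto.destination"));
    }
}
```

In a real Spring Cloud Stream app, option 1 would inject the actual `StreamBridge` bean and option 2 would build a `Message` with `MessageBuilder`; the point of the sketch is that in both cases the destination is computed from the event data, which is what makes the topic a flexible coupling point.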
Info
Channel: SpringDeveloper
Views: 603
Rating: 5 out of 5
Keywords: Core Framework, Kotlin, Serverless/Microservices
Id: YkkFLcpUKNI
Length: 56min 48sec (3408 seconds)
Published: Wed Sep 22 2021