Applied Event Streaming With Apache Kafka, Kotlin, and Ktor

Captions
Hello, and we are live again. Welcome back to our YouTube channel, the YouTube channel of Kotlin. Have you heard of Kotlin? Of course you did. It's me today again, Anton Arhipov, and welcome back, Viktor. Hello, hello everyone, it's great to be back, and it's great that you folks keep inviting me. It's going to be a great show today, folks. I'm excited too.

All right, so this is the fourth webinar in our second series of webinars on Kotlin for the server side, and today we are talking about Kafka again: Kafka with Kotlin and Ktor, by the way, not Spring. Last time folks were upset that we were using a lot of Spring magic, so we decided to bring Ktor this time. Yes, today we're going to be doing less magic; it will be more practical effects rather than magic. So we prepared a very nice demo for you, and Viktor will go over it. But before we start, if you didn't see yesterday's webinar about Ktor, what the framework is and what kind of features it provides, I will do a brief introduction, so that you know we are not just using some obscure library or a new language. Ktor: what is it? It's a Kotlin-first web framework. It provides a server-side component and a client. The client, by the way, is a multiplatform one, so you can target the JVM, JavaScript, and even native with it; it's very popular in mobile development. The server-side component is fully JVM for now. Maybe one day we will have it multiplatform as well and run it on native platforms, but so far it's JVM. It's tightly integrated with Kotlin and it's very lightweight, too lightweight even. While we were creating the demo we realized it's sometimes too flexible: when you are used to Spring, you have all kinds of recipes for how to do things, and with Ktor you actually have to adjust your mindset a little bit
and start writing real code. All right. There is very nice documentation, and we are going to use this framework for the demo. The framework provides a bunch of features; if you go to the documentation, there is a list of them, and we are using some, especially the WebSockets integration on the server side. We also have Webjars integration, so that you don't have to copy JavaScript dependencies into your project; we use the Webjars functionality to download and serve those JavaScript libraries to the UI. What else did we use, Viktor, do you remember? I think those are the two primary features, but of course Viktor remembers: we are using Kafka, and we created our own custom feature to install the Kafka component into the application. Yeah, so we will work through the code and show you what we did to make this work. We tried to use Ktor for this demo with love and passion, pushing on some of its boundaries from the perspective of how people might want to use it in the real world.

And you have this code already published in a GitHub repository? Exactly, I will share the link in the chat in a few moments, so you can click it and browse the code along with us if you want. I think we are ready to start, but before we do, just a quick show of hands in the chat: where are you from, why are you interested in the topic, are you using Kafka or not, are you using Ktor or not, maybe you are using some other framework with Kafka? Please share. And I think we are ready to start, Viktor, right? So let's start with the demo. Let's show the awesome UI. You know that meme about the front end and the back end: we will first show you the nice picture.
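The two Ktor features mentioned above, WebSockets and Webjars, are installed into an application module. Here is a minimal sketch of what such a shared module might look like; the function name `commonModule` is hypothetical, and the imports follow the Ktor 1.x package layout that was current at the time of the webinar.

```kotlin
import io.ktor.application.*
import io.ktor.webjars.*
import io.ktor.websocket.*

// Shared module sketch: both the driver and rider apps install
// the same two features discussed in the webinar.
fun Application.commonModule() {
    install(WebSockets)      // enables webSocket { } routing blocks (server push)
    install(Webjars) {       // serves packaged JS/CSS dependencies
        path = "/webjars"    // e.g. GET /webjars/vue/vue.min.js
    }
}
```

In Ktor, `install` wires a feature into the application pipeline; the lambda is that feature's configuration block.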
And then we move backwards: we go to the back end and show you the real picture. So let me see... my infrastructure right now is down, so I will show a few quick bits here; give me one quick second. First of all, I would like to start by talking a little bit about microservice architecture and why I think it is important, and after that we will go into the bits of this talk.

When you start doing some cool stuff, in the very beginning everything is nice and clean. There are tools that allow you to generate a nice clean structure, right, Anton? We can use something like start.ktor.io and get a nice clean structure and file layout, and we start with easy-to-understand bits of code. Then we start adding things: more functionality, more REST endpoints, more services, more integrations. We start adding databases and dealing with different types of databases. As you can see, things are getting a little bit darker now; it's not that new shiny thing anymore. Things get even darker when you start getting new requests from your users, adding new functionality, and bringing new people into the project. People add more code, some code becomes obsolete, you find dead code. Some tools can help you find that code, but you're afraid to touch it, because you don't understand what is going on there; too many things are happening,
you're talking to your admin, you're talking to your architect, and he's trying to show how all the different components interact together. This is where we enter the world of traditional enterprise software. The remedy we started applying a couple of years ago is refactoring using microservices. With microservices you essentially disintegrate your application, and once you have disintegrated it, the question becomes what you can do to make the pieces work together.

In the good old days, when everything was much simpler, everything was one application. How many of you remember those times? We had the three-tier blueprint from Sun Microsystems for Java EE; that's how you developed applications. Not anymore: applications are now distributed. That's fine, and a single application is easy to reason about, but when things get more and more complex, you start tearing your hair out trying to explain to people how it all works. And why do you need to explain it? Because you need to make changes in the system and integrate those services. The hardest part is this: breaking down the monolith is not the difficult bit, but how do we integrate the pieces, how do we start exchanging data? There are good ways of integrating microservices and there are not-so-good ways, so we will talk about a few approaches, and you will understand why I'm talking about this in a few seconds. File system: how many of you have actually tried to integrate two systems through the file system? Write it down in the comments; it would be great to learn if someone actually did this in
their system. It just doesn't feel right; it feels really wrong: one system writes a file to the file system, another system reads that file. And how would you exchange information between systems that are distributed? You'd need to deal with distributed file systems. I remember sending messages using files through FTP. Exactly, that's exactly how we were doing it while working at big banks. You remember the landing areas: the file needs to be structured a certain way, someone messes up the name of the file, everything goes to reprocessing, some payments don't get processed, things like that. Not cool.

So people started saying: all right, we can enforce some rules. What about databases, can we use a database to exchange this information? Well, it's possible, and it sounds reasonable, but there are complications. You can say, "Viktor, but everyone knows databases, we can just have schemas for everyone, SQL is easy and everyone knows how to use it." That's true. However, once services start using a database to exchange information, more and more services start pulling the blanket toward themselves: ideas from one domain leak into another, and this way we drag the bounded context of one service into the context of another. All of a sudden your application needs some changes, and multiple other applications are affected. A database is still great for providing persistence for your service, inside your bounded context where you have control over things, but it's not a good way to communicate between microservices: it's incredibly difficult to agree on
anything once you start changing the schema. Then we have remote procedure calls. RPC feels and looks the same as calling our own code, because when we broke down the monolith, we broke down those function calls: one module calls another module, so why can't we keep doing that? RPC is a little better, but it also has downsides, because it requires direct communication. You can agree on an API and the structure of the calls and publish it; there are intermediate frameworks, like OpenAPI or gRPC, to define the structure of the messages and services and publish them, so other teams can use them, and it becomes much easier to agree. However, there is a very big problem with these systems: dependencies and cascading failures. If your service depends on one service that depends on another service, you have a cascade of those calls, and you may have a problem.

And how would you debug such a system? Anton, how would you figure out which service causes the problem? Well, we'd probably use tools like Wireshark, install some network monitors, write logs, maybe Splunk. One of the things I learned very early in my career, when I also started bringing in tools to debug problems, was from a senior engineer who said: the best debugging tool is System.out.println. So you use System.out.println, write the results of every service to the console, and try to follow along. Essentially you need a log, right? You need the log file that you also mentioned. So let's look at this log: what kind of properties does it have? First of all, we see it has a date, so we know that a particular thing happened at a
particular time in a particular place. We know where the thing happened: in this case it's our class and the location in our code. The log is also immutable, right? Even if you do some hackery and try to change the logs, you're not really changing them; the log becomes your source of truth. You can recover the state of the system by reading the log in sequential order, and you can use different tools to merge multiple logs so you get the full picture of what happened across services; log aggregation was a whole line of business in itself. But maybe you need a different kind of log, one that allows your systems to communicate, not just record diagnostics.

A record in this log looks very much like an event to me. And what is an event? It's something that happened, something that's important for us at a given moment in time. Sometimes we don't care about the info messages, but sometimes it's good to know that all the info messages are there, so it looks like we're good, we're in the right place. An event is a combination of a notification, when we need to say "hey, I'm done, some job finished," and a state transfer, when we send some data. We're going to use both patterns today, notification and state transfer, in our demo app. Events are immutable: once you've said something, you cannot change it. It's like in real life: you have an argument with your significant other, you say something you're not supposed to say, and the only thing you can do is send another message and make sure the receiving side processes it in time, so you get a good outcome. And every system has one central abstraction.
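The "notification plus state transfer" idea above can be made concrete with the kind of record the demo exchanges. This is an illustrative shape only; the field names are assumptions, not the repository's actual model.

```kotlin
// A single event combines a notification (the status change)
// with state transfer (the current coordinates).
data class LocationEvent(
    val id: String,       // driver or rider id, also usable as the record key
    val status: String,   // e.g. "available", "pickup"
    val lat: Double,      // latitude of the current position
    val lon: Double       // longitude of the current position
)
```

Because events are immutable, a correction is never an edit of an old record; it is simply a newer `LocationEvent` appended to the log.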
In databases, tables are the abstraction; in Hadoop it's files, because it's a file system; and there is a system whose central abstraction is the log, a system called Kafka. In Kafka we use the log, which brings us to the point I wanted to show right now. I promised there would be no more slides; the rest is demo.

Now we need to bring Kafka into the mix. Kafka is the system that allows our two services to integrate. What kind of services do we have, Anton? We have two services: a driver service and a rider service. The demo we're showing today is an emulation of some ride-hailing functionality: a rider wants to be picked up, and a driver sees there's a rider and can go and pick him or her up. I'm going to run Kafka in the cloud, simply because it has a nice UI and I can show you all the bits flowing through the system, rather than doing everything in the console. We're programmers, we want the console, but sometimes showing the internals of the application in a UI is useful. I'll just call the cluster "kotlin-webinars". To connect to it, I need to use these credentials in our application, so now I'm switching to my screen. Folks, let us know in the chat, in the comments, if you are able to see this nicely. Yes, exactly, we're doing Uber; we're coming for you, we're writing the Uber killer app today. A couple of things: let me put this in my configuration. I have a kafka-driver config that depends on this key; I will put the key here and explain what it means in a second, once I show you this part. And in this
case I will say "select configuration": it's going to be the kotlin-webinars cluster, this is our API key, this is our endpoint. Kafka has multiple components; one of them is the server-side broker. Hm, that's weird, because it's supposed to be different, but hey, we'll see how it goes. Okay, so we're good. Let's see, we have a topic. The data of our system will flow through Kafka topics, and since we have two services, we're going to use two topics, one for the rider app and one for the driver app. It's a Ktor application, and in this application we're using a couple of interesting things.

Let's take a look at the first configuration. We have this driver... or should I run it first? Let me run it first, so people start seeing things, and after that I will do the explanation, just in case we need to do some debugging. I will run my driver app here so you folks will be able to see. Okay, my application just started. It just prints out some WebSocket things on the screen, but the actual thing is happening here: it's a map, and I see my driver on the map. There are a couple of things this application has already done. Ah yes, of course, I forgot to change that, as we discussed yesterday; we'll fix it in a second. This application connects to a Kafka topic, and it has an application module with all the common things the application uses: we have Webjars, which Anton just explained, to bring in dependencies like the Vue framework and some icons, and we enable WebSockets to provide the server-side push. Anton, can you share the repository with the examples for the folks? Oh, they
were asking. Sure, yeah, Anton will share it in a second. The next thing: we have WebSockets, we have Webjars, and now we have this Kafka feature. You see, there's a cool thing you can do, install(Kafka), where you provide the configuration for your Kafka setup. Since I'm running this in the cloud and I want a higher level of durability for my data, my replication factor needs to be not one but three; that way the data is durable and the topics are created successfully. Meanwhile, you can find this repository on Anton's GitHub.

While my application is running, multiple things have already happened. First of all, the application connected to my cluster, and using this Kafka feature, it was able to create the topics in the cloud. The configuration file we use here, kafka-driver.conf, includes the information needed to connect to the cluster, some security information, and the configuration our application uses to produce and consume data. This connection information is also used by a thing called AdminClient, and the AdminClient creates our topics. Why do we need a second topic? Because we have a second application. Let me start the rider application, which also has to connect to the cloud. Now I have the driver; let me put them side by side so you can see what is going on. I will run the rider on the other side, and now you should be able to see the rider. As a rider, I need to go somewhere, so let's say I want to go to the Flatiron Building in New York.
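The install(Kafka) block described in this segment is a custom feature the presenters wrote, not part of Ktor. A hypothetical sketch of what its configuration DSL might look like; the names (`Kafka`, `configPath`, `topic`, `partitions`, `replicas`) are illustrative assumptions:

```kotlin
// Hypothetical configuration block for the custom Kafka feature:
// connection settings come from a file, topics are declared inline.
install(Kafka) {
    configPath = "kafka-driver.conf"  // bootstrap servers + Confluent Cloud credentials
    topic("driver") {
        partitions = 2
        replicas = 3    // replication factor 3 for durability in the cloud
    }
    topic("rider") {
        partitions = 2
        replicas = 3
    }
}
```

The point of the DSL is that switching from the cloud cluster to a local broker only means editing the referenced config file, not the code.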
Now my driver will see a pointer on the map showing where my user is. Let me quickly refresh and we'll see if it's there; when we were working on the front end, there were fun and games sometimes. As you can see here on the driver's screen, I just pushed this notification from the client, from the front end. I can go to my driver screen, click this, and I will go to pick up this user. Now the two screens are synchronized, and both the driver and the rider see the ETA, and this is happening in real time. Essentially, it will take roughly 30 minutes to get from here to Manhattan, so at the end of this presentation we should be able to check whether this thing actually works.

Now let me show you what is happening under the hood. We have two topics in our cloud. Information about the driver, the location and status of the current driver, is pushed into the driver topic, so we know the coordinates of this guy, and we can see that his current status is "pickup", meaning he is going to get someone. How does the driver know where to pick up? The driver is actually reading data from the rider topic, and we'll see this in a second: partition number one. Since it's partition number one, I can reset to the very beginning. Kafka is not only a messaging system; it allows me to travel back in time, so I can see the data from the very beginning. This partition holds all the history for the rider: everything the rider published is here, and if I go to offset zero, that's the very beginning of my rider's history. Come on, my UI is breaking... okay, let me make it a little bit bigger. Here is what I wanted to show you.
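Replaying a topic from offset zero can also be sketched at the command line with the Confluent Cloud CLI of that era. The exact command and flags are an assumption and may differ by CLI version:

```shell
# Consume the rider topic from the beginning and pretty-print the JSON
# payloads with jq (the combination shown later in the demo).
ccloud kafka topic consume rider --from-beginning | jq .
```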
Let me download this and we'll be able to see... looks like the UI just froze a little. We see the data as JSON, and if I consume the messages from the rider topic with the ccloud CLI and pipe them through jq, you can see nice things here: when my application started, my rider posted a message on the Kafka topic saying "I'm available", keyed with its current coordinates. So when my rider appeared on the screen, my driver received a message that a rider is available, and when I click, I can go and pick them up. That's why we need two topics here.

The next part is the assets; we'll talk about those in a bit. The important thing here is the WebSocket part. If I go to the extension part where we put all the code for these endpoints... we have this endpoint, and if I show you the network tab (let me remove my debugger breakpoints), each application publishes some of its data into the WebSocket. Ah, okay, I need to refresh... I don't want to refresh, because it will kill the current progress, but essentially each application has JavaScript code that constantly reads the current coordinates and pushes them into the WebSocket. Behind the WebSocket endpoint we have a Kafka producer that pushes the current coordinates into a topic: for the driver app it's the driver topic, because it pushes the driver's coordinates. And we have a Kafka consumer that reads the coordinates of the other party, the rider in this case, so both screens stay synchronized. That's how it's done; it's a pretty simple pattern. Essentially each app produces to one topic and consumes from the other.
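The pattern just described, for the driver side, can be sketched roughly as follows. This is a sketch, not the repository's code: `driverProducer` and `riderConsumer` are assumed to be pre-configured Kafka clients with string serialization, and their setup is elided.

```kotlin
// Driver-side WebSocket route: frames from the browser are produced to
// the "driver" topic; records consumed from the "rider" topic are
// forwarded back down the socket to keep both screens in sync.
routing {
    webSocket("/ws") {
        val forwarder = launch {
            while (isActive) {
                riderConsumer.poll(Duration.ofMillis(100)).forEach { record ->
                    outgoing.send(Frame.Text(record.value()))  // rider position -> browser
                }
            }
        }
        for (frame in incoming) {
            if (frame is Frame.Text) {
                // driver position from the browser -> Kafka
                driverProducer.send(ProducerRecord("driver", frame.readText()))
            }
        }
        forwarder.cancel()  // socket closed, stop forwarding
    }
}
```

The rider app is the mirror image: produce to "rider", consume from "driver".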
Then we read this data and display it on the screen. Always leave the code in a better state, right? Yeah, it's in a great state, so you can go there and play around with it; it should be in good shape. How can you get access to the topics dashboard? It's available for everyone in Confluent Cloud: my cluster is deployed in the cloud, my data is in the cloud, and the dashboard is also in the cloud. There are two topics. One topic is "driver": the driver produces data, and the rider reads it. In the same way we have a "rider" topic, where the rider produces data, and that data is consumed by the driver. That's how it goes, and it's fairly straightforward.

However, a couple of things are worth pointing out. In this project we didn't separate the two UIs into separate projects, even though we could, and it's probably a good idea to do so. We had to override some of the bits available in Ktor, which we appreciate, because Ktor gives us the flexibility to do it: for example, we can have a custom configuration file, say for the Kafka configuration, that includes things that aren't available out of the box. The next thing is that we used the kotlinx.html DSL to generate our front-end UI. This was also fairly nice. Previously I would have used something like Thymeleaf or some other server-side templating engine, but the Kotlin DSL is just simple code: we simply inject a variable, so that, say, the access token that comes from the application configuration is embedded into the response, which is pretty cool.
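Injecting a configuration value into a kotlinx.html response might look like the sketch below. The route path, page contents, and the "maps.token" property name are assumptions for illustration; `respondHtml` comes from Ktor's HTML builder support.

```kotlin
// Serve a page built with the kotlinx.html DSL: plain Kotlin code instead
// of a template file, with a config value embedded into the response.
get("/") {
    val token = application.environment.config.property("maps.token").getString()
    call.respondHtml {
        head { title { +"Driver" } }
        body {
            div { id = "map" }                        // map container for the JS side
            script { unsafe { +"initMap('$token');" } } // hand the token to the front end
        }
    }
}
```

Because the markup is ordinary Kotlin, typos in tags or attributes are compile errors rather than runtime template failures.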
As you can see here in our driver configuration, we have one module with the common functionality: it starts WebSockets, enables the Webjars assets, and so on. The same thing happens for the rider; the only thing that differs is the front-end logic, since the JavaScript for the driver and the JavaScript for the rider are different. Should we look at the data for the rider? Yeah, this is basically the flow of our application.

So, what questions do we have, Anton? We have some questions regarding Kafka integration in Ktor. I think it's not a problem if there is no integration out of the box: Ktor is a very minimalistic framework, so you can basically use any library directly from your code; that's kind of the point of a micro-framework. Ktor does provide a list of integrations out of the box, called features: you call the install method for a feature, and the feature adds some functionality to your application. That's what we did with our Kafka feature. Not that we actually needed to, but it was cool to create, and it shows how flexible the framework is and how easy this is to do. Viktor will probably show this feature a little later. You know, I can show it now... where is it, it's in our package... no, I want to show how we use it first, in the application. So why did we do this? There are a couple of things a developer needs to take care of when working with Kafka. One thing I don't like is when people hard-code their property files: for example, to configure Kafka you need to provide the settings via a java.util.Properties object,
and sometimes people just go and hard-code these values, which makes it hard to modify when you need to move to a different environment. So I wanted to show how you can create, it's not even a framework, just a good piece of functionality that lets you move between environments. Essentially, our Kafka configuration allows us to quickly disable all of these settings and switch to a local deployment, using Docker Compose or Testcontainers or the like, without changing the code. This was configuration-driven development. Another thing: I think creating the necessary setup for the application is also the responsibility of the Kafka developer. If you're developing a service, you need to make sure your topics are created with the right configuration. That's why I thought having this feature would be great: as you can see here, it looks like a small DSL for our topics. We have two topics, and in our Kafka feature we have the configuration and the list of topics I want to use.

And the thing is very straightforward: we take a block of the configuration file and pass it here, so we can configure the system. In this particular case we're using the Kafka Admin API to create the topics. The AdminClient is an API that comes with Kafka; we use the information provided through the configuration block and pass it into the createTopics functionality that Kafka provides. For the application developer, the person who will be using it, it looks very straightforward.
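The topic-creation step inside such a feature can be sketched with the AdminClient API that ships with the Kafka client library. The function name and call site below are illustrative; only the AdminClient calls are the real API.

```kotlin
import java.util.Properties
import org.apache.kafka.clients.admin.AdminClient
import org.apache.kafka.clients.admin.NewTopic

// Create the declared topics using the connection properties that were
// read from the feature's configuration block.
fun createTopics(props: Properties, topics: List<NewTopic>) {
    AdminClient.create(props).use { admin ->          // AdminClient is AutoCloseable
        admin.createTopics(topics).all().get()        // block until creation completes
    }
}

// e.g. createTopics(props, listOf(NewTopic("driver", 2, 3.toShort())))
//      name, partition count, replication factor
```

Creating topics with an explicit replication factor at startup, instead of relying on broker-side auto-creation, is what makes the replication-factor-of-three requirement from earlier enforceable in code.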
In similar ways we use this for other things, including the WebSocket configuration: we keep the configuration in one place, and inside a particular module we only consume it. I think it's a pretty powerful feature.

There is a good question about using the Kotlin DSL for HTML instead of a templating engine. What do you think? I think it's nicer when you can check your syntax with the compiler; I don't really see any other advantage. Well, there are a few. First of all, you may be more flexible in what you can do. In many organizations you need to go through an approval process to bring in a new technology, with a security assessment, because we know Java templating engines can be a huge security risk. Remember the Equifax hack a couple of years ago: it wasn't JSF, it was a very old version of Struts, and some of these templating engines are "smart" enough to bring in malicious code at runtime. With the Kotlin DSL you still effectively have a templating engine, but you don't need to bring in a whole framework family; this thing does one simple thing, and it just works. There is a question about caching and the optimizations available in other engines: since it's code, it's already compiled and already very fast; it doesn't need to do any template compilation, it just returns the resulting string. I think it's fine to use; it's a matter of taste. It gives you the ability to do things, but you don't necessarily need to use it just because it's there.
right um the quote uh sorry ktor has integration with uh at least like three different uh or four different templating that available you know out of the box if you do like a star.ktor.i you will be able to enable um timelapse you should be able to enable some other things let's take a look velocity free marker yeah exactly velocity free market all this all these nice things scratch oh sorry i typed this wrong and i can say yeah template here's the html dsl that we use uh there's a free marker velocity mustache and time leaf yep that's uh that's great that we have all these uh all these things available um you know maybe the struts uh struts for keytor like and it's uh what i like about this um some sometimes like i was i was a little bit frustrated because some of the functionality was not there but it was very easy to add like i very was very accustomed to things that spring does so that's why i brought some of the ideas how the spring integrates with kafka into this demo with gate or so yeah meanwhile meanwhile this uh this application uh still still running running strong and our driver right now inside the lincoln tunnel the reception inside the tunnel would be you know maybe not very good from gps but hey that's that's what we do um another thing that uh uh that we didn't explore but we wanted to explore is the testing capabilities um uh that's um that's something that we can uh do uh specifically by adding um like a test containers for testing kafka we already have at this docker compose thing so you can run this like locally uh there's a documentation that you can you can find they can run this locally only thing that you need to have is to um you know just run this in in docker um so what's the uh do we have any other um questions um requests i i think i think we could take a look at this uh websocket part uh that we were struggling a little bit uh while implementing this functionality so inside the web socket part there's a couple a couple things here so 
we do have this uh extension but the first thing was that websocket integration in ktor is not taking advantage or not making use of content negotiation feature so it only uh talks to you in text and you cannot automatically make it to convert text into json so you actually have to write code to convert it to json if your api requires that so so this time yeah this is block that anton is talking about exactly so we were using uh jackson in the beginning uh but jackson api is blocking so when it when it parses the text into the json uh the the api call is blocking and the id will tell you about that hey you actually have to defer it into a corrupting context uh but here we are using uh java cotton x serialization api uh which is a non-blocking one so therefore it's actually simpler and it doesn't produce this problem of blocking this uh threat uh for for handling uh the incoming message so and the other one was actually uh the problem with kafka consumer right yeah so the problem was here that um you know that we need to create the consumer only for um for current session so essentially when the application um when you hit this application um end point we need to create the consumer and after that uh we need to close it and dispose it after um the session will be closed so that's why we're using this like finally block um where we need to you know once we're done consuming this uh our websocket connection would be you know off and after that we need to uh destroy this consumer and um here it was kind of like a very interesting i don't know if it's race because this is also blocking code this code is asynchronous so that's why like um we decide to move this around so the first we will serve html and after that like somewhere in the background we will create this kafka consumer um so there's still um i think it's a it's not a problem uh of the framework or the problem of the developers it's just a matter of like accepting some of the like asynchronous nature in the 
more um natural way in the java applications essentially jvm applications because uh the kafka still has like kafka clients still has lots of beats that um they kind of sort of synchronize but they're doing this like a synchronous with like a old-school synchronous like using the features the completable features and things like that um those things can be integrated there's a integration uh for you know reactive thing in uh kafka reactor for example um to to to make this thing work so um yeah that's um that's what um that's what we have here now um let's see if there's a some good questions in the chat there's a good question regarding scaling the consumer uh give me just a quick second i will uh show this all right so the question is uh the previous answer one more in the way how do you scale the consumer uh guessing with 14. so the interesting thing that um the consumer in kafka is you don't need to do anything essentially to scale those things um the consumer scales by number of partitions in the kafka topic so consumer like you have a have a topic with three partitions we can have a like up to three consumers in one consumer group meaning that those consumers will split this low so it's not like you have a one consumer that will get all of the data and another consumer will get all the data the um in in this case in this case when we start another rider it will create a new um consumer group for for our application so um in um in okay so i'll try to explain this in the two words before our time uh will run out um the um in uh the kafka consumer has this ability to parallelize itself so it's it's a built-in feature you don't need to do anything about this yeah if you want to scale this you just start another consumer you assign another consumer group id um kafka say like driver so we do have this like consumer group id so if i will start another application with the same consumer group id for driver they will automatically join and they will split the load this 
is not what we wanted to have in this application because we want you to have only one endpoint uh will own one consumer and all topics uh because all this data needs to eventually land into um into a front-end so it will be like a straight um like a q in use case in other cases like if we do some processing i've seen there was a question about kafka streams integration at this point we haven't tried the integrate with kafka streams but i think it's a good good challenge for for some of the thing that we are brewing with anton which is i think it's a great to talk about this but uh in this particular case we're using consumer and producer as a straight um for the straight messaging capabilities reading all the messages from one consumer from all the topics and after that uh push these messages into uh into front-end so uh would you recommend kafka there's another question would you recommend using kafka for pops up for control local uh server cache in the micro service environment uh it is a good question i'm not quite understand what you mean by a local server cache in the microservices environment um i i think i made the case when i explained this in a few slides uh how i see kafka will fit into this like microservicer architecture and how it will allow to communicate things to um to uh between two services um definitely if you do kind of like uh any pops up type of thing try uh if filikafka will fit into uh into your used case um not quite sure before we disconnect i wanted to hey see see as i said everything was in the real time we have this like a real-time application our driver came to pick up our rider and it will take him to home so from uh from from new york um so the application actually worked and now uh when they met you know we can restart they start doing some some other things before we left before we part for today i wanted to show something for you and i want to invite that uh to something that we're brewing with anton so the next uh not the next 
week week after on the march 30th um i would love you to join me and anton in hands-on workshop where we will be you know you can join us um you can join with your laptop you can join us with your um mobile whatever yeah so whatever you want to use for uh writing your apps um please join us where we're going to be teaching you how we can start with some of the like basics of this you know we start with the plane project uh on ktor we're adding some kafka functionality we'll show you how you can invoke the kafka producer from reston point essentially we wanted to breaking down this uh this application for you so you will understand how we can use kafka and kator um and uh we'll show you how you can use consumers in your websocket we will talk a little bit about serialization and all this kind of stuff it is going to be a free event only thing that you need to do is just to register so we will know how many coupons you need to get to uh to connect to confirm cloud um apart from that i i want to say thank everyone who joined us today uh demo is published this video will be available um my contacts and the anton context you probably can find anywhere on the internet if you can find us in anywhere on the internet meaning that we doing some of the bad jobs as the developer advocates right so people need to find the ways oh would you have this uh the twitter twitter accounts in our uh in our in our screen exactly so thank you everybody for coming uh it was a nice session it was a very cool demo uh you know your homework is to go and download that repository and play with the code uh try running kafka locally maybe you can create an account in the cloud and try kafka in the cloud as well uh stay tuned for the announcements uh we are going to plan more webinars soon uh in in springtime and uh you know let us know what did you like about the webinar so far uh what kind of things you would like to hear about more uh you know you have to write in the comments you have to write 
in the chat uh if you can tweet about it it would be very nice uh subscribe to the channel once again i just reminded you uh by the way victor has a very good channel as well on youtube uh if you go to and find it you know uh by confluent uh keyword i think you will find it just easily there is a lot of interesting content and about streaming with kafka and you know distributed applications uh and uh thank you very much and it's it's a wrap yeah it's a wrap thank you as always have a nice day you
Info
Channel: Kotlin by JetBrains
Views: 3,578
Rating: undefined out of 5
Keywords: Kotlin, Ktor, Kafka, web applications, full stack, asynchronous programming, microservices, JetBrains, backend development, backend programming language, Kotlin web framework, server-side programming, server development, server-side programming languages, server-side applications, java to kotlin, event streaming, data pipelines, message queue, message broker, webinar
Id: 6qxkawU0qKA
Channel Id: undefined
Length: 52min 42sec (3162 seconds)
Published: Thu Mar 18 2021
Related Videos
Note
Please note that this website is currently a work in progress! Lots of interesting data and statistics to come.