Kafka Streams, Spring Boot, and Confluent Cloud | Livestreams 002

Captions
All right, I think we're live — at least I'm live, and if you're seeing this, you're probably live too. Welcome to this episode of Livestreams. Let's see who we have in the chat. We have David... and George cannot make it this time, but George, make sure to watch the recording — it's going to be interesting.

Welcome, everyone. Let me check how many people are live right now; I'll give you a few more seconds to join. As you may remember, the time of the show is conveniently chosen so many of you can participate — during breakfast, lunch, or maybe dinner. Like I said, it's a bring-your-own-drink type of show.

If you just joined, type something in the chat and write down where you're coming from. Like I mentioned last time, the time is chosen to accommodate as many people as possible, and I understand that for many of us this is a second-screen type of experience — you don't necessarily need to actively participate. But at least let me know that you were in the chat. It would also be great to learn the geography of my audience, so type where you're coming from and, most importantly, how you learned about this event.

I see 27 people watching now — nice, amazing. Let me quickly check the chat. Dom from Dublin, welcome. Viktor — same name as me, but this Viktor is from Hungary. Alexander from Ukraine, welcome to the show, and thanks for joining us. Alberto from Portugal —
nice. Armen from Armenia, welcome — that's great. Nikhil from India — the geography of Livestreams is pretty cool; people using Kafka and stream processing really are everywhere. Hi, Amit from India, welcome to Livestreams. And we have Borman from Ukraine — welcome; at least I know I'm not the only one. Thank you so much for joining me.

So what are we going to do today? Again, this is a show where we try to have fun and maybe learn something interesting — I'm definitely learning. I do have certain responsibilities to you, because I need to prepare and show you something exciting. As always, in the first part of the show we just chat and learn about the audience and where people are coming from. I'll leave the chat for a second and we'll get back to it. Thank you so much for joining us — and hit the like button; it helps me see whether there's interest in this type of show.

A friendly reminder: last time I learned that some of you did not register for Kafka Summit. Registration is still open, and there's no reason not to register, because it's free this year. Right now we're super busy preparing — looking into the talks and working with the speakers on the content — so it's going to be amazing. As promised, as a countdown to Kafka Summit I'll be highlighting some of the talks I'm personally excited about. The schedule is partially published, so you can already see many of the talks there. The first talk I want to highlight is one I'm
actually looking forward to myself: it's on a topic I've been interested in for a while — I've presented on it, as have some of my colleagues — using multiple data centers. Anna McDonald is my colleague here at Confluent, also known as my swimming friend and the lady with the high-tops; we wear matching Adidas all-star sneakers. In this presentation she'll be talking about deploying Kafka across multiple data centers, plus some aspects that matter if you're developing stream processing applications with Kafka Streams. I understand many of you are operations people who support this infrastructure, but the multi-data-center story is also important for developers. That's my first pick — come back next week and I'll highlight more of the presentations.

If you have a chance to look at the agenda website, let me know in the comments which talk you're excited about. If you didn't register, just go there — I'll put the Kafka Summit registration website in the comments right now.

I see many people joining — around 42 now. NG from the USA, Dean from India, welcome. Indra from Nepal — that's super cool. Javier from Mexico, Navin from India, Gaston Flores from Spain, welcome. And Vishnu from India. And yes — to my previous
point: Anna rules, and if you've seen her presentations before, you know she brings fantastic energy to her talks.

Next thing I want to remind you about: if you're interested in my collaborative activities around Kafka Summit, I highly recommend checking out the Kafka hackathon. You can register, and there will be prizes if you participate and deliver a cool project — so please, please register. The prizes aren't limited to monetary value like a $100 Amazon gift card; you can also get a free certification coupon and a coupon for Confluent Cloud.

How will it work? You choose a category to work in — and if the thing you want to build doesn't fit into one of the provided categories, you can suggest your own. You'll also find links to blog posts so you can get inspired by what people are doing in each category. Then, over some time, you work on a cool project and present it — and the hardest part starts for us: selecting the winners. So the goal here is to register; the links will be in the description. If you're watching the recording — I'm repeating this for George, who is missing us today — the links will be there.

Last announcement: if one stream of me is not enough for you — you know I'm here every Tuesday talking about stream processing, Confluent Cloud, Kafka Streams, all that — we're also trying
to do pair programming over a live stream, where my friend Sergey and I will be talking through — and maybe coding our way through — some interesting topics. We're doing this experiment tomorrow; you can find the details somewhere on my Twitter. Maybe in a couple of weeks we'll have streams every day and you'll be tired and stop watching — but hey, we've got to try. Today you have me; tomorrow you'll have both of us.

That's it for the welcome announcements — thank you so much, everyone. I think I can start. A few more people just joined: Srinivas from India, welcome; Mykhailo from Ukraine, welcome.

So what are we going to talk about? If you were not here last time, go and watch the recording — if you're watching this as a recording, there should be a link to the previous episode, where I talked about getting started with Spring Boot and Confluent Cloud. I explained the reason for this in the previous video, but just as a reminder: a couple of things. People were asking how to do these kinds of things in Confluent Cloud, but there's also SpringOne — the conference about Spring. If you're into that type of jazz and you love developing applications with Spring Boot, let me bring this on screen: I will be a speaker there, talking, as always, about Kafka, microservices, and Spring. I'll also be doing a workshop — we haven't announced it yet; it's going to be a virtual workshop where I'll be teaching Spring, Java, Kotlin, and all those kinds of
things. I'm pretty excited about this, and so as not to waste time and resources, I'll be preparing for that event through my live streams. Last time I showed how to create a simple Spring Boot application that publishes data to and reads data from Confluent Cloud. Today we're going to do something different: some stream processing. I'll show you how to integrate Kafka Streams into a Spring Boot application and what you can do with it.

As always, we start at start.spring.io. We're going to use Gradle and Java. For the group, io.confluent.developer — and if you don't know developer.confluent.io, that website needs to be your landing page for learning about stream processing; it's learning resource number one. The artifact is livestreams — because you're here at Livestreams — and we're going to use Java 11.
Okay, cool — now let's add dependencies. As always we start with Spring for Apache Kafka, which handles some of the configuration for us and provides an opinionated approach, and today we're also adding Kafka Streams. And, as always, I like to add Lombok, just in case, so I don't have to type anything extra. Let me know if I'm missing something; otherwise I'll proceed. So: Spring for Apache Kafka, Kafka Streams — click Generate. Oh, "live stream" had an extra space in the name — okay, fixed.

Now we switch to the place where all developers like to be: the console. Let's make it a little edgy from a visual-design perspective — I'm just checking how this looks on screen; it looks amazing. By the way, in case you didn't know: if you're in Chrome or Firefox, you can watch this in 4K. Just saying — maybe you like that type of jazz.

Let's go into the livestreams directory and open it in IntelliJ. While IntelliJ is loading, a quick reminder: all the demos from these streams end up in a GitHub repo called demo-scene, in a livestreams folder where the demos from the different episodes are conveniently placed. Today is the 28th, so there will be a folder for July 28th with all the links and all the code — or, if you want, you can just type along with me; you can definitely do that as well.

Here we are in IntelliJ. I want to enable annotation processing, and we're going to start with this — yeah, something like this. So we're going to
be using the Kafka Streams integration. The first thing we need to do, as always, is add @EnableKafkaStreams — this tells the Spring framework that we're going to use Kafka Streams here. Now let's create a class — call it Processor, because we're going to be processing something — with a method, public void process(). This is where we start.

The first thing you need to know when dealing with Kafka Streams: from the perspective of the API, you build a topology, and to build a topology you need access to the StreamsBuilder. The cool thing about the Kafka Streams integration in Spring Boot is that instead of wiring the configuration yourself, Spring Boot can inject the builder for you. So I take a final StreamsBuilder parameter, tell Spring to inject it with @Autowired, and make the class a @Component so it's a managed bean.

What we'll do in this example: read some data. I was thinking about a poem, but I'm not that cool, so instead I'll find some movie quotes on the internet and read them out loud. Then we'll use Kafka Streams and Confluent Cloud in Spring Boot to do a word count. So: read quotes from movies, count words, write them to a result topic. Once again, that's the plan.

Let me check the comments. Someone says that Spring Cloud Stream, as a higher-level abstraction, would be a more suitable starter — good question, and a bit of foreshadowing.
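The wiring just described — @EnableKafkaStreams plus a @Component whose process method receives the injected StreamsBuilder — might look roughly like this. This is a sketch: it assumes the spring-kafka and kafka-streams dependencies generated by start.spring.io are on the classpath, and the class names are mine.

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.EnableKafkaStreams;
import org.springframework.stereotype.Component;

@SpringBootApplication
@EnableKafkaStreams // tells Spring to bootstrap a KafkaStreams instance for us
public class LiveStreamsApplication {
    public static void main(String[] args) {
        SpringApplication.run(LiveStreamsApplication.class, args);
    }
}

@Component
class Processor {
    @Autowired
    void process(final StreamsBuilder builder) {
        // the topology is defined against this builder; Spring builds and
        // starts the KafkaStreams instance from it on application startup
    }
}
```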
We're going to see that in a future episode — I do want to showcase the high-level abstraction called Spring Cloud Stream, but today I want to build up from the very beginning, so you understand what's running down below, what's actually going on there. So be patient — I will get back to Spring Cloud Stream, maybe not next time, but we'll see how it goes. Once you learn this foundation and how the configuration works, you'll be able to make an informed choice for your event-driven microservices — and with Spring Cloud Stream you'll still be able to use things like Kafka Streams or plain producers and consumers. We'll talk about that as well.

Now let's start. We have a builder, and we can create a stream from a topic — let's say "quotes". What do we do with this topic? We get all the messages — they arrive as sentences — and we need to parse them in order to count words. It's super straightforward; probably many of you have done this already. This exercise isn't about doing something complex; it's about understanding how the underlying mechanics work.

So we're going to use flatMap. We get a stream of sentences — every message contains a sentence — and we need to explode that into a stream of words. For that we use flatMapValues, and as the value we're going to have a String, and it
will return — let's just say — a List. I'm doing this explicitly so you understand where things are coming from: a java.util.List, so we import that class. Now we implement the method: flatMapValues gets a String and must return something. We take the value, lowercase it, and split it. How many of you can tell me what regex I need to use here? I'm going to grab a glass of water while you tell me what regex splits these sentences into a list of words — there's always a regex expert in any audience. I'd bring my cat, but I don't have a cat; usually you just bring the cat, it walks on your keyboard, and that generates some regex for you.

We have people from all around the world — Spain, India... Borman, we have a winner! But we can't split on a plain space — that would be too easy; we want the opposite, a non-word pattern like "\W+", so punctuation is handled too. "Depends on how you put the messages on the topic" — exactly, but not exactly; we'll talk about that in a bit.

So we split, and we wrap the result: Arrays.asList on the split. What does the compiler say — "cannot resolve method"? Let's see what's going on. IntelliJ suggests "replace with lambda", and we don't need the explicit type declaration here. Should be all right. So now we have a stream of words.
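The parsing step on its own, in plain Java, looks like this (the "\W+" pattern — split on runs of non-word characters — is the one settled on above; the class name is mine):

```java
import java.util.Arrays;
import java.util.List;

// Lowercase a sentence and split it on runs of non-word characters ("\\W+"),
// which also swallows punctuation sitting between words.
class WordSplitter {
    static List<String> splitWords(String sentence) {
        return Arrays.asList(sentence.toLowerCase().split("\\W+"));
    }

    public static void main(String[] args) {
        System.out.println(splitWords("The world is NOT all sunshine and rainbows"));
        // [the, world, is, not, all, sunshine, and, rainbows]
    }
}
```

This is exactly the lambda body handed to flatMapValues in the topology.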
We need to group — group by value. To group, we choose a new key, and since we're counting words, we move the value into the key position. So: groupBy, which takes a KeyValueMapper from String to String, and we return the value as the new key. Then we can replace it with a lambda, and we don't need the explicit types.

Here the stream doesn't yet know what serialization we're dealing with, so when we create the stream I'm going to be explicit — I personally don't like relying on default serializers — and specify what types I want to consume from this topic. In this case my key and value are both strings, so: Consumed.with(stringSerde, stringSerde), where stringSerde is Serdes.String(); I pull it into a variable because I'll use it multiple times. Now we have typed things: based on these types, Kafka Streams can deserialize the data correctly. Let's do a little refactoring — just one occurrence, so replace it with the lambda. Looks good.

Once I've grouped, I can call count() on the grouped stream. The count() method returns a KTable, so I turn it back into a stream with toStream() and write it to the Kafka topic "counts". Perfect — this is our application, our processing logic. There are a couple of things missing, but we'll solve them as we go.
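Putting the steps above together, the topology might look roughly like this. This is a sketch against the Kafka Streams DSL: the topic names "quotes" and "counts" are the ones used on stream, but the Grouped.with and Produced.with serdes are me front-running fixes the stream only reaches later — without them the byte-array defaults apply.

```java
import java.util.Arrays;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;

class WordCountTopology {
    void process(final StreamsBuilder builder) {
        final Serde<String> stringSerde = Serdes.String();
        builder.stream("quotes", Consumed.with(stringSerde, stringSerde))        // typed source
               .flatMapValues(value ->
                       Arrays.asList(value.toLowerCase().split("\\W+")))         // sentence -> words
               .groupBy((key, value) -> value,
                       Grouped.with(stringSerde, stringSerde))                   // word becomes the key
               .count()                                                          // KTable<String, Long>
               .toStream()
               .to("counts", Produced.with(stringSerde, Serdes.Long()));         // counts are Longs
    }
}
```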
Along the way I'll show you where something blows up — and something definitely will blow up, because I already see a couple of things that might.

Next, we need to have Kafka somewhere. Before that, let's check the questions. "Link to the Confluent livestream reference app?" Yes — all the repository links will be below after the stream is over, but for the impatient: go star the demo-scene repository; it's in the livestreams folder. For some reason my plugin broke and doesn't show anything, but you get the idea — you can find it in the Confluent demo-scene repo.

Now let's get Kafka. You already know the answer: we get Kafka in Confluent Cloud. Let me load one of my accounts and create a cluster — as always, call it livestreams cluster. I want to show you a quick hack I use personally: when creating the cluster I need to select a region, and how do I know which one? There's a pretty cool site called gcping — and I'm pretty sure something similar exists for the other cloud providers; I use GCP simply because people told me it's cheaper — that shows which region gives me the lowest latency. In my case a US region is closest, so I select that. I don't need a fancy cluster, even though I could have one — just continue and create.

What do we need next? As always, we go to Tools and client configs; under clients we pick Spring Boot and copy the properties
snippet into our application.properties. That's what we're including — we're not going to use fancy Avro stuff today, even though we could; that's a conversation for a future episode. Now I need to generate a Kafka cluster key: I'll call it livestreams key, copy it, insert it into the properties, and click save. I could also set up Schema Registry for the sake of it — but you know what, I don't need it, and I don't need Avro, so I'll remove that part. This is what we do, right? We remove the things we don't need and keep the bare minimum to work with.

So we have a connection to Kafka. It's a livestreams application — let's run it and see if it works. This is how we do it: live, raw, with multiple mistakes and all.

A couple of things to notice here. A Kafka Streams application requires a parameter called application.id. The application id defines the consumer group, so when you run multiple instances of this application they can scale out. This is something we missed, and the Spring framework validates our config before creating anything. So we go into application.properties, set the Kafka Streams application id — let's call it my-word-count — and restart the application.

And now: an error and a shutdown. How many of you noticed this error? What might be the problem? Let me see if I can read it without the stack trace: "received error code 1, and all streams have died." Wow — some terrible error.
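For reference, after pasting the Confluent Cloud snippet and adding the application id, application.properties looks roughly like this — the bootstrap server hostname and the CLUSTER_API_KEY / CLUSTER_API_SECRET values are placeholders for your own cluster and key pair:

```properties
# Pasted from the Confluent Cloud "Clients > Spring Boot" config page
spring.kafka.bootstrap-servers=pkc-XXXXX.us-east1.gcp.confluent.cloud:9092
spring.kafka.properties.security.protocol=SASL_SSL
spring.kafka.properties.sasl.mechanism=PLAIN
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='CLUSTER_API_KEY' password='CLUSTER_API_SECRET';

# Required by Kafka Streams; it also becomes the consumer group id,
# so multiple instances of the app can share the work
spring.kafka.streams.application-id=my-word-count
```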
Let's see if we can find the actual error. Sometimes it's confusing — you think it should just connect and work. Let's see if there's any useful information from the application... "Missing source topic quotes during assignment." This is our famous error from last time — if you were here, you probably recognize it; if not, that's okay. Hopefully many of you already know that automatic topic creation is not a good thing — you need to be explicit about topic creation. So in our application we add two more beans, as always: NewTopic beans, named that simply because the methods return a new NewTopic. The constructor takes the number of partitions — six, because it's the default — and a short 3 for the replication factor. We make one bean for "quotes" and another for "counts".

Let me quickly check the questions. A somewhat off-topic one: "How do I read from Kafka Streams and deserialize if the schema is stored in Schema Registry?" Actually, that's not precisely off topic — I'll talk about Schema Registry if I have a bit more time today; if not, I'll definitely bring it to the next episode. Thanks for the question, Borman.

So yes, I do have a connection to the Kafka cluster; the problem — to my previous point — was that the topic was not created. Let me create the topic and run the application again; let's see if that fixes it.
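The two explicit topic beans described above might look like this — a sketch assuming spring-kafka on the classpath; Spring's KafkaAdmin picks up NewTopic beans and creates the topics on startup:

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Declare the source and sink topics explicitly instead of relying
// on broker-side auto-creation.
@Configuration
class TopicConfig {
    @Bean
    NewTopic quotes() {
        return new NewTopic("quotes", 6, (short) 3); // 6 partitions, replication factor 3
    }

    @Bean
    NewTopic counts() {
        return new NewTopic("counts", 6, (short) 3);
    }
}
```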
Alexander is asking why the streams app freezes on an exception — is that by design, and how should errors be handled? "Imagine you're doing integer parsing inside the flatMap." Perfect question. There are a few ways error handling can be done, and I think we should talk about them properly. The short answer: there's a very good blog post that was recently published — I highly recommend it — by Tim van Baarsen on the Confluent blog, where he talks about handling exactly these exception situations. Let's make a deal — let's make it fun: if this video gets 100 likes, I'll break error handling down as the topic for next week. How about that? Meanwhile, you can read his post, "Can Your Kafka Consumers Handle a Poison Pill?"

"Why is it so hard to install a Kafka connector without using Confluent Hub? Also, why was the MongoDB connector removed from the Confluent free package?" Good question — honest answer: I don't know, but I'll investigate for the next stream. We'll drill down into the connectors world, I promise. Thanks for the question.

"How can we insert data into a KTable?" We'll see how in a few seconds. All right, I'll switch back to my IDE
and we'll continue to break things down. I created the topics, and they should appear in my application console now. So what's happening next? The next error: "could not create topics: my-word-count-...". What kind of topic is this, you might ask? In my application, the place where I aggregate needs a store — that's what collects the counts. This data has to live somewhere: first in a local state store, but it also needs to be replicated through Kafka, and that internal topic is created by Kafka Streams itself. Here it cannot create the topic, and I know exactly why: the replication factor Kafka Streams uses by default is 1, and when it tries to create the topic in Confluent Cloud, where the required replication factor is 3, it fails. So in application.properties we set the Kafka Streams replication factor to 3.

Let me show you the screen real quick — here in Confluent Cloud, under Topics, you can see the two topics I created. Now we rerun. A good question from the chat: "Internal topics for state stores — do I need to create those explicitly?" No — they are created by the framework, and creating them explicitly yourself is not a good thing: the framework handles this and knows what topics it will be using; you don't do it yourself. The only thing I modified is the replication factor. And now you can see — boom — the internal topics appear: the first one is for the aggregation store, and the second one is for the grouping repartition.
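The one-line fix just mentioned, in application.properties:

```properties
# Internal (repartition/changelog) topics are created by Kafka Streams itself;
# Confluent Cloud requires replication factor 3, while the Streams default is 1
spring.kafka.streams.replication-factor=3
```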
That's because the count operation actually does two things — it hides a lot. If you implemented it yourself instead of going through the DSL, you'd see it's really a two-step operation: one step is the repartitioning, and the other is the actual store where the data lives. Okay — the application is up and running. Cool.

Now let's produce some messages. I could do this from the Confluent Cloud console, but the console produces in JSON format and I want plain text — so let's switch gears and use the ccloud CLI instead. Let's see what clusters I have: ccloud kafka cluster list. I'll use the livestreams cluster — there's my cluster id — and it looks like the cluster requires an API key, so I create one for this cluster through the CLI and store it. Let me check under API access which of the livestreams keys is the right one — this is the key I'll use to access my cluster locally. Cool — now I should be able to produce.

So what shall we produce? Let me tell you something you already know: "the world... the world is not all sunshine and rainbows." Okay, let's see if our application works. Oh snap — there are multiple errors. Let's see what's happening here.
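Before looking at the error: as an aside to the count discussion above, here is a plain-Java picture of what groupBy plus count computes for a sentence like the one just produced. This is only an analogy — Kafka Streams maintains the result as a continuously updated KTable across all messages, not a one-shot map — and the class name is mine.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// One-shot word count over a single sentence: group words by themselves,
// then count each group — the same shape as groupBy((k, v) -> v).count().
class CountIllustration {
    static Map<String, Long> wordCounts(String sentence) {
        return Arrays.stream(sentence.toLowerCase().split("\\W+"))
                     .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(wordCounts("the world is not all sunshine and rainbows"));
    }
}
```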
While producing to the sink topic: the ByteArraySerializer is not compatible with the key and value type String. This is very good — this is a very interesting one. Let's take a look at our application. We're reading the topic in string format, so we're using the string serde for both key and value, and downstream we have the stream produced by flatMap: key string, value string. Now, what's happening is that somewhere along the way we're losing the serde. Here's the logic: when we do a flatMapValues, the key stays the same, but since we're changing the value, there's no way for Kafka Streams to know which serialization format to choose for it. The flatMap itself doesn't go anywhere — there's nothing to serialize or deserialize at that point, it all stays in the JVM — but my group operation creates another, repartitioned stream, and that's where we lose the value serde from flatMapValues and need to provide it again. The way to do this is the method called Grouped.with: the key is still a string, so we pass a string serde and a string serde, explicitly specifying the serialization. You can also define the default serialization format: there's a config for the default serde in the Kafka Streams properties that lets me specify a default serializer. So in my case, my application over here lost the information about the value serde after flatMap.
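A sketch of the fix reconstructed from context — this is not the author's exact code; the topic names, class name, and use of `Materialized` are assumptions, and it needs the kafka-streams dependency on the classpath:

```java
import java.util.Arrays;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Printed;

public class WordCountTopologySketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("words-in", Consumed.with(Serdes.String(), Serdes.String()))
               // flatMapValues changes the value type, so downstream operators
               // no longer know which value serde to use...
               .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
               // ...which is why the repartition triggered by groupBy gets the
               // serdes spelled out again via Grouped.with:
               .groupBy((key, word) -> word,
                        Grouped.with(Serdes.String(), Serdes.String()))
               // count() is the "two things": a repartition topic plus a
               // changelog-backed state store.
               .count(Materialized.as("counts-store"))
               .toStream()
               .print(Printed.<String, Long>toSysOut().withLabel("running-count"));
        // Building and describing the topology needs no broker; the output
        // lists both internal topics (…-repartition and …-changelog).
        System.out.println(builder.build().describe());
    }
}
```

Alternatively, an application-wide default can be set via the `default.key.serde` / `default.value.serde` entries in the Streams configuration instead of passing `Grouped.with` at each repartition point.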
So, with the value serde lost after flatMap — snap! — we need to provide it again; otherwise it just falls back to the default serde, which out of the box is the ByteArraySerializer. Now let's also do some printing so we can see the counts and verify the application is actually working: counts, toStream, print, Printed.toSysOut, with the label "running count". Wait, what's going on — what arguments does print require? It expected one argument... OK, like this it should work. And after that we write the results back into the output topic. Let's run it and see if it works.

While it's building, a question from the chat: which value format is best for topics — Avro, JSON, or something else? Very good question, and the answer is: it depends. I prefer either Avro or Protobuf, simply because of the tooling available — I can generate my POJOs, my Java classes, from the schema — plus Schema Registry supports them, so there's integration with Kafka and all the cool things. But it depends; we can talk about this in the future when I bring Schema Registry into the mix.

Now my application works, so I can read stuff, and my print statement is working. Let me see if I can see anything in the topic view — the counts, my messages. Yes, my application works, there's data at offset zero; there are some values you just can't quite see in the UI. All right, let's get back here and see if this thing works. This is my output, where we get the counts of the messages, and this is where we'll be typing. Let me tell you something you already know: the
world ain't all sunshine and rainbows. It's a very mean and nasty place, and I don't care how tough you are, it will beat you to your knees and keep you there permanently if you let it. OK, let's send it — and in the comments below, write down if you know where this quote is actually coming from. There's a delay before the messages appear here: first of all, I didn't configure anything about the caching, and Kafka Streams by default uses caching before sending messages downstream, so you'll see some delay here. And there it is — our counts are coming through: we have "me" two times, "something" two times — looks like it works! Let's continue: you, me, or nobody is gonna hit as hard as life. But it ain't about how hard you hit; it's about how hard you can get hit and keep moving forward. How much you can take and keep moving forward. Let's send it. See, I don't even need to type anything — maybe in the future I'll just tell this application to write my code for me. So the application works, everything is working, and it's super cool.

Now, I think that's it for today's session. Let me open a new AsciiDoc scratch file — what did we learn today? How to use Kafka Streams with Spring. To use it, you just need one annotation, and Spring will be able to inject a StreamsBuilder into my application class. The StreamsBuilder is created and managed by Spring, which means all the application properties can be handled in a separate configuration file. So next time, if we're switching to a different environment, we simply switch the configuration.
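The one-annotation setup he summarizes could be sketched like this — class and topic names are made up; `@EnableKafkaStreams` and the Spring-managed `StreamsBuilder` injection come from Spring for Apache Kafka:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.EnableKafkaStreams;

@SpringBootApplication
@EnableKafkaStreams  // Spring creates and manages the StreamsBuilder for us
public class LivestreamsApplication {

    public static void main(String[] args) {
        SpringApplication.run(LivestreamsApplication.class, args);
    }

    // The builder is injected; we only describe the topology here. Broker
    // addresses, serdes, replication factor, etc. all live in
    // application.properties, not in the code.
    @Bean
    public KStream<String, String> topology(StreamsBuilder builder) {
        return builder.stream("words-in");
    }
}
```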
We don't need to put anything in the code. So that's the first thing: the @EnableKafkaStreams annotation. The second thing: it works great, but the Kafka Streams defaults don't work with the cloud because of the replication factor — and always create your input and output topics yourself (the internal ones the framework handles). And let's do this: if you want to know how to develop a similar application not in the cloud but locally, I can show you how — if this video hits 200 likes, I'll do it. And if it hits 100 likes, I'll explain the thing I already promised to explain; I'll re-listen to this video and check what I promised — if someone remembers, just type it in the comments.

The application is still working, so let's check: and that's how winning is done! Now, if you know what you're worth, go out and get what you're worth — but you gotta be willing to take the hits, and not pointing fingers saying you ain't where you wanna be because of him, or her, or anybody. Whoa — slow down, too much too fast. Still waiting for some ideas about where this quote is coming from — which movie am I quoting? Write it down in the comments if you know. And see, my data is still coming in — the word "you" has already appeared 20 times. The cool thing about caching — speaking of caching — is that the compaction of this KTable, the one coming out of count, happens on the application side, so I don't have to wait for it to be compacted on the Kafka side. But if you want the results immediately, you need to change the caching configuration.
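The change he's about to make can be sketched as one Spring Boot property — `cache.max.bytes.buffering` is the Kafka Streams config of that era, and the `spring.kafka.streams.properties.*` prefix is how Spring Boot passes arbitrary Streams configs through:

```properties
# Disable record caching so KTable updates are forwarded downstream
# immediately instead of being buffered and deduplicated first.
spring.kafka.streams.properties.cache.max.bytes.buffering=0
```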
Where is it — Kafka Streams caching. In the memory management documentation you can find the explanation of how caching works in Kafka Streams. Essentially, if you want to disable it and get the answers immediately, pushed straight downstream, you go here and change cache.max.bytes.buffering to zero. With that, messages land in the Kafka topic immediately and are stored there. I just wanted to keep the plain defaults for this streaming demo.

So, what were we talking about? Yes — Deepak is right, he said it's Rocky. But which one? That's the next question: which Rocky movie is this quote from? And thank you very much for the congratulations on my three-year anniversary at Confluent. It's exciting, and I'm looking forward to at least three or maybe four more years — depending on how this livestream thing works out. So folks, I need your support, I need your likes — show me your support. And again, this is how we pick topics: I've already collected some intel from this chat, and I promise I'll get back to it in one of the future episodes and we'll talk about how certain things can be done. Next time — obviously there are a lot of questions about serialization and deserialization and which format to choose — I'll break that down first. If you're interested, tune in next week on Tuesday: I'll be breaking down how this can be configured in Spring and Kafka Streams, and probably the week after we'll be talking about Spring Cloud Stream, because with everything I've been showing you we already have a pretty good foundation.
If you want to know when the next video will be up, it's on our YouTube channel — just go click subscribe and you'll be notified. For today, I think I'm done. Thank you so much for being part of Livestreams — I'm super excited to have seen so many of you here. You can follow me on Twitter; this is my handle, this is where you can find me. My name is Viktor Gamov, I work as a developer advocate at Confluent, and I'm here to answer your questions and teach you stream processing in a fun way.

Let me check one last time whether we have any questions. Subramanya asks: instead of removing duplicates, can we update the KTable using a specific key? I think we can cover the topic of deduplication in some detail later — I'll look for a better way, or a better example, to explain it. Custom serdes, say from JSON — I guess that's one of the things we'll break down next time, or maybe JSON Schema; that also sounds exciting. RocksDB configuration — maybe in five or six weeks; we need to build up to the point where we've covered enough basics to drill down into the internals. All right, Farah, I hope you'll find some useful bits — well, you'll be watching this multiple times, hopefully — and we'll see you next time. My name is Viktor Gamov, and as always, have a nice day! And now, as always, if you have a picture for the artwork... wait, am I still live? I'm still live. Oh snap. Have a nice day!
Info
Channel: Confluent
Views: 8,997
Keywords: apache kafka, kafka, open source, confluent, streaming, platform, event streaming, messaging, spring boot, confluent cloud, microservices, livestreams, spring, kafka summit, developers, streams, big data, real time, technology, pulsar, apache, event driven, kstreams, applications, cloud, cli, tool, serdes, springone, spring initializr, caching, pipelines, oss, spring framework, database, hadoop, kafka 101, tech, tutorials, kafkathon, kafka streams, gcp, kafka consumers, poison pill, data streaming, data
Id: 3YFlT_yIDxk
Length: 71min 10sec (4270 seconds)
Published: Tue Jul 28 2020