Apache Kafka Tutorial | Kafka Crash Course | Spring Boot

Captions
hey what's up everyone and welcome to Daily Code Buffer. In this video we are going to learn about Kafka completely: what Kafka is, why we need it, what its architecture looks like, and how to use it in your applications. We will install Kafka as well and build an application on top of it. So without any further ado, let's get started.

First we need to understand what Kafka is. Apache Kafka is an open-source communication system between a sender and a receiver: whenever you want to move data from one application to another, you can use Kafka. So consider we have one sender and one receiver; to send the data between them we can use Apache Kafka, and it can be installed anywhere. It works on a publish/subscribe model: the sender publishes the data to Kafka, and the receiver subscribes to Kafka and listens to it, so whenever there is new data, the receiver gets it through that subscription. And as you might have guessed, there can be multiple receivers as well: receiver two, three, four, and five can all subscribe to the same Kafka topic, and Kafka will deliver whatever messages the sender published to each of them. That is the simple model of how Kafka works; we will deep dive into everything in detail.

That was the basics. Now, when should you use Kafka, and what is the actual advantage of it? Let's understand this with an example. Consider a cab booking application: you are a user and you want to book a cab. Once you try to book, the request goes to the server, and based on your location and the cabs' locations, one driver is assigned. Once the driver is assigned you get a notification that the driver will come to pick you up in 5 or 10 minutes, whatever the ETA is. Now, in most booking applications you will also see a constant update of the driver's location. How does that location update get to the user? What happens is that once the driver gets the notification and starts the ride to pick up the user, the driver's app sends a location update at some time interval, say every 1 second.
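Before going further, here is a minimal, self-contained sketch of that publish/subscribe model using the plain Kafka client API. This is not code from the video; the topic name "demo-events", the group ID, and the localhost:9092 broker address are assumptions for illustration.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class PubSubSketch {
    public static void main(String[] args) {
        // Sender: publishes one message to a topic on the broker.
        Properties prodProps = new Properties();
        prodProps.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        prodProps.put("key.serializer", StringSerializer.class.getName());
        prodProps.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(prodProps)) {
            producer.send(new ProducerRecord<>("demo-events", "hello from the sender"));
        }

        // Receiver: subscribes to the same topic and polls for messages.
        Properties consProps = new Properties();
        consProps.put("bootstrap.servers", "localhost:9092");
        consProps.put("group.id", "demo-group"); // every consumer belongs to a group
        consProps.put("key.deserializer", StringDeserializer.class.getName());
        consProps.put("value.deserializer", StringDeserializer.class.getName());
        consProps.put("auto.offset.reset", "earliest"); // read from the start of the topic
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consProps)) {
            consumer.subscribe(List.of("demo-events"));
            consumer.poll(Duration.ofSeconds(5))
                    .forEach(r -> System.out.println("received: " + r.value()));
        }
    }
}
```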
So every second there is an update of the driver's location as the driver moves towards the user. Where does that update go? Consider that it goes to a database: the database stores all the details about the driver's current location, and the user's app picks that information from the database at every time interval. That is the obvious model we can think of. But this is just one ride; there might be hundreds of thousands of rides booked, and if for every user and every driver we do the same operation again and again, updating the database every second and reading from it every second, it is not going to scale. A database simply does not cope with that kind of operation volume.

So instead of the database, we can put one intermediate application in between: a messaging system, and the messaging system we are using here is Apache Kafka. Kafka works on a publisher/subscriber model, publishing messages and subscribing to them. So as the driver heads towards the user, the driver's app publishes the location data at every interval into one of the topics inside the Kafka cluster (we will get to what exactly a topic is shortly; for now, consider it one of the sections where the data gets stored). That means we are not touching the database at all, so we get very high throughput even though we need to store the data every second: the driver keeps storing its location in Apache Kafka, publishing message after message. At the same time, the user wants to know where the driver is and when it will arrive, so the user's application subscribes to Apache Kafka, to that same topic, and whatever updated location the driver published to Kafka is received by the user's application, which shows the updated driver location. The same thing happens for hundreds of thousands of users and drivers at once, and Apache Kafka is able to manage that high throughput because it is a distributed system: the data is not centralized, it is distributed across different servers, different replicas, and different clusters, so whatever the load, Kafka can handle it and serve the traffic very easily.

So cab booking is one application where you can leverage Kafka; others would be your food delivery apps, your flight trackers, and so on, plus any data streaming happening in your data science or data pipelines. There are many, many applications where we can use Apache Kafka.
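As a rough sketch of that driver-side flow, assuming a topic named "cab-location" and a made-up driver ID; the once-per-second loop mirrors the example, but the random lat/long values are purely illustrative:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DriverLocationPublisher {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            while (true) {
                // Illustrative location; a real app would read this from GPS.
                String location = Math.random() + "," + Math.random();
                // Keying by driver id keeps one driver's updates ordered in one partition.
                producer.send(new ProducerRecord<>("cab-location", "driver-42", location));
                Thread.sleep(1000); // publish once per second, as in the example
            }
        }
    }
}
```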
And if you look at organizations today, Apache Kafka is being used in one way or another, be it an e-commerce application, a travel application, life insurance, banking; wherever you go, Apache Kafka is in use. So that is a general idea of where Apache Kafka can be used and what the purpose of using it is, and we also saw what Apache Kafka is. We will look at its architecture as well, and then move on to installing it and building an application on top, but before the architecture, let's understand the advantages: why we should use Kafka.

In the previous example we were able to handle hundreds of thousands of users and drivers, so the first advantage is high throughput: Kafka can handle lots of transactions and operations, hundreds of millions of them, distributed across its cluster. There can be multiple brokers, divided into different topics and so on, and all of it can be handled because the system is completely distributed, so we do not have to worry about storing all that data centrally.

Next is fault tolerance. Why is Kafka fault tolerant? Because it manages replicas for us. Since it has different brokers across a distributed system, it also keeps different replicas of the data: one replica is the leader and the rest are its followers. All the data is managed through the leader, and if anything goes wrong with the leader, one of the followers becomes the new leader and serves the traffic and the data. So there are multiple replicas, any of which can serve traffic once it becomes leader, and the fault tolerance is managed for us; we do not have to worry about losing our records.

It is also a scalable system. With that high throughput, the distributed design, multiple brokers, and lots of replicas, the system is highly scalable. Suppose millions of records are already being processed and, taking the e-commerce example, a sale is coming up: if extra brokers, resources, or servers are needed for that sale, they can be added to the cluster and the load gets spread across the replicas accordingly, so we do not have to worry about how the scaling happens or whether all those records can be handled. Likewise, for a travel booking application during vacation season, more servers can be added to handle the larger fleet and the entire cluster adjusts accordingly. So Kafka is highly scalable as well, and that is why we should use Apache Kafka in our tech stack; there are a lot of advantages to using it in our applications. With this, we understood what Apache Kafka is, why we should use it, and what its applications and advantages are.
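For example, the partition and replica counts are fixed when a topic is created. Here is a minimal sketch with Kafka's AdminClient; the topic name and the counts are invented, and a replication factor of 3 assumes a cluster with at least three brokers:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions for parallelism, replication factor 3 for fault tolerance:
            // each partition gets one leader replica and two follower replicas.
            NewTopic topic = new NewTopic("ride-events", 3, (short) 3);
            admin.createTopics(List.of(topic)).all().get(); // block until created
        }
    }
}
```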
Now let's understand the architecture. As we saw earlier, we use Apache Kafka when we have sender applications and receiver applications with Kafka in between; let's see what that middle part consists of. So the first thing is that we have a sender and we have a receiver. As the architecture suggests, the Kafka side consists of two components: the Kafka cluster itself and ZooKeeper. Let me draw it down: one layer is ZooKeeper and the next is the Kafka cluster, and both of these are part of our Kafka ecosystem.

But what is ZooKeeper? Take the name literally: if you go to a zoo, there are lots of animals, and there is a zookeeper who manages all of them, keeping track of where the animals are, their feeding life cycle, when they get food, when they rest, and all those details. It is the same thing for our Apache Kafka cluster: ZooKeeper takes care of the bookkeeping, like how many brokers are available, how many senders and receivers there are, how many replicas there are, and the logic around those replicas. The entire Kafka cluster is managed by ZooKeeper. Kafka's job is just to receive the data and deliver the data; the metadata around it, the senders, the receivers, and the complete replica logic, is delegated to ZooKeeper, which handles all of that for us. So those are the two components in our architecture: the sender sends data, the receiver receives data, and this ecosystem sits in between.

Now, within the Apache Kafka cluster we have different brokers; suppose we have broker one and broker two, and there can be more. Within a cluster there are different brokers, and within the brokers there are different topics: topic one, topic two, and so on, and your data is always stored within topics. For each kind of data you create a topic, and the data is stored within that topic itself. In the cab booking example we stored the data in one of the topics: suppose you created a topic for the updated location; the sender sends the data to that topic, and the receiver listens to that same topic to fetch the latest data.

Within a topic there are multiple partitions, say partition P1 and P2, and the data is stored in those particular partitions. And within the partitions we have offsets. Think of an array and how you store data in it: there are indexes 1, 2, 3, 4, 5; you store data at each position in turn, and when you read, the data is fetched position by position in that same order. That is exactly what offsets are within the topic and its partitions. So suppose the sender has sent five records: 1, 2, 3, 4, 5 have all been placed at consecutive offsets. Now when the receiver starts receiving, there are options: should it receive the data from the earliest offset, that is, from the start of the topic itself, or from now on, saying "I don't want the previous data, I want the fresh data only"? These are the options, and the receiver reads accordingly, based on the offset setting it defines.
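Here is a small sketch of that receiver choice using the consumer's auto.offset.reset setting, which only kicks in when the group has no committed offsets yet; the topic name, group ID, and broker address are assumptions:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetResetDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "fresh-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // "earliest": start from the beginning of the partition (all old records).
        // "latest" (the default): start from now, only records published after we join.
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("cab-location"));
            consumer.poll(Duration.ofSeconds(5))
                    .forEach(r -> System.out.printf("offset %d: %s%n", r.offset(), r.value()));
        }
    }
}
```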
So here is what we learned: we have the sender, the receivers, ZooKeeper, and the Kafka cluster; within the cluster we have brokers, within the brokers we have topics, within the topics we have partitions, and there is the offset concept that defines from which offset a receiver reads. That is the architecture of Kafka and how it works internally. And as we saw, our Kafka ecosystem has two components, the Kafka cluster and ZooKeeper, so when we install Kafka on our system we have to run those two components; we will see how.

Now, enough of the theory; let's go ahead and see how to install Kafka, where to get it, what the different commands are, how to create topics and all those things, and then we'll create the application as well. So let's jump directly into it. To install Apache Kafka, go to your browser and search for "kafka install", and you will find the documentation: the Apache Kafka quickstart. On the Apache Kafka website, under Getting Started you can go to Quickstart and you will get all the steps; there is also a download link to download Kafka. So let's download the latest version available; once the download completes you can see it in the Downloads directory.

The first step in the quickstart is to extract the archive, so copy that command, go to the terminal, cd into the Downloads directory where the tar file is, and run the tar -xzf command against the downloaded Kafka archive. Once it is unzipped, cd into the kafka folder. Also note the requirements listed in the quickstart: you need Java 8+ installed, and we have that already, so we can get started.

Next, the quickstart shows the option to run Kafka with ZooKeeper: the first command starts ZooKeeper and the next command starts Kafka. One thing to show you first: in this directory, under bin, all the .sh script files live. So if
you are working with Mac or Linux, this is where you directly run the .sh files, but if you are working with Windows, you need to go into the windows directory and run the .bat files from there, because Windows works with .bat files while Linux and Unix work with .sh files. So for any .sh file I use directly here, say kafka-server-start, a similar kafka-server-start will be available in the windows folder as well; run things accordingly.

Now let's run the first command, the one that starts ZooKeeper; copy it, go to the terminal, and run it:

bin/zookeeper-server-start.sh config/zookeeper.properties

What is happening here is that within the bin directory there is a zookeeper-server-start script, and after it we pass the configuration for ZooKeeper, taken from the config directory, that is, config/zookeeper.properties; based on those properties it starts the ZooKeeper server. Run the command and wait, and you can see in the logs that ZooKeeper has started.

Now open a new terminal tab, go to the Downloads directory and into the kafka directory, and start the Apache Kafka broker. Back in the browser, the command below the ZooKeeper one starts the broker; copy it and paste it in the terminal:

bin/kafka-server-start.sh config/server.properties

This says to run kafka-server-start from bin, and the properties to use are config/server.properties; simple. Hit enter and the Apache Kafka broker starts as well, and somewhere in the logs it will say that it started on port 9092. So your ZooKeeper is started, and your Kafka is started and working on port 9092.

Now we need to understand how to create topics and send messages to those topics, so let me open a new terminal. In step three of the quickstart there is an option to create a topic to store your events; remember, for each and every kind of data to be stored, you need to create a topic. The command uses the kafka-topics script from the bin directory: --create says we are creating a topic, --topic is followed by the topic name, and then where to create it, the Kafka server, which is our localhost:9092, passed as --bootstrap-server:

bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092

So copy the command, go to the terminal, cd into Downloads/kafka, run it, and you can see "Created topic quickstart-events". The topic is created, and in the broker logs
you can see the partition quickstart-events-0 being created on broker 0. So our topic is created; now, how do we look at those topics? There is a command available for describing topics as well: the same bin/kafka-topics.sh you used for creating, but with --describe rather than --create, then what to describe, the --topic with the name quickstart-events, and where, the --bootstrap-server localhost:9092:

bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092

Run this command and it gives you the topic information: the topic name quickstart-events, the topic ID, the partition count, how many replicas there are, the configuration, who the leader is, and everything else. All the information for a particular topic with this one command. Cool, right? So now we know how to start ZooKeeper, start Kafka, create a topic, and describe that topic; let's see how to publish messages to these topics.

If you scroll down, step four is "write some events into the topic", so we'll follow that now. To produce, that is, publish, the messages, we use the producer: the kafka-console-producer.sh file from the bin directory is what will produce the events for us. We need to select the topic, quickstart-events, the one we created, and then where, the --bootstrap-server localhost:9092:

bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092

Once you run this, it opens a sub-console with a prompt where you can create the messages. So I can type my first message, "subscribe to daily code buffer". Now it asks for the second message, so I'll add "enable notifications", and then "click on the bell icon". These are the different messages I am publishing; they are all created now.

This is the producer; we also need a consumer, a receiver that will receive all these messages, so let's go ahead and create the consumer in a new terminal. In the quickstart you can see how to read the events: similar to what we saw for producing, but here it is kafka-console-consumer.sh. From where do we read? From the topic quickstart-events, and from which server, the bootstrap server. And then there is this --from-beginning flag, which is what I told you about offsets: from which offset do you want to read the messages? From the start of the offsets, from where the topic was created and the data started being published, or do you want to start receiving only the new messages? That is the thing we need to decide while reading the data, while
connecting, creating the consumer, and subscribing to the topic. As of now I want all the data that has been published from the start, so I would use --from-beginning; but first let me remove one thing and show you what happens without it. So I create a consumer here, with the --from-beginning flag removed:

bin/kafka-console-consumer.sh --topic quickstart-events --bootstrap-server localhost:9092

Hit enter and you can see it is connected; in the logs you can see the consumer client ID and the console-consumer group ID, because for every consumer there must be a group ID. So the group ID is there and it has started listening. Now if I produce more data, say "like this video", that new message should come through. There were a couple of mistakes on my side here: I had to cd back into Downloads/kafka and rerun the command, and an extra character had slipped into the topic name "events", so a test message "hi" did not arrive; after fixing the typo I send "hello", and "hello" is received. So now it's good, and notice that this consumer received only from "hello" onwards.

Now let me create a new terminal and copy the same command, this time keeping the --from-beginning tag:

bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092

cd into the Downloads/kafka directory and run it. Since it reads from the beginning, it should get all the messages from the start, and you can see it got everything: "this is my first event", "subscribe to daily code buffer", "enable notifications", "click on the bell icon", "like this video", "hi", "hello". So here this consumer got all the data, while the other one got only "hello". This way you can see we got the data from two different offsets: one from the start of the topic, and one from the particular point when the consumer connected. We can do the same in our applications as well. And within the logs you can also see who the leader is and the different receivers connected to our Apache Kafka cluster; all that information is there.

So that was how to install Apache Kafka and how to work with it from the terminal. Now let's create a Spring Boot application to put all these pieces together. What we are going to build today is the same example from before: a cab booking application, where one app is for the user and one app is for the driver, and we have a Kafka cluster in between. The driver will push its location data to the Kafka cluster, whatever the latest location of the driver is, and the user will listen to that cluster to get the latest location of the driver. That is the simple application we are going to build using Spring Boot, and we already have the Apache Kafka cluster installed and completely working, so we are going to use the same one. For that we will create two applications, and to create them, let's go to the browser and open the Spring Initializr.
start.spring.io is the best place to create our Spring Boot applications. We'll create the projects with Maven, language Java, Spring Boot version 3.1.5. The group information I'll give is com.dailycodebuffer, and the artifact is cab-book-driver; this is our driver application, with jar packaging and Java version 17. For the dependencies I'm going to use the Spring Web dependency, and I want Apache Kafka, so I search for it: "Spring for Apache Kafka", which will publish, subscribe to, store, and process streams of records; that is what I want, so I add it. With these dependencies I generate the project, and you can see cab-book-driver is created. I need the same thing for my user app, so I just change the artifact name to cab-book-user, keep the rest the same, and generate that project as well. So our two applications are created.

Now let me open these two applications in IntelliJ IDEA. Within my kafka directory I have the two applications, cab-book-driver and cab-book-user, so in IntelliJ I open cab-book-driver, then from the Maven view I add one more Maven project, cab-book-user, and now both applications are there. If you go to the pom.xml file you can see we have the two dependencies, spring-boot-starter-web and spring-kafka, plus their test dependencies. Now, for cab-book-driver what we need is to publish messages to Apache Kafka, and the cab-book-user app will listen to those messages; simple.

So let's go into the main Java sources and create the application. We need one API which, when hit, creates the data for us. Let me create the packages first: a controller package, a service package, and a config package to hold my Kafka configuration; those are the three packages. Within the controller package, create a new Java class, say CabLocationController; this controller will handle the cab location for us. Let's create the matching service as well, CabLocationService, so we have two classes, one service and one controller. To make the controller a REST controller, annotate it with @RestController, add @RequestMapping("/location"), and since we need the service object here for the business logic, declare a CabLocationService field and mark it @Autowired so we can use it.

Now we need one API method which, by hitting that particular endpoint, publishes the location of the driver. So I define public ResponseEntity updateLocation(), and return a new ResponseEntity with a body of Map.of("message", "Location updated") and an HTTP status of HttpStatus.OK, that is, a 200 response telling the caller the location was updated successfully.
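Put together, the controller so far would look roughly like this; it is a sketch, with the package name and generic type assumed, and the body still to be filled in with the publishing logic:

```java
package com.dailycodebuffer.cabbookdriver.controller;

import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/location")
public class CabLocationController {

    // Service holding the business logic; its publishing code comes later.
    @Autowired
    private CabLocationService cabLocationService;

    public ResponseEntity<?> updateLocation() {
        // Business logic (publishing the driver's location) will go here.
        return new ResponseEntity<>(Map.of("message", "Location updated"), HttpStatus.OK);
    }
}
```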
That response just tells the caller what happened; the real business logic goes in between, where we will call the CabLocationService. That is the controller; now we need to add the Kafka configuration as well, because first we need to know where we are publishing our messages. Within the config package, create a new Java class KafkaConfig and mark it @Configuration. Inside KafkaConfig we create a topic as a bean we can use: define a method that returns a NewTopic (this NewTopic class comes from the Apache Kafka clients), and build it with the TopicBuilder, the builder design pattern for topics: TopicBuilder.name(...) with the topic name, cab-location. You can see there are different parameters you can chain: configurations for the topic (the same settings we saw when creating topics from the command line), how many partitions, how many replicas to assign, and so on. For now we just build it with the default values, and we annotate the method with @Bean.

One more thing I'll do is turn the topic name into a constant: I'll create a Java class AppConstants and define the constant there, and back in the config I use AppConstants.CAB_LOCATION, which is much cleaner. So the configuration is done, and whenever I want to refer to the topic I can use this constant.

Now back in the controller and its updateLocation method: ideally, whenever there is an update operation for a specific thing we use PUT, so let me add @PutMapping here. And let me put a simple placeholder logic in: once I get the request, I'll emit random values as the location. Define int range = 100, and while range is greater than zero (decrementing each time), print Math.random() + "," + Math.random().
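A sketch of that configuration class and the constant; the class and constant names approximate what the video dictates, and TopicBuilder comes from spring-kafka:

```java
package com.dailycodebuffer.cabbookdriver.config;

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class KafkaConfig {

    @Bean
    public NewTopic topic() {
        // Built with default partitions/replicas; .partitions(n) and .replicas(n)
        // could be chained here, like the CLI flags we used earlier.
        return TopicBuilder.name(AppConstants.CAB_LOCATION).build();
    }
}

// In its own file: the constant holding the topic name.
class AppConstants {
    public static final String CAB_LOCATION = "cab-location";
}
```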
Consider this my simple placeholder logic: once I get the request, until the range is exhausted I print random values; treat them as a location, your lat/long. We will wire up the actual publishing next, but the idea is that for a particular range I just generate random locations.

Now, whenever we connect to Kafka from Spring, we need to tell our Spring Boot application where to connect, and we do that configuration in the application.properties file. We need to define spring.kafka.producer properties, because this is a producer application. First, which server to connect to: earlier, when working with the CLI, we passed --bootstrap-server, and here likewise we define the bootstrap servers as localhost:9092. Next, whenever data is transferred over the network those objects get serialized, so we define what type of serializer to use: since we are going to store our data in string format, I tell Kafka that within my Spring Boot application the data should be serialized as strings, using the org.apache.kafka.common.serialization.StringSerializer class for the key, and the same for the value. And one more thing to define is the port the application should run on. So altogether the properties are:
spring.kafka.producer.bootstrap-servers=localhost:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
server.port=8082

Now if I start my cab-book-driver application, you can see there is an error: "Consider defining a bean of type ... CabLocationService". That makes sense: we autowired the service, but it is not in Spring's context, so we annotate the service class with @Service. Run it again and the application starts.

Next I'll go to Postman and hit the API we created: a PUT request to localhost:8082/location; if you go to the controller, that is the @RestController with request mapping /location and the @PutMapping. Once I hit send, it generates the location 100 times within that range and completes the request. Now, instead of printing, let's publish those values to Kafka; the printing was just an example of what happens.

To publish the messages we go to CabLocationService and write the code there, so let me stop the server. To publish, we use the KafkaTemplate: it gives us a template for sending data to the Apache Kafka cluster, with whatever key/value type combination we want and whichever topic we need to send the data to; all that configuration happens here. So I declare a KafkaTemplate where the key is String and the value is Object, name it kafkaTemplate, and autowire it. Then I create one method, public boolean updateLocation(String location), which returns true, and in between goes the logic: use the kafkaTemplate and send the data. To send, we give the topic first and then the data, so the call is kafkaTemplate.send(AppConstants.CAB_LOCATION, location). See how simple the method is: we just use the KafkaTemplate to send data to the topic. And note that I never created this cab-location topic by hand on the broker; Spring Boot does its auto-configuration and will create the topic for us, from the NewTopic bean, if it is not available.

Now in the controller, rather than printing, I call cabLocationService.updateLocation(...) and pass the data. And since this data should be updated at every time interval, suppose every 1 second, I add a Thread.sleep(1000) inside the loop and add the exception to the method signature.
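Here is the service and the reworked controller method as a sketch of what the video describes; the exact class names and the lat/long string format are assumptions, and AppConstants.CAB_LOCATION is the constant from the config sketch above:

```java
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// File 1: the service publishes a location string to the cab-location topic.
@Service
class CabLocationService {

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    public boolean updateLocation(String location) {
        // send(...) takes the topic first, then the data.
        kafkaTemplate.send(AppConstants.CAB_LOCATION, location);
        return true;
    }
}

// File 2: the controller's print loop becomes a publish loop.
@RestController
@RequestMapping("/location")
class CabLocationController {

    @Autowired
    private CabLocationService cabLocationService;

    @PutMapping
    public ResponseEntity<?> updateLocation() throws InterruptedException {
        int range = 100;
        while (range > 0) {
            // A fake lat/long, published once per second as in the example.
            cabLocationService.updateLocation(Math.random() + "," + Math.random());
            Thread.sleep(1000);
            range--;
        }
        return new ResponseEntity<>(Map.of("message", "Location updated"), HttpStatus.OK);
    }
}
```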
So now, every second, I publish my data, as if the driver is on the way and updating the location every 1 second. Let's start the application and see what happens.

The application is started, and the topic should now exist; to check whether my topic is created, I go to a new terminal, cd into Downloads/kafka, and use the describe command we had, with the topic name cab-location:

bin/kafka-topics.sh --describe --topic cab-location --bootstrap-server localhost:9092

And you can see the topic is created: the name of the topic is cab-location, there is the topic ID, the partition count is one, the replication factor is one, and so on. Now I'll listen to this topic by creating a consumer for it, so whatever data gets published, we can see it here; it is the same read-events command, with only the topic name changed:

bin/kafka-console-consumer.sh --topic cab-location --bootstrap-server localhost:9092

It is listening. My application is already started, so I go to Postman and hit send, and you can see it is publishing the data, and in the terminal I am receiving the data in my consumer. So my publisher is working; for the consumer I just connected directly from the CLI. Now I need to create a similar consumer as an application: the user application that will read this data. Cool, right? So let's go back to IntelliJ IDEA and build the user application as well.

Within my cab-book-user application: I'll go to cab-book-driver's application.properties and copy all those properties, since I need the server and serialization settings and so on there as well, and paste them into cab-book-user's application.properties. Then I change a few things: this application should work on port 8081, and since these are consumer settings, producer becomes consumer in each key, and because a consumer reads data, the serializers become deserializers (the String deserializer class):

spring.kafka.consumer.bootstrap-servers=localhost:9092
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
server.port=8081

One more thing I need to define here is the group ID, because if you go back to the terminal, we saw that group IDs are used for consumers; you can see which group our console consumer ran under. We have to define a group as well, because the group is what defines which consumers fetch the data. Let me show you: say this is a sender, these are consumer one through consumer four, and this is your Kafka topic, maybe named cab-location, and your sender is sending the data. How these different consumers are grouped is based on the group ID: suppose consumer C1 is registered to cab-location using group ID G1, while the other three are registered using G2, and the sender publishes three messages. Since consumers are grouped, C1, alone in G1, receives all three messages; but C2, C3, and C4 all share G2, so between the three of them they receive the three messages, because within a group each message should be processed only once: the first message can go to one of them, the second to another, and the third to another, entirely based on how the Kafka cluster and ZooKeeper distribute them.
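A sketch of those group semantics with spring-kafka listeners (the group and topic names are invented): two listeners in different groups each see every message, while listeners sharing a group split the topic's partitions between them.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class GroupSemanticsDemo {

    // Alone in group g1: receives every message on the topic.
    @KafkaListener(topics = "cab-location", groupId = "g1")
    public void soloConsumer(String message) {
        System.out.println("g1 got: " + message);
    }

    // These two share group g2: each message is delivered to only ONE of
    // them, with partitions divided between the two. (With a single
    // partition, one of them would simply sit idle.)
    @KafkaListener(topics = "cab-location", groupId = "g2")
    public void sharedConsumerA(String message) {
        System.out.println("g2/A got: " + message);
    }

    @KafkaListener(topics = "cab-location", groupId = "g2")
    public void sharedConsumerB(String message) {
        System.out.println("g2/B got: " + message);
    }
}
```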
So that is the concept behind groups of consumers, and that is why we need to define the group ID for our consumers. In our application properties I define the group:

spring.kafka.consumer.group-id=user-group

My group is defined as well. Now I'll define a service: create a service package, and within it a LocationService class annotated with @Service. This service is supposed to listen for the messages, so we create the Kafka listener: public void cabLocation(String location), a method that takes the location string as input. But where does that string come from? It gets injected via the @KafkaListener annotation on the method, where we define the topics, cab-location, and also the group ID, user-group, matching the properties we just defined. Inside, let's just print the location. So simply, what I've done is create a Kafka listener, listening to the cab-location topic that we already created from the driver service, under the user-group group as we discussed; whatever data comes in is injected here and printed. You can use the data however you want, but this is just the example.

Now let's start the cab-book-user application. You can see "user-group: partitions assigned", and in the logs a line saying there are no committed offsets for this group, so it is resetting the offset for partition cab-location-0 to position 100, because we already published 100 messages. That offset has reached 100, so from now on we will receive data from the 101st message onward; we are not going to receive the first 100 records. If we want them, there is a setting available, which I'll show you in a moment; it is the equivalent of the --from-beginning tag we used earlier. But for now, understand that it starts from offset 100, because the first 100 offsets are already used.

So now it should get printed here: I go to Postman and hit the URL, and you can see the consumer application has started receiving the data; and if I go to my terminal consumer, it is receiving the data there as well.
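To recap the consumer side, here is the listener service as described; a sketch, with the package left out and the names following the video:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class LocationService {

    // Called by spring-kafka for every message on cab-location; the message
    // payload is injected as the location argument and simply printed here.
    @KafkaListener(topics = "cab-location", groupId = "user-group")
    public void cabLocation(String location) {
        System.out.println(location);
    }
}
```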
So data is being received in both places: my two consumers are connected to my producer, one producer is producing the data, and the entire application is working fine. Now, as I said earlier, if you want to receive data from the start, the initial offset, you can define that too, using the property spring.kafka.consumer.auto-offset-reset=earliest. With this value we define that the consumer's starting position (when it has no committed offsets) is the beginning of the topic, offset zero, and from there we get the data, instead of only the new values. So that configuration is available as well.

So here you can see we created our cab booking application, where the driver publishes the location information, the user receives the location information, and Kafka sits in between. We were able to create an entire publisher/subscriber application using Kafka, and we got to understand everything about Kafka along the way. You can create simple Kafka applications like this and use Kafka within your own projects; it is really easy to implement, and Kafka's architecture is really easy for any developer to get started with. Most organizations use Kafka in one way or another, so it is really important to learn if you are working in an organization, planning to work in one, or preparing for interviews; Kafka will help you a lot there as well.

So this was all about Apache Kafka: why we use it, what its applications are, its advantages, and its architecture. We also saw how to install Apache Kafka, the different commands to use it, and we created a complete dedicated application using Apache Kafka with Spring Boot. If you have any questions about any of the topics we covered, let me know in the comment section below; I will also share the link to this code in the description for you to check out. If you liked this video, give us a thumbs up and subscribe to my channel for the upcoming videos; also click on the bell icon to get notified of all new videos. You can also click on the join button to join my channel and support me. I will see you in the next video; till then, happy coding, bye-bye!
Info
Channel: Daily Code Buffer
Views: 46,808
Keywords: apache kafka, kafka, kafka tutorial, spring boot, apache kafka tutorial for beginners, apache kafka tutorial, apache kafka explained, kafka tutorial for beginners, kafka spring boot, kafka components, kafka producer, kafka architecture, kafka installation, kafka consumer spring boot, kafka consumer java example, kafka implementation in spring boot, kafka with spring boot microservices, daily code buffer, kafka basics, kafka consumer
Id: tU_37niRh4U
Length: 56min 48sec (3408 seconds)
Published: Thu Nov 16 2023