How to setup Kafka in Windows | How to Load test Kafka using JMeter #apachekafka #jmetertutorial

Captions
Hi, hello, vanakkam, and welcome back to yet another episode on the Littles Law YouTube channel. In this video we are going to see how to set up Kafka on our machine and how to do performance testing on it using JMeter. Before we move on, I will give you an introduction to what Kafka is and some of the key concepts we normally deal with in Kafka. You should know these before your next interview, because many organizations will want you to test, or at least understand, how Kafka works. So I recommend you watch the entire video, and please don't forget to subscribe to the channel.

First, what is Kafka? Kafka is an open-source distributed streaming platform developed by the Apache Software Foundation, designed to handle high-throughput, fault-tolerant, scalable, real-time data streaming. Kafka provides a publish/subscribe model where producers write data to topics and consumers read from those topics. This allows for reliable, durable, low-latency data ingestion, processing, and integration in modern data architectures.

Why do we need to performance test Kafka? There are several reasons:

1. Throughput and scalability. Kafka is known for its ability to handle high-throughput data streams. Performance testing helps determine the maximum throughput a Kafka cluster can handle under various workloads, and it helps identify the bottlenecks and limitations of the system, ensuring it can handle the expected load and scale horizontally if required.

2. Latency and message delivery. Kafka guarantees low-latency message delivery, but it is essential to validate this through testing. By measuring the end-to-end latency of producing and consuming messages, performance testing helps ensure that Kafka meets the desired latency requirements and provides timely data delivery.

3. Fault tolerance and durability. Kafka provides fault tolerance by replicating data across multiple brokers. Performance testing validates the durability and fault-tolerance mechanisms by simulating failure scenarios and observing how Kafka handles them, ensuring that data is not lost and the system recovers smoothly from failures.

4. Consumer throughput. Performance testing measures consumer throughput, which is crucial for applications that rely on real-time or near-real-time data processing. By testing consumer groups, message partitioning, and parallel consumption, we can evaluate the system's ability to handle high-volume consumption.

5. Load balancing. Kafka distributes partitions across multiple brokers, enabling load balancing. Performance testing helps determine the effectiveness of load-balancing strategies and ensures that the workload is spread evenly across the brokers.

6. Cluster sizing and capacity planning. Performance testing provides insight into the resource requirements of a Kafka cluster. It helps in determining the appropriate cluster sizing and capacity planning, such as the number of brokers and the memory, storage, and network bandwidth needed to support the desired workload.

In Kafka we also have a term called ZooKeeper, and we should all know what it is if we really want to test or work with Kafka. In Apache Kafka, ZooKeeper is a centralized open-source service used for maintaining configuration information, managing cluster membership, and providing distributed coordination and synchronization. It acts as a coordination and synchronization service for Kafka, enabling it to function as a highly available, fault-tolerant distributed system. ZooKeeper plays several key roles in Kafka: cluster coordination, leader election, broker registration, topic and partition management, and configuration watches. In short, ZooKeeper provides a simple, reliable way to coordinate and manage distributed systems like Kafka: it keeps the cluster consistent, performs leader election for fault tolerance, and enables distributed synchronization across multiple Kafka brokers. That said, ZooKeeper is no longer a mandatory component: Kafka can also operate without ZooKeeper by using the newer KRaft (Kafka Raft) metadata mode, which provides its own internal metadata management.

Before we move on, let's quickly touch on what a producer and a consumer are. Here on screen we have a producer window and a consumer window: if I type "I am producer" in the producer window and press Enter, the message automatically shows up in the consumer window. So what exactly are they? A producer is a client application or system that publishes messages or records to Kafka topics; it writes data by sending messages to Kafka brokers. Producers are responsible for choosing which topic to write to and determining the partition to which a message is sent. They can send messages synchronously, waiting for an acknowledgment from the broker, or asynchronously, without waiting. Producers can also specify a key for each message, which helps determine the partition the message is assigned to. In short, producers play a crucial role in generating and sending data into Kafka for further processing and consumption; that is what happens in the producer window. Now let's look at the consumer window.
A consumer is a client application or system that reads messages or records from Kafka topics. Consumers subscribe to one or more topics and read data from the partitions assigned to them; each consumer is assigned a specific partition or set of partitions to ensure parallel processing and scalability. Consumers track their progress by maintaining an offset, which represents the position of the last message they have consumed from each partition; they can control the offset manually or let Kafka handle it automatically. Consumers can read messages in various ways, such as one by one or in batches, and they enable data processing, analysis, or further distribution based on the consumed messages.

So this is basically how Kafka works: we have a producer and a consumer. If in the producer window I type "this is a producer and I am sending a message as part of the Littles Law YouTube video" and press Enter, we can see the same value appear in the consumer window. That is how the producer and consumer work together.

Now let's see how to set up Kafka on our machine, because without that you will not be able to test Kafka using JMeter. The first step is to download Kafka onto the machine where you have JMeter. Go to the downloads page at kafka.apache.org; that is where we download from. I downloaded kafka_2.13-3.5.0.tgz, and then I extracted it into a folder under my C drive.
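If you prefer the command line, the download and extraction can be done from a PowerShell or Git Bash prompt; a small sketch is below. The mirror URL is an assumption following the standard Apache archive layout (verify the actual link on kafka.apache.org), and the target folder is just an example:

```shell
# Download the Kafka 3.5.0 binary build for Scala 2.13
# (URL assumed from the Apache archive layout; confirm on kafka.apache.org)
curl -O https://archive.apache.org/dist/kafka/3.5.0/kafka_2.13-3.5.0.tgz

# Extract the archive; tar is available on recent Windows 10/11 builds too
tar -xzf kafka_2.13-3.5.0.tgz

# Then move the extracted kafka_2.13-3.5.0 folder to your preferred
# location, e.g. C:\Vasanth\Kafka (example path from this demo)
```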
There is one issue here. When you download and extract Kafka, you usually end up with the kafka_2.13-3.5.0 folder nested inside another folder, and if you run the commands from that nested layout you can get a "the file name is too long" error. So make sure you do not have a folder inside a folder. Let me show you what I mean: here is the download, and after extracting it you get the folder kafka_2.13-3.5.0. I recommend you copy that inner folder itself, rather than the outer extracted folder, and paste it wherever you want it.

The next step is to set up Kafka on the machine. After you paste the folder into your preferred location (here we have put it under C:\Vasanth\Kafka), go to the zookeeper.properties file, which you will find under the config folder. Open it, and make sure you change the data directory line to a location on your machine, then save it. After that you need to change one more thing, in server.properties: you will find server.properties under the same config folder. Open it and update the log directory in the same way,
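The two property edits look roughly like this. The paths, including the zookeeper-data and kafka-logs folder names, are assumptions for illustration; point them at folders that actually exist on your machine:

```properties
# config/zookeeper.properties -- where ZooKeeper keeps its snapshot data
dataDir=C:/Vasanth/Kafka/zookeeper-data

# config/server.properties -- where the Kafka broker writes its log segments
log.dirs=C:/Vasanth/Kafka/kafka-logs
```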
which is the log.dirs line; this is the location I have, so just make sure your file points to your own location. Once you have done this, the next step is to start ZooKeeper. Here you can see I have started it, and I can show you the command I used: it is bin\windows\zookeeper-server-start.bat, since we are running on Windows, followed by config\zookeeper.properties, the same file we updated a moment ago. Once you run this command, the ZooKeeper server gets started.

So the order is: we start the ZooKeeper service first, and then the Kafka server. The Kafka command is similar to the ZooKeeper one; it is kafka-server-start.bat, run with server.properties instead. ZooKeeper starts with zookeeper.properties, and Kafka starts with server.properties. Here you can see the Kafka server running with config\server.properties. Once both of these are started and running, you are good.

The next step is to open another command prompt. In this example I will use PowerShell; it is more powerful and easier to work with. I open a PowerShell window, and then I need to navigate to the Kafka folder.
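The two start-up commands from this step can be sketched as follows; run each one from the root of the extracted Kafka folder, in its own terminal window, and leave both running:

```shell
# 1. Start ZooKeeper first (Windows batch scripts live under bin\windows):
.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties

# 2. In a second window, start the Kafka broker:
.\bin\windows\kafka-server-start.bat .\config\server.properties
```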
If you look at my previous commands, I always navigate into the Kafka folder first, so let's do the same thing again: open PowerShell and cd into the Kafka folder.

Now we are inside the Kafka folder, and the next step is to create a topic. Let me break the command down for you. The first part navigates into the bin folder and then the windows folder and runs kafka-topics.bat; this executes the Kafka topics tool, built specifically for the Windows operating system, and the kafka-topics.bat script is used for topic management operations. Next comes --create: this parameter specifies that we want to create a new topic. Then comes --topic followed by the topic name; this parameter defines the name of the topic we want to create, in this case littleslaw, and you can replace it with any name you want. Next comes --bootstrap-server localhost:9092. This specifies the bootstrap server to connect to; the bootstrap server is responsible for the initial connection to the Kafka cluster. Here it is set to localhost:9092, indicating that the Kafka cluster is running on the local machine on the default Kafka port, 9092. If your Kafka cluster is running on a different server or port, you need to provide the appropriate hostname and port instead.

Then we set the partitions: --partitions 1. This parameter defines the number of partitions for the topic. Partitions allow for parallelism and scalability in message processing; in this case we are creating a topic with one partition, and we can increase the number of partitions based on our workload and performance requirements. The final part is --replication-factor 1. What is the replication factor? It specifies how many copies of each partition are kept; replication ensures fault tolerance and high availability by creating additional copies of each partition on different brokers within the cluster. Here we are setting the replication factor to 1, meaning there will be only one copy of each partition. For production environments it is recommended to have a replication factor greater than one to handle failures; note that the replication factor can be at most equal to the number of brokers in the cluster.

By executing this command, we create a new topic named littleslaw, with a single partition and a replication factor of 1, in our Kafka cluster, via the specified bootstrap server. Let me run it; here we can see the topic littleslaw has been created. On to the next step.

Next we need a producer and a consumer. Let me open another PowerShell window, maximize it, and show you how to create the producer, breaking the command down the same way. The first part is bin\windows\kafka-console-producer.bat: this executes the Kafka console producer tool, again the Windows version; this script is used to interactively publish messages to Kafka topics from the command line. Next comes --broker-list localhost:9092. This parameter specifies the list of brokers to connect to; the brokers are responsible for handling incoming messages and managing the topic partitions. Again it is set to localhost:9092 because the cluster is running on the local machine on the default Kafka port, and if your cluster is running on a different server or port you would provide the appropriate hostname and port here. Then comes --topic with the topic we already created: littleslaw. This parameter defines the topic to which we want to publish messages, and you can replace it with any topic name you like.

Let me press Enter. Okay, there is a mistake: we were not inside the Apache Kafka folder, that's the problem, so let's cd into it and run the command again. Now the producer is ready; the > prompt tells us it is waiting for input.

Now let's go to the next step, the consumer. Make sure you are inside the Kafka folder; this is mandatory, so let me even note it down so we don't miss it. The same way, I'll open another PowerShell window and create the consumer there. The consumer command is almost identical to what we just did; let me copy it over, and we just need to change the topic name to our topic, littleslaw. It is the same idea: we give the consumer tool location, the bootstrap server with the localhost port, and the topic. And, yes, we made the same mistake again; navigate inside the folder, run the command, and now it looks like the consumer has started as well.
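Put together, the three commands from this section look like this. Run each from the root of the Kafka folder on Windows; the topic name littleslaw and localhost:9092 match this demo setup, so substitute your own values as needed:

```shell
# Create a topic named "littleslaw" with one partition, replication factor 1:
.\bin\windows\kafka-topics.bat --create --topic littleslaw --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

# Start a console producer that publishes messages to the topic
# (newer Kafka versions also accept --bootstrap-server here):
.\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic littleslaw

# In another window, start a console consumer that reads from the topic:
.\bin\windows\kafka-console-consumer.bat --topic littleslaw --bootstrap-server localhost:9092
```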
Let's go back to the producer window. I'll type a message here: "this is the producer, message one." Now back to the consumer, and yes, we can see the message we sent appear here. That is how the producer and consumer work.

Now let's go to JMeter and start setting up the test. I'll start with a new test plan and create a Thread Group, set the number of threads, and then add a Counter. Why do we need a counter? The counter will increment the message number for us. To add one, go to Config Element and choose Counter. The starting value will be 0, and I will increment it by one each time; the exported variable name will be messageCount, which we will reference later, and I am tracking the counter independently for each user.

Next comes the Java Request sampler, which is what we will use to send the values to the producer and on to the consumer. Add it from the Sampler menu: Java Request. We need to make some changes to the Kafka brokers and the Kafka topic here. First, the Kafka broker is going to be localhost:9092; this is the address we use to send the message. You don't need to change the sampler class in the Java Request: co.signal.kafkameter.KafkaProducerSampler is the one we need to run, so just leave it as it is. Then, for the topic, we change the name to the topic we created: I am going to change it to littleslaw, the topic we created for this testing. Next is the parameter key: we saw that in the Counter we created a variable called messageCount, so let me copy it from there; it has to go in with the dollar symbol and braces, as ${messageCount}. The next one is the message itself: we are going to parameterize it, since we are going to send it from a file, so I'll put a parameterized variable there. The serializer settings can stay the same, kafka.serializer.DefaultEncoder and the default key serializer; just leave them as they are.

Then let's add the Load Generator Config, which will actually generate the load; we can use it to generate the data. Go to Config Element and choose Load Generator Config. Apart from this, I forgot to mention a few things: I have added the Kafka plugins. Here you can see I have added kafkameter, Kafka support, and the Kafka backend listener; make sure you add these three plugins, because only then will these particular elements be available to add to your thread group.

So far we have added the Counter, the Java Request, which I will rename to "Kafka Producer," since that is what we configured it as, and the Load Generator Config. Under the Load Generator Config we have to point to a file; I have already created one, config.json, and set the variable name, which is the message variable we used to send the value. What does config.json contain? Let me open the file; here you can see I am just passing in some values; this is just a sample,
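To summarize the sampler setup, the Java Request fields end up looking roughly like this. This is a sketch, not the exact screen from the video: the parameter names follow the kafkameter plugin's conventions as I understand them, and ${message} is an assumed stand-in for whatever variable name you set in the Load Generator Config:

```
Classname: co.signal.kafkameter.KafkaProducerSampler

kafka_brokers            = localhost:9092
kafka_topic              = littleslaw
kafka_key                = ${messageCount}
kafka_message            = ${message}
kafka_message_serializer = kafka.serializer.DefaultEncoder
```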
and this can be anything your client wants you to send; it can even be a username and password, or any set of paragraphs or files. These are the payloads that actually create load on the system, so just make sure you have the right file and that it sits in the right location. In this example I created a new folder, KafkaPT, and copied the file into it, the same folder where the JMX file lives, so that the test doesn't need a full path; I just give it as config.json.

I think we are almost ready, so let me add the listeners: a Summary Report, and finally a View Results Tree to check the values. Once that is in place, let's do a quick validation. Before that, let me save the test plan in the same location, under KafkaPT, as "Littles Law video." Now let me validate the script. We got one error, so let me see what it is... and over in PowerShell, yes, here we have successfully sent the message, so we can confirm the validation is working fine.

Now let's push a few more values through. Let me change the Thread Group to five users and the loop count to five; we are going to send the same value, so let's see how it goes. Let me clear the results and start the test. The test is completed, so let's go to the consumer and check: here we can see lots of values have come through. This is how we load test the Kafka producer and consumer, and I believe I have not missed any of the steps: setting up Kafka, explaining what Kafka, producers, and consumers are, setting up the producer and the consumer step by step, and setting up the JMeter test. At the end, in the Summary Report, you can see the average response time we got, along with the minimum and maximum response times for this test. If you send a very large volume of messages, you will definitely see much higher response times.

With that we come to an end, and I believe this video will be very useful to you. In case I missed anything, please do let me know in the comments section. I will also post this in my GitHub repository so that you can take it directly, use it, and test it in your own environment. Until I meet you in another interesting video, it's bye-bye from Littles Law.
Info
Channel: Littles Law
Views: 1,936
Keywords: 1. What is Kafka, 2. Why do we need to performance test kafka?, 3. What is Zookeeper in Kafka?, 4. What is producer in Kafka, 5. What is consumer in kafka?, 6. How to setup Kafka in Windows?, 7. How to setup JMeter + Kafka and run load test?
Id: USKbPs1KTf0
Length: 29min 53sec (1793 seconds)
Published: Fri Jun 30 2023