JMeter Container for Kafka Load Test on OpenShift Container Platform

Captions
Hi guys, welcome back. Today I want to talk about some of the work I have done for a project called JMeter Kafka Container. What I call the JMeter Kafka Container is a JMeter container that can perform a load test against a Kafka cluster running on a container platform. So if I have Kafka running on a container platform such as OpenShift, I can use JMeter to perform a load test against it. With this I have also enabled the Prometheus exporters for JMeter and Kafka, so that you have better visibility and monitoring of both JMeter and Kafka.

This is the GitHub repository I created; basically all the information is here. This screen shows a list of the parameters you can pass in to the JMeter container to customize the load test itself. You can run this container in the Docker engine, or you can run it in OpenShift. I have also created a script, located in the bin folder, that helps you quickly configure and deploy the components: it deploys Red Hat AMQ Streams, creates a test Kafka topic, and configures Prometheus and Grafana for you. This is created mainly for OpenShift 4.5/4.6 and above, where you have the option to use the built-in Prometheus and deploy a customized Grafana that points to that Prometheus. To use this you obviously need to have the AMQ Streams operator installed. I also list some limitations here, which you can go through, plus some steps if you are interested in cloning the project and doing your own local testing: you can run a local Apache Kafka server and local Prometheus and Grafana by following those steps. I have also created two sample Grafana dashboards, included in this GitHub repository, which you can import to see what monitoring JMeter and Kafka looks like.

I have also written an article on my personal blog, braindose, where I outline everything in more detail, step by step, including the things you need to watch out for and follow through. All the detailed documentation and configuration is there. Like I mentioned, if you want to use the dashboards you can follow that article; it will point you to the right dashboards to import from the GitHub repository. I will put the two links, for the GitHub repository and the article, in the description of the video so you can refer to them later.

So without further ado, let's look at the JMeter load test itself. Before that, I would like to talk about the test plan. I created a simple test plan which is embedded in, and built together with, the JMeter container. This is the test plan: it has some user-defined variables, which are the ones listed in the GitHub repository. They all have default values, and if you want to change any of these properties you can pass them in through environment variables when you launch the container. This is the thread group, and this is the JSR223 Sampler, which I use to implement a simple Kafka producer client in Java: it basically sets up the properties for the producer and then sends Kafka messages to the desired topic.
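The actual sampler script ships inside the container's test plan, so purely as an illustration of the kind of producer logic described here, a standalone Java sketch might look like the following. The bootstrap address, topic name, and batch size are assumptions for the example, not values taken from the project; in the real test plan these come from the user-defined variables and can be overridden through the container's environment variables.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleKafkaLoadClient {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Bootstrap address: AMQ Streams / Strimzi normally exposes a <cluster>-kafka-bootstrap service
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // batch.size is one of the knobs tuned in the video (e.g. 3700)
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 3700);

        // Create the producer and send one test message to the target topic (topic name is illustrative)
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("jmeter-test-p3r3", "key-1", "hello from the load test"));
            producer.flush();
        }
    }
}
```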
Like I said, I also want to monitor JMeter itself, for example the response times and the JVM usage, so I enabled the Prometheus exporter for JMeter. These are some of the metrics that come with the exporter; you can refer to my article and the GitHub README, where I put links about this component and how to fine-tune or change it. So that is the JMeter test plan I have. I feel it is good enough for whatever test I want to do today, but if you think it is not, you can just clone the GitHub repository and modify it, and if you have made a good enhancement, let me know or contribute it back to the repository.

Now let's head over to OpenShift. I am running one OpenShift cluster just for test purposes. It is an all-in-one, single-node OpenShift, so its performance is fine but limited. I have had some testing running for a while. I have AMQ Streams installed using the operator (with the operator everything is fairly simple), I have this Kafka cluster, and I have two topics here: p3r3 means three partitions and three replicas, and p5r3 means five partitions and three replicas, just to run a test and get a feel for how it looks.

I have deployed the JMeter container here and it has been running for a while now. These are the environment variables I passed in: for example, my batch size is 3700, I have 300 threads, and this is the topic I want to send to. I keep the load running so I can watch how it behaves, until I am satisfied with the statistics. This is how I expose the port for the exporter so that it can be scraped by Prometheus. And this is my sampler label, which is quite useful: I can use a different label for each of my containers, so that when Prometheus scrapes the metrics I can see that this label is scenario A and that other label is scenario B in my testing. Those are the environment variables, and if you look at the logs, it is running in the background.

Let's head to the Grafana dashboard. You can see I have one container running with 300 threads here. These are the overall metrics: so far about 49,000 total requests have been produced. You can also see that I ran another short test against the p5r3 topic, with 100 threads and a ramp-up time of 60; the sampler label makes this quite easy to read. The response time is about 5.5 milliseconds, which is not ideal, but it tells us we may need to tune the batch size so that we can get an acceptable response time versus throughput.
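The exact environment variable names are defined in the GitHub repository rather than spelled out in the video, so the sketch below is only illustrative of how such a load-test pod might be deployed with oc new-app; the image path and variable names are placeholders.

```sh
# Deploy one JMeter load-test pod against the AMQ Streams cluster.
# Image path and environment variable names are placeholders; the real
# parameter names are documented in the project's README.
oc new-app <your-jmeter-kafka-image> \
  --name=jmeter-kafka-test-p3r3 \
  -e KAFKA_TOPIC=jmeter-test-p3r3 \
  -e THREADS=300 \
  -e BATCH_SIZE=3700 \
  -e RAMP_UP=60 \
  -e SAMPLE_LABEL=scenario-a

# The same variables can be passed to a local run outside OpenShift:
#   docker run -e THREADS=300 -e KAFKA_TOPIC=jmeter-test-p3r3 <your-jmeter-kafka-image>
```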
This is the Apache Kafka dashboard, where you can see how the CPU usage goes, how many messages come in per second, and things like that. If I want to run another load test, or another load-test scenario, all I need to do is launch or deploy another container here. I name it jmeter-kafka-test-t100; it sends to this Kafka topic, with my label here, a batch size of 3700, 100 threads, and a ramp-up time of 60. It is simple: you just run oc new-app and it deploys into OpenShift. Coming back here, you can see OpenShift starting to create the container, and yes, it is now running in the back end.

When we go back to the dashboard, it takes a while for Prometheus to scrape the metrics from the new container, but soon you will see new requests coming in under this container's label, for the p5r3 topic with batch size 3700. You can see the number of active threads (active users) starting to increase: that is about 100 threads for this container, so in total it will go up to 400 active users. It is quite flexible and powerful to run JMeter as a container on OpenShift.

What if I want to run more threads, up to thousands of active user threads? Obviously it is not a good idea to run a thousand threads in one container. I have tried it out: if I run 500 threads per container, I get an error inside the container saying it is not able to create the threads. So instead I keep a smaller number of threads per pod (container) and run multiple pods to simulate thousands of user threads. For example, with one pod running 300 threads, if I want to increase the number of threads or users, I can go to the deployment and edit the pod count to, say, 10. Each pod has 300 threads, so running 10 pods gives me 10 × 300 = 3,000 threads (a sketch of this scale-out with the oc CLI is included after these captions). You can see the active user threads increasing over time. This is very powerful: I can simulate a lot of users, as long as I have enough resources in my OpenShift cluster. The number of threads keeps growing over time, and it is probably a good idea to let the test run for a while so you get a stable average in the statistics.

So from here, with this container, I think it gives us a new way to really load test our Kafka on OpenShift. In Red Hat our Kafka flavor is AMQ Streams, so we can now load test AMQ Streams on OpenShift without boundaries. That is all for my demo and all I want to share. I hope you enjoy this and benefit from the video. Feel free to download the container or the source from GitHub and use it for your own projects and load tests. Thank you, that's all. Bye.
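For reference, the scale-out step described in the demo (several smaller pods instead of one pod with thousands of threads) can also be done from the CLI; the resource name below is illustrative and matches the earlier sketch.

```sh
# Scale the load-test deployment out to 10 pods: 10 pods x 300 threads = 3,000 simulated users.
# Use "oc scale dc/<name>" instead if oc new-app created a DeploymentConfig on your cluster.
oc scale deployment/jmeter-kafka-test-p3r3 --replicas=10
```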
Info
Channel: braindose
Views: 2,860
Id: Lu0BGQLr0GA
Length: 15min 41sec (941 seconds)
Published: Fri Mar 12 2021