OTEL Collector on Docker | How to Setup OpenTelemetry Collector locally with Docker | #docker #devops

Captions
Hello everyone, welcome to my channel, this is Bhoopesh. Today we'll be taking up another very important topic in the OpenTelemetry space. The last video on this playlist was about creating an OpenTelemetry monitoring environment to monitor and collect traces from a microservices-based application on Kubernetes, but today we'll be looking at a Docker-based environment, because that was requested by a lot of viewers. So let's quickly do that today, and then we'll plan the next sessions accordingly.

First of all, let's quickly understand what the OTel Collector is. The OTel Collector is, again, an agent, similar to a Grafana agent or an AppDynamics agent or any other agent, but it is vendor-agnostic: it can be moved from one environment to another quickly, and migration is easy. That is the reason it has become pretty popular. If you look at the right-hand side of the diagram, it basically has three components: a receiver, a processor, and an exporter. Receivers receive the metrics, logs, and traces on the OTLP HTTP and gRPC ports; processors then process that data, maybe via filtering or some enrichment; and finally exporters export the data to a relevant endpoint. That endpoint can be a Jaeger backend, a Prometheus endpoint, or any other OTLP endpoint.

Now, in today's demo we'll be installing everything onto a Docker-based environment; there is no Kubernetes involved today. I'll be installing a microservices application in Docker containers, then we'll install the OTel Collector as a container to collect the metrics and traces, install Jaeger in one container, install Prometheus in another container for collecting metrics, and install Grafana to check the dashboards. Finally, Prometheus will be added as a data source in Grafana to visualize the data.

Okay, so let's quickly get started. There's a small GitHub page for this as well; you can simply come to this GitHub page and its Docker README. The reference documentation I'm following is the opentelemetry.io documentation, which is very good for understanding purposes. We'll be using that documentation as a reference and then do a quick demo of OpenTelemetry using a Docker-based environment.

As a first step, you can see we need to clone the demo repository. I'll come onto my Ubuntu box and quickly clone that repository; you just need to copy this git clone command. So I go to my working location and check whether I already have that folder. I do, so I'll do an rm -rf to delete everything and then do a fresh git clone in front of you, so that it becomes crystal clear. I'm doing a git clone of that opentelemetry-demo repository. Once it is cloned properly, I'll open that folder in another VS Code window, because we are not blindly running everything; we'll be explaining each and every piece so that you can understand how to do any kind of additional instrumentation. So this is the folder that I've just cloned, and I can see it has been cloned properly.
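For reference, the clone step looks like this. This is just a sketch of the commands shown on screen; the repository URL is the official opentelemetry-demo repo, and the rm -rf line only recreates the fresh-clone situation from the video:

```sh
# remove any stale copy of the repo first, as done in the video
rm -rf opentelemetry-demo

# clone the official OpenTelemetry demo repository and enter it
git clone https://github.com/open-telemetry/opentelemetry-demo.git
cd opentelemetry-demo
```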
Now, what is the next step? The next step is to get into this folder, so I'll do that; now I'm inside it. Before running the docker compose command to bring up all the containers, let's understand what the command is doing. docker compose up runs the Docker Compose file, and there are certain flags to make sure that we recreate any container which is already there, remove any orphan containers, and then detach from the terminal.

Let's look at the Compose file itself; this is the docker-compose file that we are going to run. There are certain environment variables used in it, like the image name and image version, and those come from the .env file within the same repository. You can see that the image name is actually the opentelemetry demo image, then the image version, and so on. All the other variables used in the Compose file you can easily find in that .env file, and you can modify them as per your need.

If I scroll down, there are a lot of services for this microservices-based application. The first one is the accounting service. These services are the endpoints of the application, the ones responsible for serving the URLs. If you really want to see the source code of the accounting service, go to the same location: src is the folder, then accountingservice, and there's a Dockerfile inside it. This is the Dockerfile of the accounting service, and what it is actually doing is this: it starts from an Alpine-based image, sets the work directory, copies all the contents into the container, and builds and runs the application. It's a Go-based application, so it simply runs the accounting service binary. Then there are certain resource limits, and certain environment variables that this application needs. OTEL_SERVICE_NAME is a very important one, because it is used by our OTel Collector: it is the service name of this application.

Then, similarly, we have other services: the ad service, which will be exporting logs and telemetry to the OTLP port, the cart service, and many more; there are a lot of endpoints in this application. There's also a load generator container, which actually helps you generate load against the application using Locust. The payment service is another endpoint, and there are certain depends_on entries to make sure a service runs only once the other services it needs are up and running. There's also a Kafka queue to make sure that we are not losing any messages, and Redis is used as well. These are all the application-related containers.
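To give a feel for what one of these service definitions looks like, here is a simplified, hypothetical sketch in the style of the demo's docker-compose file. The exact image tags, memory limits, and dependencies differ in the real file, so treat this as illustrative only:

```yaml
# illustrative sketch of one application service entry, not the exact demo file
accountingservice:
  image: ${IMAGE_NAME}:${IMAGE_VERSION}-accountingservice  # resolved from the .env file
  build:
    context: ./
    dockerfile: ./src/accountingservice/Dockerfile
  deploy:
    resources:
      limits:
        memory: 20M                                        # per-service resource limit
  environment:
    - OTEL_EXPORTER_OTLP_ENDPOINT=http://otelcol:4317      # ship telemetry to the collector
    - OTEL_SERVICE_NAME=accountingservice                  # service name the collector reports
  depends_on:
    - otelcol                                              # start only after the collector is up
    - kafka
```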
Now let's move on to the monitoring components. From around line 606 of the Compose file you can see everything related to monitoring. We have a Jaeger installation in one of the containers, with the container name jaeger, and there are certain prerequisites it needs, like a Prometheus address, because ultimately Jaeger needs to talk to Prometheus as well; then there are the deploy resources, and finally the service ports. All of these values you can easily find in the .env file, as I was explaining earlier: the Prometheus address is nothing but the Prometheus service host plus the service port, so basically prometheus:9090, because Prometheus runs on port 9090 by default.

Coming back to the Compose file: once Jaeger is up, it does not depend on anything, it is an independent container. One of its ports is the service UI port, and the other is the OTLP gRPC default, port 4317, which is where our traces will be sent by the collector. Another important point to note here is the Grafana container, which is using a 10.2 image of Grafana. If you want to see the grafana.ini configuration, you can again come to src, then the grafana folder, and grafana.ini; this is the basic default configuration of Grafana that we are passing, and we can modify anything from here as well. Through this Grafana configuration we are also provisioning certain dashboards: you can see there is a provisioning folder with dashboards and datasources. Jaeger will be added as a data source by default, and four demo dashboards will be added by default whenever you bring this Compose file up; demo.yaml is the file that actually loads all these dashboards. Perfect, so that is the whole Grafana container configuration. As for the Grafana service port, ideally it should be in the .env file too, so let's check what the value is. Yes, you can see the Grafana service port is 3000, which is expected.

Okay, now this is the important piece that I want to explain: the otel-collector service. This is the agent, and you can see we are pulling the collector container image. There are resource limits, and then there is the configuration file that defines how the metrics and traces are handled, so let's understand this piece properly. It is placed under src/otelcollector: come to src, then otelcollector, and open this otel config file. This is the one referenced in the Compose file. In it we have receivers, processors, and exporters. First of all, data is received on the OTLP receiver over two protocols, either the gRPC port or the HTTP port; this is the port definition, and there are allowed origins configured so that we can receive telemetry over both http and https. Then, for the exporters and processors, if you scroll down there is the service section with the pipelines, which is what actually wires the job together. If we really talk about metrics: the receivers come first, so metrics arrive via the OTLP route, and there is also the spanmetrics connector, which derives metrics from spans. Then come the processors, such as filter and transform (you can ignore those for now, or simply use batch, which is there under processors as well), and then finally the exporters, which ultimately send your metrics to the Prometheus database.
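To make that concrete, here is a minimal sketch of a collector configuration with the same shape as the demo's otelcol-config.yml. The ports, the jaeger endpoint name, and the CORS origins are assumptions on my part, so check the actual file in the repo before relying on them:

```yaml
# minimal sketch in the spirit of the demo's otelcol-config.yml; values are assumptions
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317        # OTLP gRPC, the port traces are sent to
      http:
        endpoint: 0.0.0.0:4318        # OTLP HTTP
        cors:
          allowed_origins:
            - "http://*"              # accept browser telemetry over http and https
            - "https://*"

connectors:
  spanmetrics: {}                     # derives request metrics from incoming spans

processors:
  batch: {}                           # batch telemetry before exporting

exporters:
  otlp:
    endpoint: jaeger:4317             # hand traces to the Jaeger container
    tls:
      insecure: true
  prometheus:
    endpoint: 0.0.0.0:9464            # metrics endpoint that Prometheus scrapes

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp, spanmetrics]  # spans also feed the spanmetrics connector
    metrics:
      receivers: [otlp, spanmetrics]
      processors: [batch]
      exporters: [prometheus]
```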
So, to put the flow together: first the receivers catch the metrics coming in, for example from the front-end proxy, then the processors apply some default filtering, and finally the exporters send the data on to Prometheus. Receivers, processors, and exporters: those are the three components, and the same structure applies for the logs pipeline and for the traces pipeline as well.

Fine, so I think we have understood everything; let's quickly run this Docker Compose file. Before running it, let me show you the other components that are there. We have Prometheus, so that all the data ends up in the Prometheus data source; that part is fairly easy to understand. There is a Prometheus config file as well, under src, then prometheus, and then the prometheus config; there's nothing special in it, just a default configuration. After that we also have OpenSearch, another container, which is helpful for searching the details related to your tracing, and finally there are frontend tests and certain integration test files.

So let's quickly run the docker compose command from the documentation; I'll run it from that location. Since the images are already there (just to save time, I pulled those images yesterday so that there is no delay from image pulls), it is just creating all the containers. Once the containers are up, we can access everything from these URLs: this is the application URL, this is Grafana, this is the load generator I was talking about (Locust), and this is the Jaeger UI from where you can see the traces.
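For reference, the run command and the default URLs look roughly like this. The command and paths below follow the demo's Docker documentation as I understand it, so double-check them against the README of the version you clone:

```sh
# recreate containers, drop orphans, and run in the background
docker compose up --force-recreate --remove-orphans --detach

# once everything is up, the demo exposes these UIs (default ports):
#   web store:       http://localhost:8080/
#   Grafana:         http://localhost:8080/grafana/
#   load generator:  http://localhost:8080/loadgen/
#   Jaeger UI:       http://localhost:8080/jaeger/ui/
```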
Okay, interesting: port 9090 is already in use. Let me quickly see what the problem is. So there was a problem with Prometheus: the container was not coming up because Prometheus was already running locally on this machine. I just stopped that local Prometheus in a separate window, and now we are good; we restarted all the containers and you can see everything running properly.

Let's go back to our documentation and check whether the application is up and running. I simply come to port 8080 and I can see the OTel demo application is running properly. I'll click on one particular product, add it to the cart, and place the order: "your order is complete, continue shopping". I'll do one more, adding to the cart and placing the order again. Since this is done, let me check Grafana: I'm able to log in to Grafana. And let me check Jaeger: I'm able to search for any particular service, and I can see the traces coming in for the endpoints. Perfect, so this part is also working fine.

Let me also quickly show you the feature flags; this page has a description of all the feature flags that are running on the OpenTelemetry demo. And let me show you the load generator: I'll stop the existing test and start a new one with 100 users and a spawn rate, then start swarming. This will generate a new load onto our application, which will produce a lot of traffic. Now that this is done, let me finally look at the Jaeger endpoint, which is actually collecting the traces. If I click on Find Traces, I can see a lot of traces coming in over the last few minutes, and I can select other services too. These are all the different containers that we were looking at in the docker-compose file; they are nothing but parts of your application.

Now, the important point to take away here is that we have done this demo using Docker, and the best part is that we can make configuration changes with the help of the docker-compose file: I can make any changes there as per our need, and I can also make changes directly in the src files. If I really talk about the cart service, I can go to its Dockerfile, make certain changes in the source code, and then build the image again, or I can simply pull the prebuilt image from the OpenTelemetry registry. Similarly, we can modify any configuration, whether it is Grafana or the OTel Collector. The collector configuration is the very, very important one; we need to understand it, because that is the place where we modify things as per our need. It is broadly similar to a Grafana agent sending all the metrics to Prometheus. So, broadly speaking, what we have done is install an application and send its metrics through the collector to Prometheus.

Okay, before we wrap up, let's quickly see the data sources too. We can see that Prometheus is there, Jaeger is there, and OpenSearch is also there. Let's see how many dashboards there are. I'll come to the dashboards section: we have a demo folder, and we have an OpenTelemetry Collector dashboard. Interestingly, there is no data on it for the last six hours; let's try some other time range, and let's see some other dashboards. Ah, the span metrics dashboard is giving a lot of data: you can simply see the top seven services, like the load generator, front end, and back end. If you want to understand it in greater detail, you can come here, click edit on a panel, and see the metrics used in that panel. Let's not get into the details of the PromQL, because that is a separate session I've already created in my Grafana playlist, but this is how you can create your own dashboards, or use the existing dashboards and modify them as per your need.

So let's quickly wrap up the session, just to make sure that we are aligned with the original requirement. What we have done: we have installed a microservices app, but this time onto containers; then, rather than installing the Grafana agent, we have installed the OpenTelemetry Collector to collect all the metrics, logs, and traces; we installed Jaeger separately as a container and Prometheus separately as a container; and then we installed Grafana to check all the dashboards, with Prometheus added as a data source. That is pretty much it. If you really liked this video, please like and subscribe, and keep a watch on the different playlists on this channel. Happy learning, see you next time, bye-bye.
Info
Channel: Bhoopesh Sharma
Views: 521
Keywords: distributed tracing, distributed tracing in microservices, grafana, how to use opentelemetry, jaeger, locust, observability, observability aws, open telemetry, opentelemetry, opentelemetry .net, opentelemetry .net core example, opentelemetry .net example, opentelemetry asp.net, opentelemetry asp.net core, opentelemetry c#, opentelemetry collector, opentelemetry distributed tracing, opentelemetry dotnet, opentelemetry tutorial, otel, otel collector, prometheus, what is opentelemetry
Id: SfXSU3JDmm8
Length: 19min 51sec (1191 seconds)
Published: Sat Nov 18 2023