Grafana Mimir | Introducing Grafana Mimir | Getting Started with Grafana Mimir | Mimir on Ubuntu

Video Statistics and Information

Captions
Hey everyone, welcome to my channel. Today we'll be talking about another interesting topic, Grafana Mimir, which is ultimately related to Grafana; we'll get into the details in a moment. If you haven't seen the other videos in this playlist, do go and check them out, and please like and subscribe to the channel.

Since the USP of my channel is jumping straight to the point, today's agenda is setting up Grafana Mimir on an Ubuntu box. It's a very simple setup, not meant for a production environment: it's for learning purposes, and you can then scale it to the next level by following the Grafana Mimir documentation.

First of all, what is Mimir? Mimir is a long-term, scalable metrics database, unlike Prometheus. We should not keep long-term production data in Prometheus alone, because its local storage does not scale horizontally; if you load a lot of data into a single Prometheus instance, it can eventually blow up. So instead of relying on Prometheus storage alone, you can use either Thanos or Mimir, whichever suits your environment. Today we'll do a small Mimir setup, send the same data to both Mimir and Prometheus, and then see how to visualize that data in Grafana via data sources.

So what are the steps? We'll install Mimir, configure Prometheus with Mimir as a remote-write target so that the scraped data also goes to Mimir, and then add Mimir as a Prometheus-type data source in Grafana. That's all the theory; we don't do much theory in these sessions, so let's look at the deployment itself. I've created a small GitHub page for this, with all the steps for installing Mimir on your Ubuntu box, and for greater detail you can go to Grafana's own GitHub page and documentation. As I mentioned, Mimir is horizontally scalable, highly available, multi-tenant, long-term storage for Prometheus. So without wasting
any further time, let me quickly install Mimir onto this machine.

Go back to the Mimir documentation page. First of all you need to download the Mimir binary. I'll go to my home directory, create a mimir folder, and go into it; it's an empty folder for now. I download the binary and make it executable; you can see it's a single executable. Then I create one demo file. What is this demo file? This demo.yaml holds the configuration for Mimir. What are we doing in it? Multi-tenancy is turned off, because this is just for learning purposes. Then there's the block storage configuration; all of these storage paths live on your Ubuntu box's local filesystem. There are components such as the distributor, compactor, and ingester, which work together to write your data into TSDB blocks, and finally the port on which Mimir will run. It's a very small configuration; just copy-paste it into your demo file and save it.

Now I have the demo file in place, so I'll come back to my documentation and simply run Mimir. Let's first see whether anything is already running; I don't think so, so let's quickly run it. Okay, the address is already in use, it's already bound by something else. This issue sometimes comes up when you're running a Loki-based setup with a conflicting port. Let me try again: go to the mimir folder and run it, and now it starts. Okay, I see an ingestion-rate-limit warning, but that's fine. Next, I should be able to see the Mimir port up and running; this is just a sanity check, and yes, I can see the Mimir port is up. 9009 is the port number, so we are good. Let me go back to my documentation.
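For reference, the single-process demo configuration used in this kind of setup follows Grafana's getting-started example; a minimal sketch (paths and exact fields may differ slightly from what's shown in the video):

```yaml
# demo.yaml -- single-binary Mimir for learning only, not for production.
multitenancy_enabled: false

blocks_storage:
  backend: filesystem            # store TSDB blocks on the local disk
  bucket_store:
    sync_dir: /tmp/mimir/tsdb-sync
  filesystem:
    dir: /tmp/mimir/data/tsdb
  tsdb:
    dir: /tmp/mimir/tsdb

compactor:
  data_dir: /tmp/mimir/compactor
  sharding_ring:
    kvstore:
      store: memberlist

distributor:
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: memberlist

ingester:
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: memberlist
    replication_factor: 1        # single instance, no replication

server:
  http_listen_port: 9009         # the port checked in the video
```

You would then run it with `./mimir --config.file=./demo.yaml` and confirm it is listening with something like `curl -s localhost:9009/ready`.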
Now I'll configure Prometheus so that it can scrape the local metrics and also send them on to Mimir. I'll go to the folder where Prometheus is installed, /etc/prometheus, open the Prometheus config file with sudo in Vim, and add the remote_write section. What we are saying is: whatever scraping is happening in Prometheus, for each job scraping metrics from the targets, all those metrics should also go to the Mimir endpoint. Mimir is installed on the local machine on port 9009, and the /api/v1/push path makes sure the metrics go to that endpoint as well. So currently we are sending metrics to both Prometheus and Mimir. That's in place, so let me quickly restart Prometheus: stop it, start it again, and check its status; it's been running since 3 seconds ago. Fine.

Now go to the browser. Grafana is up and running on port 3000, and Prometheus should also be running on port 9090; yes, Prometheus is running too. Before going to Explore, let me go to data sources so that I can add the Mimir database. I add a new data source, pick Prometheus as the type, name it "mimir", and give it Mimir's URL on port 9009, which is where Mimir is running. It doesn't have any kind of authentication, and I'll skip TLS certificate validation. I do a Save & Test, and it's working perfectly. Then I come to Explore, select the mimir data source, and now I'll be reading data from port 9009. Then I'll run my magic query, which simply tells me how many metrics are coming in from the different jobs.
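The remote_write addition to the Prometheus config described above can be sketched like this, assuming Mimir runs locally on its default port 9009:

```yaml
# Forward every sample Prometheus scrapes to Mimir as well.
remote_write:
  - url: http://localhost:9009/api/v1/push
```

After saving, restart Prometheus (for example with `sudo systemctl restart prometheus`) so the change takes effect.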
I'll go to my documentation and run this query in Grafana. Fine, I can see all the metrics are coming into Mimir too, from node_exporter and from the OpenTelemetry Collector. I'm interested in this because I already have an OTel Collector: the Collector is pushing its metrics and my Prometheus is pushing its own, and all of these metrics are going to Mimir. You can check the same thing in Prometheus: the same metrics are arriving there too. That is the beauty of it: if I use a Mixed data source, I can show one query coming from Prometheus and add another query coming from Mimir. So effectively what have we done? We have stored the same data in both databases, Mimir and Prometheus, just for understanding purposes. This one is from Mimir; I can scroll down, switch to table view, set the legend to instant, and do the same for the other query. Now I can see the data from both: if I hide the first query, I see results only from the query below; if I hide that one, I see results only from the Prometheus query. So I have both databases up and running and I can see records from Prometheus as well as from Mimir.

In addition to this, I can add some Mimir-specific dashboards. These are prebuilt Mimir dashboards that you can find quickly on grafana.com; 17607 is the ID of the dashboard shown here. You copy that ID and simply add it as a dashboard, the way I've done it: come to Import dashboard, paste the ID, and you can see the Grafana dashboard there. Let me change the name and the UID so that it stays unique, and import it; you can see the same Mimir dashboard has now been imported twice. There's another one I should mention, "Mimir / Reads"; since my setup doesn't have those metrics it is showing "No data", but these are some of the ready-made dashboards.
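The exact "magic query" isn't spelled out in the captions; a common way to count how many active series each job is reporting, which matches what's described, is something like:

```promql
# Number of active series per scrape job
count by (job) ({__name__!=""})
```

Running this against both the mimir and the Prometheus data sources (or a Mixed data source) should list the same jobs, such as node_exporter and the OTel Collector, confirming that both backends are receiving the data.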
You can simply import these ready-made dashboards on top of your Mimir data source.

So that was a pretty quick showcase of how you can install Mimir as a binary; you can also install Mimir as a Docker container. The moment I kill it, the Mimir process will be gone; I can check that its process ID disappears. Let me show you, one second... you can see the Mimir process IDs are gone, and there's no Mimir up and running now. If I come back to Administration > Connections, the Mimir data source should fail now, because I've already stopped that Mimir binary.

So that's pretty much what we did: we quickly installed Mimir, configured Prometheus as a scraper with remote write so that the data is also sent to Mimir, and added Mimir as a data source. That was a very quick, small introduction to Mimir. Maybe in the next session we'll have a detailed session on the Docker-based implementation and the Kubernetes implementation, which is even more fascinating, but this is a starting point, and we will take it to the next level in the upcoming lectures. If you really want to ask anything about Grafana or Mimir, post it in the comments section and we'll talk about it. And last but not least, do not forget to like and subscribe to the channel to get more videos on Grafana, Kubernetes, and other open-source topics. Thank you, bye-bye.
Info
Channel: Bhoopesh Sharma
Views: 560
Keywords: dashboard, dashboards, grafana, grafana dashboard, grafana dashboard creation, grafana k6, grafana loki, grafana metrics, grafana mimir, grafana oncall, grafana prometheus, grafana prometheus dashboard tutorial, grafana tempo, grafana training, grafana tutorial, grafana tutorial for beginners, graphite, logs, metrics, mimir, monitoring, observability, open source, opentelemetry, prometheus, prometheus grafana, prometheus metrics, prometheus monitoring, thanos, traces, visualization
Id: ichrUTiMgvE
Length: 12min 32sec (752 seconds)
Published: Sun Jan 07 2024