Deploy, Configure, and Monitor Traefik with Prometheus and Grafana

Captions
Good morning, everybody, and thank you for coming to today's webinar — I'm sorry, online meetup. We are stoked to be here. My name is Patricia Dugan, and I'm the director of community marketing over at Traefik. Let me tell you what's going on here. The Traefik online meetups are sessions with engineers that show you how Traefik is being used to solve interesting business problems — or challenges, shall we say. We have exciting news, which you may have heard about on our Slack channel: we've introduced a new community forum. I'll put the link in the chat box. It's hosted on Discourse, and it's the place to post your questions, get support, learn about upcoming online meetups and events we'll be at, and catch general announcements like releases and bug fixes.

Today we are super stoked to hang out with @idomyowntricks on Twitter, aka Brian Christner, who is online with us from Switzerland with his company, 56K.Cloud. Most of you know him as a Docker Captain and an online celebrity, and he's going to speak on deploying, configuring, and monitoring Traefik with Prometheus and Grafana. Coming up in July, we'll have Jakub Hajek of Cometari talk about container orchestration with Traefik on Docker Swarm — that should be pretty cool too, and the details will be in the forum, so you can head over there to find out more. One thing I really want you to know is that we're going to take questions and answers during today's session, so please enter your questions in the chat box, or use the Q&A module if that serves you better, and we'll answer them at the end. You can also direct message me if you need to. Find us on Twitter at @traefik, hashtag #traefik; I'm @patricia_dugan, and if you need me, ping me by email — patricia@containo.us — or direct message me here. With that, I think you have everything you need for an exciting, wonderful session today, so I'm going to pass the microphone over to Brian.

All right, microphone taken. Thank you, Patricia, and welcome, everyone — I hope everyone is doing well today. I'm going to share my screen and we'll start right in, shall we? There we go — is everyone seeing my screen? All right, good, let's get started: deploy, configure, and monitor Traefik with Prometheus and Grafana. Today's session focuses on monitoring — why we should monitor, as well as how to deploy Traefik in a configuration that works with Prometheus and Grafana. We want to enable the metrics in Traefik, measure and store them in Prometheus, and visualize them with Grafana. Sound good?

My name is Brian Christner. You can find me online as @idomyowntricks, or contact me at brian@56k.cloud. I'm an SRE and co-founder of a company called 56K.Cloud, a DevOps boutique based out of Switzerland. We are a large company of five — expanding to six next week, actually, and we're very excited. My background is in containers and cloud engineering, and the two tie together quite well, because in today's world you're mixing a lot of these technologies: we're using more cloud, taking the engineering experience we've gathered in the past, and combining it all. I also used to do cloud architecture for large deployments — casinos, large operations, and telecommunication providers. I'm quite passionate about DevOps, not only as tooling but as a culture, trying to push that change into companies and help them transition. And monitoring, of course.
I've been monitoring everything as much as I can for a very long time, even before containers. And then, of course, mountain biking — I'm a passionate mountain biker. I'm also a Docker Captain; for people who don't know, a Docker Captain is an ambassador for Docker, so I do sessions like this one, write blog posts, and do a lot of open-source work.

Enough about me — quickly, about our company. 56K.Cloud, as I said, is based here in Switzerland. This is actually my colleague Dara behind me here, waving, at the top of the Swiss Alps — and he's doing docker pulls up there. The point of the exercise is to prove to companies that we can actually work better outside the office than inside. Evil corporate proxies and things like that — we're trying to get away from them and show that we can enable people wherever they are. We specialize in cloud and containers — he really is doing docker pulls at the top of the Alps faster than most people's offices — and we combine the tech between the two. Let me just check that the questions are okay... good, nobody has any problems with the session. Next slide.

Monitor everything. As I mentioned before, I am extremely passionate about monitoring, and when people talk about monitoring they usually think about it from an infrastructure point of view. We shouldn't limit ourselves to just infrastructure; we should expand it out to physical objects and keep going — monitoring should really encompass everything around us. For example: I'm a mountain biker, and this is my mountain bike here in the middle of the slide. I'm also a tech guy, an engineer, as Patricia mentioned earlier, so I strapped a computer to my bike. I have a GPS that gives me temperature, heading, direction, elevation, heart rate — it tells me how scared I am when I'm going down the mountain, because I have that heart-rate sensor — plus crank speed, all these things. The computer provides all of these metrics to me. I take this information and import it into a program called Strava, an online tool for sport where you can compare your times on different segments against other athletes. Here's a ride I did not so long ago where I became King of the Mountain on a segment called "Fast and Loose," which means I became the fastest rider through that section. Again, I want to reiterate that I'm taking the same concepts we use in the cloud and applying them to mountain biking: metrics and monitoring. I iterated — I found out where my competitor was faster than me, worked out how to go faster through those sections, and eventually became King of the Mountain. Here's the real-time split: you can see myself and Danny Mendler, who clipped me at the end and is now the new King of the Mountain. So you know what I'll be doing this summer: chasing Danny Mendler through the mountains, trying to get my King of the Mountain back. But coming back to the point — it's about measuring, small iterations, and continuous improvement, just like we do in the cloud, applied here to mountain biking.

All right, enough mountain biking — we want to talk about tech, right? So give me a real business example. How about this: in Switzerland we have a company called Digitec. Digitec is our Amazon of Switzerland; they sell all sorts of electronic devices and run a big e-commerce shop. On Black Friday — also a thing here in Switzerland — they went down for four minutes. This is actually their graph, on a Grafana dashboard: you can see the CPU going crazy and then everything just drops off. That's not good on Black Friday.
I mean — you can see they received twelve thousand 502 errors; that's thousands of people who couldn't get their deal. Without monitoring, we wouldn't be able to visualize this at all. On the other hand, they repaired a production outage in four minutes. How many people can say that — "I run a major e-commerce platform and I was able to recover within four minutes"? That's where monitoring gets really interesting: we're able to predict our failure conditions, we know what we're looking for, and that's what monitoring enables. Monitoring is a really important tool for recovering from errors like this; without it, we have no visibility into situations like this.

All right, so how do we monitor Traefik with Prometheus? Before I start this session: we at 56K use Traefik quite a bit, from government organizations to startups to a bunch of different tools, so it applies to all sorts of use cases — and for each of those use cases, we monitor, obviously. In this particular case we're going to use Docker Swarm. On the left we have Docker Swarm — a normal Swarm running containers — and we expose an endpoint to Prometheus. Prometheus is a time-series database, and it scrapes the Docker Swarm services and collects their information. On top of Prometheus we have Grafana, an open-source visualization tool, so we can build dashboards and actually visualize what's happening inside our Swarm. Now we add Traefik into the mix. Here we have the Traefik dashboard — if you've deployed Traefik before, you'll recognize the nice UI in the background — which gives you statistics about frontends versus backends and some real-time information. We're also going to deploy Grafana in this configuration at grafana.localhost: we're telling Traefik that we want to provision Grafana and give it that localhost domain name, and the same goes for Prometheus. Requests then go through Traefik, which connects them to the backends — backend Grafana, backend Prometheus, and whatever other web servers we have. And of course we're going to deploy a cat server in this demo as well, so you can see how it works with non-monitoring applications.

So this is what the stack looks like. We have our Docker Swarm, and Traefik is talking to it — it's actually watching the daemon inside the Swarm — so any time a new container starts, Traefik is aware of it and registers it in its internal registry: okay, a new container started; does it want to be exposed, yes or no, and does it have metrics? Next, Traefik talks to Prometheus: Traefik exposes its metrics and provides them to Prometheus. Prometheus also talks back to Traefik — this is one of the bidirectional relationships, and we'll see more of them when we actually deploy the stack. Traefik also handles Grafana: it provisions the routing to Grafana, handles the domain name, and so on, and the same for Prometheus. I didn't set up any alerts for this demo, but if we had alerts set up, we could send them to Slack, messaging, or email as well.

Okay, enough slides — let's get into the actual repo, and you can follow along because it's open source: head over to github.com/vegasbrianc/docker-traefik-prometheus. I realized after the fact that I made the name way too long, but that's how it is. I'll wait a minute so everyone has a chance to write it down or see it. This repo is basically the entire monitoring stack: it has our Docker Swarm services, it has Prometheus, Grafana, and Traefik, with all these components built in, and we'll walk through each of them in a moment. I think everyone has had time to read it.
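The scrape relationship just described — Prometheus pulling metrics from Traefik over the shared Docker network — can be sketched in a minimal prometheus.yml. This is an illustration rather than the repo's exact file; service names like `traefik` resolve through Swarm's built-in DNS:

```yaml
# prometheus.yml (sketch, assuming the service names used in this talk)
global:
  scrape_interval: 15s        # how often Prometheus scrapes each target

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']   # Prometheus monitors itself

  - job_name: 'traefik'
    static_configs:
      - targets: ['traefik:8080']     # Traefik's dashboard port also serves /metrics
```

This is what later makes both Prometheus and Traefik show up on the Status → Targets page in the demo.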
Any questions so far? No? Cool. All right, I'm going to open this up in a new Chrome browser — here's the repo; let me make it slightly bigger just in case. You can see we have different components, including a Docker Compose file for Docker Swarm. (Let me move this over here... okay, the desktop layout changed, so it looks like I'm looking off into the distance — I need to switch my sharing to desktop one. You like the sound effects, right?) So we have our repo sitting here, and in order for it to work we need Docker and Docker Swarm installed. The README shows how to set up Swarm if you need it; if you're using Docker Desktop for Mac or Windows, Swarm is enabled by default, so you should have it running already. If you're on Windows, you need to run Linux containers, because all the components in this particular setup are Linux. And all the slides, if you're interested, are linked there too — whatever you saw earlier is available as a PDF.

If we scroll down slightly, we can see the goals: provision a Traefik stack with Prometheus metrics enabled, deploy Prometheus and Grafana, verify the Traefik metrics, and then configure some dashboards in Grafana. Now I'm going to open up my terminal and we'll start looking at the stack, shall we? I'll make it slightly bigger. If you were to clone this repo, this is what you would see: the PDF, the Docker Compose file, and the grafana and prometheus directories. Let's take a look at each component separately so we understand what's going on.

The first thing we'll look at is the Compose file — let me open it up (oops, my scrolling goes off the page here). The first service defined in the file is Traefik, and I pinned the version to the traefik:1.7-alpine image. Then we pass commands to it: we set the log level to debug; we enable the API, which also enables the dashboard; and we enable the metrics — this is where we actually turn metrics on in Traefik, because, for your information, they are not turned on by default. The next line tells Traefik what kind of metrics we want — Prometheus metrics — and what buckets to use: 0.1, 0.3, 1.2, and 5.0 seconds. These buckets are essentially time slots: if a request comes in and its duration was 0.1 seconds, it falls into that bucket, and that's how we query it — that's how the Prometheus histogram side works. Next, we tell Traefik that we want to use Docker Swarm mode, and finally we pass it a domain. Since we're running everything locally — I don't have Let's Encrypt installed — we use docker.localhost. The next flag tells Traefik to watch the Docker daemon in case any new containers come along: if a new container or service starts, it's automatically registered with Traefik, which decides whether or not to publish it.

Next up, we have two networks. I like to separate the networks — this is just personal preference. The "traefik" network is really our public network, inbound and outbound; that's where ingress traffic comes in, so anyone wanting to access my web server comes in through it. Then the "inbound" network goes to each service individually — it's how Grafana and Prometheus talk to Traefik, and Traefik talks out to the internet through the public network. Then we mount the Docker socket into Traefik, publish port 443 for requests coming into our web services, and publish port 8080 for the dashboard. If we scroll down some more, we reach the deployment section — this part is Swarm-specific, though the rest of the Compose file works much the same outside Swarm.
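Putting the flags just described together, the Traefik service might look roughly like this. This is a sketch using Traefik v1.7's flag spellings, not a verbatim copy of the repo's file, so details may differ:

```yaml
# docker-compose.yml excerpt (sketch of the Traefik service described above)
version: "3.3"

services:
  traefik:
    image: traefik:1.7-alpine
    command:
      - --logLevel=DEBUG
      - --api                                       # enables the API and the dashboard
      - --metrics                                   # metrics are OFF by default
      - --metrics.prometheus.buckets=0.1,0.3,1.2,5.0  # histogram buckets in seconds
      - --docker.swarmMode                          # read services from Docker Swarm
      - --docker.domain=docker.localhost            # default domain, no Let's Encrypt locally
      - --docker.watch                              # register new containers automatically
    networks:
      - traefik      # public ingress network
      - inbound      # private network to each backend service
    ports:
      - "443:443"    # incoming requests to the web services
      - "8080:8080"  # Traefik dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # watch the Docker daemon

networks:
  traefik:
    driver: overlay
  inbound:
    driver: overlay
```

The socket mount is what lets Traefik observe the daemon and pick up new services without a restart.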
We deploy Traefik in global mode, and we only want it running on the manager node of the Swarm — so if you're running this, make sure you're on a manager node — plus the restart conditions. Then we get down to Prometheus. Prometheus here is basically standard: we're using the standard Prometheus image with fairly standard configuration files, and here's our inbound network. But right here is where it gets interesting: the deploy section is where Traefik reads the information it needs to register the service. For example, we pass a traefik.frontend.rule label, which is how Traefik assigns this service its domain name, and we give it a backend so the frontend is matched to an actual container — this is the domain, and this is the container we want to connect it to. The container publishes port 9090, and we give the inbound network access to it. Again, we constrain it to a manager, but in a real deployment this would run on a worker; for the demo I only have one node, so I keep it on the manager.

If we scroll down, we look at Grafana — also a pretty standard config, though there are some special things in the environment file, and I'll walk through them. First, Grafana provisioning: this is where you can auto-provision dashboards, data sources, user credentials, all of these things — it's quite cool, because you can bootstrap your Grafana instance. Next up, environment variables: this is where we pass in the username and password for Grafana. Then the networks — the inbound network Grafana runs on — and then the Traefik labels, similar to Prometheus before: we give it a domain, grafana.localhost, and a backend it will talk to — the Grafana container on port 3000, over inbound. So that's the Compose file. I'll quickly go into the grafana directory to show you what it looks like — let's cat the config.monitoring file quickly.
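The Prometheus and Grafana service definitions walked through above look roughly like the following excerpt. It's a sketch in Traefik v1 label syntax using the hostnames from the talk; exact values in the repo may differ:

```yaml
# docker-compose.yml excerpt (sketch; continues the stack above)
  prometheus:
    image: prom/prometheus
    networks:
      - inbound
    deploy:
      labels:
        - traefik.frontend.rule=Host:prometheus.localhost  # domain Traefik serves it on
        - traefik.backend=prometheus                       # backend name in the dashboard
        - traefik.port=9090                                # port the container publishes
        - traefik.docker.network=inbound                   # network Traefik uses to reach it
      placement:
        constraints:
          - node.role == manager   # single-node demo; use a worker in production

  grafana:
    image: grafana/grafana
    networks:
      - inbound
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=foobar   # the "very enterprise-secure" demo password
    deploy:
      labels:
        - traefik.frontend.rule=Host:grafana.localhost
        - traefik.backend=grafana
        - traefik.port=3000
        - traefik.docker.network=inbound
```

Note that under Swarm the Traefik labels live under `deploy.labels` (service labels), not container labels.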
You can see we set our very secure password, foobar, with username admin, and I also installed some plugins for the dashboard. Then, if I go into the provisioning directory, you can see how I provision dashboards. I have a dashboard.yml, which basically defines what happens with the dashboards: we provide defaults, we have one organization — we could flesh this out, but it's all standard — and the last setting is where Grafana stores these dashboards inside the container. That's the default, but you still need to understand how it works if you ever want to change it or automate dashboard provisioning. Then we have the actual dashboard JSON — it's quite large, but I'll show you what it looks like: it defines the different graphs and metrics you see within the dashboard, and I'll show you a trick to get these in easily. That's the dashboard part; the datasource part is here as well. (I see a question — one second.) In the datasource file we say that Grafana connects to Prometheus — we point it at the Prometheus service — and we could also do it a separate way, with different authentication to the servers. That's quickly how the grafana directory is used; let me go back to the root directory and clear the screen.

All right, let me answer questions quickly — and please ask questions, because I'm more than happy to take them. Oh, thank you for putting the link in there, Patricia. We have one question: "You mentioned your Prometheus and Grafana would normally be node == worker deployments; for the demo they're on the manager. Why is the constraint necessary — doesn't Swarm handle this automatically?" Good question. If I come back here to the Compose file, you can see I have this constraint here to lock the services to the manager.
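The two provisioning files just walked through can be sketched like this — a minimal illustration of Grafana's provisioning schema, assuming the service name `prometheus` and the default dashboard path, not the repo's exact contents:

```yaml
# grafana/provisioning/datasources/datasource.yml (sketch)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy                       # Grafana server proxies the queries
    url: http://prometheus:9090         # service name resolves on the inbound network
    isDefault: true

# grafana/provisioning/dashboards/dashboard.yml (sketch)
apiVersion: 1
providers:
  - name: 'default'
    orgId: 1                            # the single default organization
    folder: ''
    type: file
    options:
      path: /var/lib/grafana/dashboards # where the dashboard JSON files are stored
```

With these two files mounted, Grafana boots with its data source and dashboards already in place — the "three steps done automatically" seen later in the demo.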
Typically, in a real situation, you would have your monitoring on separate infrastructure, or at least on a separate node from your workload, so it doesn't affect the workload. If you took the constraint out, Swarm would schedule the service automatically on any available node. For this use case, since we only have one node, I kept it on the manager; but in the real world you want to dedicate a specific node, or group of nodes, reserved just for the monitoring. Okay — and yep, that's the link: vegasbrianc/docker-traefik-prometheus.

All right, next we're going to deploy this. Going back to our instructions: we've cloned the repo — done — we've cd'd into the docker-traefik-prometheus directory, and we've reviewed what's happening and how Grafana is used. Now we want to deploy it. In this directory, all we need to do is run `docker stack deploy -c`, give it the Compose file, and name the stack — so let's do that: `docker stack deploy -c docker-compose.yml traefik`. As you can see, it's creating the inbound network, it's creating the traefik network and the services, and it's also creating volumes in the background — it does all of these things automagically. Now if we run `docker service ls`, we should see all our services online: Traefik is running with 1/1 replicas, and so are Prometheus and Grafana. Everything is now online — happy days, DevOps world, we're going home, time to go ride our mountain bikes... but we're not done yet; we're still doing dev and ops, right?

The next thing — let me move this over — okay, so now everything is up and running on localhost. If I go to localhost on port 8080, this is the Traefik dashboard — port 8080 is what we exposed just for the dashboard — and you can see grafana.localhost mapped to the backend "grafana"; that's actually the IP of the container, and the port the container is running on.
prometheus.localhost is the same: it maps to the IP and port of its particular container. If we click on Health, there are real-time statistics — normally; it worked when I showed Patricia a moment ago, so I'm sure it's just something with my demo. What's interesting here is the response time: you can see uptime and average response time. But again, this is real-time information — if you miss it, it's gone. That's exactly why we want to store it in Prometheus and visualize it in Grafana, so we can see it over a period of time.

I'm glad you asked — so now let's go into Prometheus, at prometheus.localhost. Here we have Prometheus up and running, and under Status → Targets you can see we're monitoring both Prometheus and Traefik — and notice that the Traefik target is Traefik's IP plus /metrics. If we come over to that metrics endpoint ourselves, here are the raw metrics that Traefik produces for Prometheus. Prometheus scrapes this information and stores it in its database; this is real-time data, and it's exactly what ends up in Prometheus. You can see backend requests, the different buckets — 0.1, 0.3, 1.2 — configuration reloads, all the raw data. So if you want to script something, or use this information separately, that's how you do it: Traefik's port, forward slash metrics. Anyway, we're up and running, so now we can switch over to the Prometheus expression browser. Scroll to the bottom of the metrics drop-down and you can see all the Traefik metrics being pulled into Prometheus automatically. Instantly we can query, say, backend open connections — nothing there yet; let's find something that's working — there we go: backend request duration seconds count, and here's Prometheus as a backend.
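For reference, the raw exposition Prometheus scrapes from Traefik's /metrics endpoint looks roughly like this — illustrative values, with metric names following Traefik v1's Prometheus exporter:

```text
# HELP traefik_config_reloads_total Config reloads
traefik_config_reloads_total 1
# HELP traefik_backend_requests_total How many HTTP requests were processed on a backend.
traefik_backend_requests_total{backend="backend-grafana",code="200",method="GET",protocol="http"} 10
# Histogram buckets match the ones configured on the command line (le = "less than or equal")
traefik_backend_request_duration_seconds_bucket{backend="backend-grafana",code="200",method="GET",protocol="http",le="0.1"} 9
traefik_backend_request_duration_seconds_bucket{backend="backend-grafana",code="200",method="GET",protocol="http",le="0.3"} 10
traefik_backend_request_duration_seconds_bucket{backend="backend-grafana",code="200",method="GET",protocol="http",le="+Inf"} 10
```

In the expression browser you can then query rates over these counters, for example `rate(traefik_backend_requests_total[5m])` for per-second request rates by backend and status code.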
You can see 200 codes with a count of 10 and a 302 with a count of 1, and we can click on Graph to see it visualized. You can run through the different queries — request totals, durations, per IP, and so on — and it's really handy to see this information. Prometheus is very, very good at storing information, and it's good for writing queries, but for visualization — for really graphing and having a nice dashboard — that's where Grafana comes in. That's why Grafana and Prometheus are so tightly coupled: Prometheus wants to focus just on collecting the metrics and answering queries, while Grafana wants to focus on visualization.

So let's head over to grafana.localhost — there we go. The username is admin and the password is foobar, very enterprise-secure. Right away you can see: install Grafana, check; create your first data source, done; create your first dashboard, done. These three steps were handled by the auto-provisioning, so we don't have to do anything manually — that's what I walked through at the very beginning. If you head over to Data Sources, you'll see the Prometheus data source and how it connects. Head back to Home, open the drop-down, and we have a dashboard there: Traefik. We automatically imported this dashboard. I'm going to set the time range to the last five minutes — we've only really been up and running a few minutes, so we don't have much information, and that's why you only see a little bit. But you can see response time, which backend is being requested, what kind of status codes I'm getting besides 200, and which protocol — HTTPS or HTTP. We can also choose a specific backend: go to Prometheus and there's a bit more information there — really helpful.
And if we come back over to cats, we don't have anything there yet — but this is how you get up and running with the dashboard. That's provisioning a Traefik dashboard; if you want to add more, I'll show you in a moment. So basically, from one command, you have Traefik installed, reverse-proxying to Grafana and Prometheus, and a dashboard up and running where we can start visualizing information. But — I'm glad you asked — there's no real information there yet, right? We're only monitoring the monitor: we're monitoring Prometheus, we're monitoring Grafana. Not really interesting information.

At the very bottom of the README, under "Deploy a new web service," I created a separate stack from a fellow Docker Captain: the cats stack. If we cat the cats.yml — you get it — we can see it's a small service. It's a cat application by Mike, a Docker Captain and really good guy out of Virginia Tech. It joins the inbound network, we deploy three instances of it, we name the domain cats.localhost, the backend is cats publishing port 5000, and we use the inbound network. Just as we did before, we run `docker stack deploy -c cats.yml cats`. It's creating the services — it takes a second, because we asked it to deploy three replicas — and with `docker service ls` you can see I now have three cats apps deployed. It's amazing, I know — it's really cool.

So now we can head over to cats.localhost, and it's serving the cat GIF random generator: every time I refresh, I get a different cat, which is really cool — but it's also being reverse-proxied, so you can see it's served by a different container every time we refresh. You can see the container IDs rotating through the containers on each refresh. Pretty cool, right? Now if we head back over to Grafana — we've generated some traffic with all that refreshing — we're seeing a lot more information. If we go to the backend "cats" and refresh a couple of times, we start to see information from cats coming in. We can only go down to five minutes, but you see 12-millisecond response times, all sorts of requests coming in, the response time, whether we're getting any 404s — it's getting quite interesting. It also tells us right away that we have three backends, and if we have different endpoints we can filter: show only HTTP, for example, or only the Traefik endpoint. It provides a lot of great information — it's very basic, but very powerful, and this is just the Traefik monitoring. You can imagine taking this a step further and monitoring inside the cats container through Traefik, getting the whole visibility — or observability, as they say now. So let's expand our Grafana slightly, now that cats is up and running behind Traefik. Back on our dashboard, we can see we now have cats.localhost going to the backend "cats," with three different containers running behind it — that's right — and you can see the different status codes and the average response time. It's doing all right.
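A cats.yml along the lines described would look roughly like this. This is a hedged reconstruction from the talk, not the repo's exact file — in particular, the image name is an assumption (Mike publishes a cat-GIF demo app, but the exact tag may differ):

```yaml
# cats.yml (sketch; image name is an assumption)
version: "3.3"

services:
  cats:
    image: mikesir87/cats:1.0   # hypothetical tag for Mike's cat-GIF demo app
    networks:
      - inbound
    deploy:
      replicas: 3               # three instances, rotated by Traefik on each refresh
      labels:
        - traefik.frontend.rule=Host:cats.localhost
        - traefik.backend=cats
        - traefik.port=5000     # port the app listens on inside the container
        - traefik.docker.network=inbound

networks:
  inbound:
    external: true              # created earlier by the monitoring stack
```

Deploying it with `docker stack deploy -c cats.yml cats` is enough for Traefik to pick it up, route cats.localhost to the three replicas, and start emitting per-backend metrics for it — no Traefik restart needed.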
as advertised which is good all right all right but say for instance we want to import another graph on a dashboard right instead of writing it ourselves and we can kind of dive into one of these specific dashboards so if we go just click on the title bar here go edit and this is how you write the query so what I normally do is write the query in Prometheus and then I just drop it in here and you didn't work on what kind of visualization you want that's one way to do it you can also write the queries here but it's a quick and easy way to get up and running another way to kind of easily get dashboards is you go to graph on huh right org forward slash dashboards very good and then we just come down here we call trap so here's all the dashboards just for traffic right and I use Thomas's dashboard up here I also have a dashboard let's just pick someone that's not me I think Marcos is pretty good and this is the dashboard that Marcos made quite interesting but all we have to do is take this ID numbers two two four zero we come back to our graph on a dashboard we hit the plus we go import we just give it the ID number load and automatically loads as dashboard in we just have to tell it what datasource import and there we go we have a second traffic dashboard running so now we have two traffic dashboards very quickly and it's very powerful I mean again you can I recommend people going out looking at you know various open source dashboards available and go fana gitlab also has some great ones I think it's a monitor what's it dashboards that get lab comm so get lab is like a DevOps tool they actually publish all their dashboards on lines so you can see every dashboard that get lab uses so if you go down like this I don't know I saw traffic somewhere traffic generation anyway if if they had their traffic dashboard enabled let's just pick a different this is get lab triage as an example so if you find a cool dashboard running on the internet somewhere you can click the share' 
dashboard button you can export it you can save it and then you can just import it into your graph on so again don't start from scratch it's very very easy to use someone else's or follow someone else's dashboards and kind of adjust it to your needs start small because you don't want to over you don't want too many requests going on you know on thousands of tiles because it's really heavy on your server so you know start with like six six or eight is what I recommend so eight or six different tiles and from there and you can actually probably see Michael ruined is join I was actually using your cat's demo cool anyway it's actually right here wherever it is there you go Mike cats that local host I'll publish it later so that's just a quick rundown of why we want to monitor traffic right so it's very important that we monitor traffic to get the information to get the metrics to provide visibility into our infrastructure and traffic out the box does such a great job of giving us this information we can just with the single dashboard we can already tell okay the containers are online if we start getting a lot of 404s here we know right away hey maybe our service is this offline or having some type of gauge degradation somewhere if our response time going up maybe network congestion so just a single dashboard alone it's very useful I use this dashboard quite often so I can recommend it but that is kind of the demo for docker traffic Prometheus if you have questions now would be a great time to put them in the chat or direct message or however you like to see so I'll kind of open the floor up to questions if anyone has any yeah Michael I don't see any coming in but could we maybe introduce I'm sorry quick Brian could we maybe introduce Michael a little bit more since he's here in the room you can't speak site this week for him I guess huh all right well I could let him I have all the power here let's see what happens Hey look at that oops Michael are you cool this I 
forgot to ask. Yeah, I'm fine with being put on the spot. Mike is a fellow Docker Captain; he's based out of Virginia Tech, doing some amazing projects. He's also using Traefik, as far as I understand, for some of his projects, and I know he's built some open-source tools around it too. Yeah, we're definitely doing quite a bit with it locally, for development and all that stuff, and hoping to do a webinar here soon with Patricia and the team, so we'll be sharing some of the stuff we're doing here in the future. Super cool, thank you. Yeah, I look forward to that, and thanks for dropping by; well, thanks for letting me put you on the spot there. I shared their Twitter handles in the chat box, so if you're on Twitter, please follow them. We do have a question from Eric Stone. Eric, thank you for joining us. His question is: how do you push to Slack from there with Prometheus? Ah, so how do you do alerting with Prometheus, etc. I actually just set this up today for another customer, so I can walk you through it. In this same directory I don't have it enabled, but the folder structure is there in one of my other projects. I have another project called vegasbrianc/prometheus, and in there every component around Prometheus is enabled. We have Alertmanager, and it's connected to Slack. If you go into the Alertmanager config, you can see that in here you tell every alert to go to Slack: here's the username, the channel, and the API URL from Slack that you want to use. So every time you get an event, say Traefik gets a 404, you can send it to Slack, for example, and you would write that into an alert within Prometheus. Very powerful; give that a try. We use it quite often for a lot of our customers, because when you @channel the whole channel that the system's down, people tend to notice. Yeah, @channel, the first way to get on people's nerves.
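The Alertmanager-to-Slack wiring described above looks roughly like this in an `alertmanager.yml`; the webhook URL, channel, and receiver name here are placeholders, not the values from Brian's repo:

```yaml
# alertmanager.yml (sketch); the Slack webhook URL and channel are placeholders
global:
  slack_api_url: 'https://hooks.slack.com/services/T000/B000/XXXX'

route:
  receiver: 'slack-notifications'

receivers:
  - name: 'slack-notifications'
    slack_configs:
      - channel: '#alerts'
        send_resolved: true
        text: '{{ .CommonAnnotations.summary }}'
```

Prometheus then forwards firing alert rules to this Alertmanager, which formats and posts them to the Slack channel.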
Right, that's beautiful. Was there anything more you wanted to say about that, Brian? No, I mean, I have some other monitoring projects on my GitHub repo, so please check it out; there's also some Docker training and Kubernetes training out there. If you go to our 56K.Cloud repo, there's actually a logging and monitoring workshop that I did at DockerCon. So this is the DockerCon logging and monitoring workshop that I gave in San Francisco, and this is actually using Swarm and Traefik: we're provisioning Traefik, we're introducing errors, we're looking at how to fix the errors, and all the content is there in case you're interested in continuing with logging and monitoring. I'll share that now, thanks; it just makes it easy peasy for peeps. All right, and please follow me on Twitter, shoot me questions, email, however you feel comfortable; I'm always happy to encourage the community to use these tools, because I'm really passionate about it. Mike is also available; follow him on Twitter, he's also quite interesting and has some cool projects running. And yeah, I'll hand it back to you, Patricia. Well, I just had one more question come in, which is: when do you use alerts through Prometheus versus in Grafana? Oh, that's a good one, that's a very good one. So Grafana has its own alerting, and I don't have a good example here, but I prefer to do my alerting in Prometheus because it's more powerful. The reason being, for example, if the backend goes down or the response time goes up, you can actually measure the response time; let's say if it hits 10 milliseconds, you restart the container, as an example. You have all these integrations directly inside of Prometheus, and that's how you really write some powerful rules; you can actually auto-scale your environment with these rules. I mean, that's how Google manages their environments.
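An alert of the kind Brian describes, firing on Traefik 404s or slow responses, would be written as Prometheus alerting rules along these lines. The metric names assume Traefik 1.x with its Prometheus metrics endpoint enabled, and the thresholds are purely illustrative:

```yaml
# traefik-alerts.yml (sketch); thresholds are illustrative
groups:
  - name: traefik
    rules:
      - alert: TraefikHigh404Rate
        expr: sum(rate(traefik_backend_requests_total{code="404"}[5m])) by (backend) > 1
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: 'Backend {{ $labels.backend }} is serving 404s; it may be offline'
      - alert: TraefikSlowResponses
        # 95th percentile response time above 10ms, per the restart example above
        expr: >
          histogram_quantile(0.95,
            sum(rate(traefik_backend_request_duration_seconds_bucket[5m])) by (le, backend)
          ) > 0.01
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: 'Backend {{ $labels.backend }} p95 latency is above 10ms'
```

Once a rule fires, Alertmanager takes over routing it to Slack, email, or whatever receivers are configured.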
On the other side, if you look at Grafana alerts, you can combine the alerts you're receiving from Prometheus in Grafana: you can add a channel, you can say send emails from Grafana. And I think I have an example; I always have an example, this is my playground. Let's find a good status... so here are some Grafana alerts, and what's handy is that these are actually Prometheus alerts combined with Grafana alerts. I can click on the CPU usage alert and it takes me right to the graph, and it tells me when this happened, which is quite handy. So combining the two is also cool. This one is set really low, so I won't show you, but usually you see a line in the graph, so you can see here is when the error happened and when it started kicking off; you can see exactly when things happened. So yes, I would venture to say I use both: I use Prometheus alerting quite heavily, as well as Grafana, for the best of both worlds. Wrapping up, unless you had more to say or more questions come in right now, I'm going to wrap it. Oh, okay, well, do you have time for another question, Brian? Sure. Okay: we use Stackdriver on my team; would you advise the switch to Prometheus and Grafana? I don't know Stackdriver, so I'd have to look it up, but of course, I like Grafana and Prometheus, so I try to start with these tools first, and then if it doesn't work, go from there. So I would recommend Prometheus and Grafana; there's a lot of great documentation out there, some great starters, for example the one I showed you earlier. If you just want to get up and running with Prometheus, it's so easy: you just click this button and you can have your whole entire infrastructure running with a click of a button. I'll drop that in the chat. Thanks for the question.
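For reference, the Grafana notification channels mentioned above can also be provisioned from a file instead of clicked together in the UI. A sketch using Grafana's legacy alert-notification provisioning (file path and webhook URL are placeholders):

```yaml
# provisioning/notifiers/slack.yml (sketch); webhook URL is a placeholder
notifiers:
  - name: slack-alerts
    type: slack
    uid: slack-1
    org_id: 1
    is_default: true
    settings:
      url: 'https://hooks.slack.com/services/T000/B000/XXXX'
```

This keeps Grafana-side alert channels in version control alongside the Prometheus and Alertmanager configs.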
Okay, so here's what happens now: we will answer the Q&A in a gist, I will create the YouTube video for you, and then I will mail you the recording. I also put the community forum link in the chat box so that you can head over there, because I'll also place the YouTube video there. And just a big note of thanks to Brian and his colleague for providing the background here from Switzerland; the photos were, and are, amazing, so thank you very much. And thank you for attending, everyone who registered and came today. If you have any questions, like Brian has said, let us know; find us, we're very public-facing, and we'll be at conferences and summits. We're waiting for Mike as well... okay, thanks so much, I'm ready to roll. Are we good to go? Fist bump. Okay, have a good day, good night everyone, bye.
Info
Channel: Traefik Labs
Views: 6,038
Keywords: docker, kubernetes, microservices, golang, traefik, prometheus, grafana
Id: 3q-K4JDcH6I
Length: 47min 33sec (2853 seconds)
Published: Fri Jun 28 2019