Introduction to Network Source of Truth

Captions
As soon as my tea is done, we'll start. All right, hi Jathan, how are you, sir? Hey! I'm going to put my fancy glitter filter on. Fancy headphones too, man, I like that. Oh yeah, the filter looks a lot more professional. A lot of people don't get to see the Transformers. Let's hope they do; let's hope we have a reason to bring them out.

While we're waiting for everybody to join, I'm going to sedate my dogs. We found this amazing chamomile, L-tryptophan, and CBD peanut butter for dogs. Oh wow. It's like my secret weapon against any surprise Amazon delivery drivers that show up. I'm moving some stuff around; this is going to be a bigger call. Hopefully you guys can't hear them smacking their lips; it's a little obnoxious. That's literally the worst sound in the world for me.

Here we go. All right, we've got a lot of people coming in here. Oh, we've got some royalty in the crowd, I see. Tim! Hi, Tim. Tim from Network to Code is on. Oh my goodness. Hey Calvin, how are you doing? Doing well, doing well. Really enjoy all of your content. Oh, awesome, appreciate it. Good to meet you. Yeah, same here.

Okay, two more minutes and we'll get this kick-started. Tim, do you have any Transform... oh my gosh, look at what's in the background! Yeah, oh my goodness, that's amazing: LEGO and tequila. Well, you're definitely going to like the context of what we're presenting today; there's going to be a little Star Wars in there. Oh my goodness, how long did it take to put the Star Destroyer together? That was a good amount of work, and then I had to order the whole shelving system to hold it, because that thing just takes up... I mean, look at it. It's like having a kid: you're like, now what do I do with it, you know? [Laughter] Oh my god, did it come with Grand Moff Tarkin or anything like that? No, nothing; it's all hollow inside. So it's just like me, just like all of us: all that's in there is gas.

All right, we've got some chat. Okay, we're four minutes past the hour. I'm going to keep clicking the admit button as our friends come in, but let me first start off and thank everybody for joining us again. This is our May iteration of the monthly automation workshop. If you're new here, just know that everything we focus on is completely vendor-agnostic; open standards should be directly portable to whatever is in your environment. We've spent the last couple of sessions focusing a lot on the technical automation components: we talked a little bit about Ansible, we talked about automation frameworks. We haven't gotten to a deep dive on Python just yet, but throughout these last few sessions one constant theme has come up in each of them, and it's me referencing this concept of a network source of truth. So I wanted to bring in the experts around this type of technology, around this construct of a network source of truth, to share their insights on what they see this as. Hopefully, when we finish up today, you'll have a really great understanding of the role a network source of truth plays within your environment, but you'll also have some practical experience spinning up a network source of truth in your own environment, or at least get to play with the on-demand demo environment that's available to us. With that, I'm going to go ahead and start my screen share. And by the way, everybody, as always, these sessions are recorded.
We punt them up to my personal YouTube channel afterwards. Once marketing finds out what I'm doing, then it'll actually become an official thing and we'll have it on the official Juniper website, but for right now we're going to maintain it as it is.

All right, so again, thanks everybody. This is our monthly automation workshop. My name is Calvin Remsburg; I am a Senior Sales Engineer here at Juniper Networks, and I'm really excited to have one of my favorite people within this industry, Jathan, joining us from Network to Code. He is a legend within this industry — not to make you blush, Jathan — and for a very short amount of time he was also my direct reporting manager when I was over at Network to Code, before I came home here to Juniper Networks. So I'm really excited to have the team here.

What you should expect from today's presentation: again, if you haven't seen any of my previous workshops, I come from a background of live-streaming automation and network topology building for data centers, that type of thing, so my delivery cadence is a little bit fast, especially for a Texan. But don't let that stop you from interrupting me if you have a question. If you feel like the Zoom chat or the raise-hand function is getting ignored, please feel free to unmute yourself and sound off. I'm perfectly well accustomed to those types of things, because questions are welcome at any time, so feel empowered.

Again, that's me, that's my face, but the real special one here is Jathan. You have our Twitter handles in case you ever want to get in touch with us. These are our roles: I work in a sales capacity, and Jathan is a Managing Director, but I believe he's got a little more technical nuance than that kind of title lets on.

Let's talk about our agenda. Of course we're going to do our introductions, which we actually just wrapped up, so technically speaking we're already twenty percent through the presentation today. That's not true, don't worry. In the next section we're going to jump in and talk about this concept of a network source of truth. I want everyone to have a common frame of reference whenever we start to talk about this technology, so everyone's on the same plane before we get into any kind of product conversation, before we get into any of the applicability to network automation, and definitely before we get into the live demonstrations. It's important, at least for me, that in all of our workshops we focus on the basics, but go really deep on those basics, and we're going to do the same thing here with our conversation around a network source of truth. After that, we'll pivot into a conversation about a very new but very stable and reliable tool called Nautobot, developed there at Network to Code. Then we'll spend the second half of our presentation focusing on several different live automation demonstrations: we'll show you how to instantiate Nautobot in a couple of different ways — we'll do a Docker install and then a more production-ready install — we'll show you how to add plugins into the ecosystem, and some of the really unique chat operations that can be developed within this tool. Finally, we'll wrap up and talk about where to go next: how do you get your hands on this product, and how do you get started in your own environment?
All right, does anyone have any questions before we go ahead and begin? Doesn't sound like it.

A little bit about Network to Code. Before I joined Juniper I came from Network to Code, so I feel like I still have enough knowledge about the company to share a little background on why I felt it was appropriate to bring them into this conversation. Network to Code is a services-based company; they focus primarily on delivering professional services as well as educational content for customers. They were founded back in 2014. Like I say on the slide, they are laser-focused on building vendor- and tool-agnostic network automation, they have a very high reputation within the open source community, and they are the primary maintainer of the product we'll be showcasing a little later today, which is called Nautobot.

So let's talk about network source of truth. This is going to be the bulk of our presentation content right here. There's not a whole lot to flesh out, but let's make sure that everybody is on the same page whenever we talk about a network source of truth, so they understand not only the concepts of the tool, but the actual value proposition somebody would look at when they evaluate a tool like Nautobot: what kind of problems are they trying to solve?

Now, if you've attended any of my workshops before, a real common theme is that I like to use analogies wherever possible, and I'm kind of a geek — I'm a big Star Wars and Star Trek fan; yes, I like them both. So to talk about this construct of a network source of truth and where it actually fits into our environment, I'm going to use a reference point, because I love LEGO and I love Star Wars. Let's pretend that we have the coolest job in the world and we are tasked with building the design for the LEGO Millennium Falcon. Very appropriate for Tim, seeing that Star Destroyer in the background — I promise that wasn't scripted. Let's explain the construct behind a network source of truth, or a source of truth, by putting ourselves in this frame: our task is to design this Millennium Falcon for LEGO.

What do you think would be some of the requirements for us to get from nothing to the finish line? One of the very first things we'll want to do is declare our requirements: what is it that we're actually trying to do? At a high level, yes, we want to build the Millennium Falcon, but what does the Millennium Falcon actually look like? What size and dimensions are we looking to build, et cetera? Then we need to compose some kind of design. Okay, we understand the Millennium Falcon looks like a Corellian freighter, because it is one — some people have called it a hunk of junk — but we still need to have this design crystallized, and within that design we need precise documentation for every single nuance, for every single component. Every part needs to be firmly documented within here. (Sorry, I'm still admitting people.) Within that design, that documentation is critical.
Have you ever tried to build something from LEGO where the documentation was wrong? Probably not — I hope not — because that would be a deal stopper. It's kind of like building a piece of IKEA furniture with missing components: it's going to ruin your day. So the documentation, and the collection of all that information, has to be absolutely pristine. Once we have the documentation, the business requirements, and the design, we need to actually start building out the components, putting all the pieces together, and making sure everything works. Finally, we'll get a sign-off from the business unit that says: yes, congratulations, you have designed our Millennium Falcon for LEGO, and we'll be ready to ship the product out into the market.

Now I want to focus on what part a source of truth really plays within this type of workflow. That is the documentation aspect of it, where we have to get every single nuance of our design crystallized within that documentation. We're talking about things like what kinds of pieces we have, the order of operations, and how stages engage — especially if your design requires multiple stages to come together as one. All those things need to be documented firmly.

Does this sound familiar to any of my network engineer friends? The construct might not, because we're talking about Star Wars and having our dream job, but what if we looked at this through the lens of the networking space? All these bullet points still apply whenever we have to develop a new network for our environment. This could be a branch site, a data center, a firewall implementation, but this order of operations is typically consistent with our day-to-day work. We have business requirements we have to understand — we don't just build networks for the fun of networks, although that would be a killer job; we have to deliver some kind of value to the organization we represent. So we need those business-level requirements ahead of time, and those business-level requirements define the actual design we put forth. And it's all the individual components and aspects of that design that ultimately need to be documented: things like my prefixes, my VLANs, what routing protocol we're going to use to build these adjacencies, is there going to be wireless, is there PoE. All those individual components have to be documented so that everyone on the team has a firm understanding of everything that was put together to deliver this network. Then there's definitely the assembly aspect, where we're actually configuring devices — we're connecting fiber optics, we're getting link lights, everything is happy, and we're building our adjacencies. Finally, we'd have some kind of review where we certify the network design to say yes, this is good to go, so the business owners can be happy with the products, or the networks, we're delivering for them.

Taking this back into the Star Wars world, let's think about that source-of-truth component, that documentation aspect I said was so critical. What types of things would we have to document to be able to deliver this design? Well, we'd have to identify what types of LEGO pieces
we're going to assemble. All my LEGO fans out there know that there are thousands and thousands of different shapes of LEGO components, so we need to document what types of LEGO pieces we'd be assembling together. At the same time, we also need to know the quantity: how many of these individual components, how many of these radar dishes am I going to need on my Millennium Falcon? How many glass windows, how many guns in the back, all those types of cool things. But we also need to know a little more about the components themselves: what colors are we expecting, what shapes are we trying to deliver, what are the sizes of these individual components, and are there any decals or stickers we need as part of our design? The same can be said about networking: there's not just one type of data we need to build a network; in fact, there are hundreds of different types of data. So keep that context between the two: we're talking about Star Wars right now, but these ideas are directly portable into what we're talking about from the networking aspect.

So what function would a source of truth provide for somebody building this Millennium Falcon? This would be like the documentation you get when you open up your package — your guide to getting from zero to hero with the product you've just purchased. We need that guide to be the formal declaration of exactly what the components are and what all the attributes of those components are. There are additional things like order of operations and processes as well, but that document needs to be the sole authoritative source of reference for the exact intended state of the design of our product. We need to trust that documentation, because if not, everything is eventually going to fail. We also want the scope of that documentation to be limited to what we're trying to compose — in this case, we're building the Millennium Falcon; we're not building the Enterprise from Star Trek, so I don't need that type of information in my environment. I only want what's in scope for our specific product.

Let's take these concepts — we understand the value and the importance of capturing these types of data — and bring them back into the construct of networking, and that will hopefully explain the role a network source of truth plays within the environment. The network source of truth must be the authoritative source of reference for our network designs: again, all the nuances, all the components we're using, both physical and virtual, on our devices, both configuration and state information as well. The network source of truth needs to be the single point that everyone in the environment can point to and say: that's exactly how we want this network to be designed. It's going to perform this function by capturing and storing all the various types of data we've been talking about. And the most important thing is that the network source of truth must represent the network's desired state, not the discovered state. This is a real point of contention, and I'll share a personal anecdote about this last bullet point and why it's so important.
For myself, I started playing with network source of truth concepts about five or six years ago, when I started looking at a product called NetBox, and I thought the tool was great: it was a place where I could store all of my network configurations, my designs, my devices, my inventory, et cetera. But I didn't want to input the data myself, so I thought, hey, why don't I just export all the information from my SolarWinds environment and dump it into NetBox? At the surface level it sounded like a really good idea: that way, when I build my automation, I can just point to NetBox, which has an API I can interface with, and get access to all the data in my environment. The problem is that my network, admittedly, was kind of a mess in certain places. We were a small team, but we managed thousands of devices, and things just built up over the years: misconfigurations, bad data on devices, and incorrect interface descriptions were more the norm than not. What I ultimately ended up with was a situation where I took bad data from my network environment and imported it into my network source of truth. I had just replicated my problem into a new, different type of product. At the end of the day it didn't provide any value to our organization; we had to scrap the whole thing and start over from vanilla.

With that being said, I want to bring Jathan into the conversation and ask him: Jathan, when you're engaging customers, what are some of the familiar types of network sources of truth that you see? Are there any other products out there today that you might reference and say, yes, this is something you could consider a network source of truth in your environment?

That's a trick question. There are a bunch, and there are a lot of commercial products out there attempting to do this. One that stands out is Infoblox: they do a pretty good job of having some kind of inventory that you can populate, and integrations with other systems; they also do IP address management, DNS, and DHCP. Another one a lot of people don't think about as a source of truth is actually ServiceNow. ServiceNow has a CMDB component and has all kinds of records for assets and hardware; it even has an asset discovery component that you can integrate with ServiceNow. And there's a bunch of stuff in open source — obviously the big ones like NetBox, and what I maintained years ago, called Network Source of Truth, aptly enough. I think there are also a lot of open source asset inventory databases out there, and there's a lot of crossover between these concepts, because at the end of the day, when you're talking about a source of truth, what you're really talking about is inventory: it's about tracking what you have. Does that answer the question?

No, it's great, yeah, absolutely. In fact, I've included some of the ones I've used personally in the past when I was a customer. And not all of these fit neatly: if you went to Infoblox or SolarWinds and said, hey, what's your network source of truth tool, they'd probably say, what are you talking about, get out of here. In the case of Infoblox, they provide a DDI service — DNS, DHCP, and IPAM information — and SolarWinds does the
same: there's an IPAM component as well, but its primary focus is monitoring. Some of the other ones: ServiceNow was mentioned; that's a really popular one, although you could make the argument that ServiceNow approaches it from a non-network perspective — when ServiceNow produces a product, they aren't thinking of us network engineers; they're typically thinking of the businesses they're actually delivering products to. But an extremely common one that I've seen in almost every customer environment — and don't be bashful, don't be ashamed if this is you as well — is Microsoft Excel. Managing networks through a Microsoft Excel spreadsheet is extremely common. Again, I'm not calling you out; almost every customer does this in some capacity, some more than others. Either IP addresses are managed through an Excel spreadsheet or workbook, or VLANs might be captured inside of it; prefix information and that sort of stuff is usually found in some fashion within an Excel spreadsheet.

In some cases, customers have found there are limitations to working inside the context of an Excel spreadsheet, and limitations with some of the off-the-shelf products targeting different markets — things like SolarWinds for monitoring, Infoblox for DDI, and OpenNMS likewise — and so they'll look to building their own database. Maybe they have a Postgres database or a MySQL database, and maybe they've got a web application they use to insert data into that database. Another one we're starting to see crop up, surprisingly enough, is GitHub: we're starting to see customers trying to transform how they maintain their network, from a configuration and operations perspective, by storing some of this data inside of Git, and they'll use traditional software development concepts to store this data so it's accessible. And obviously NetBox is the one I talked about earlier from my own experience.

So there are a lot of tools that might be in your environment that at some level play as a source of truth for some type of data, but there haven't been many products out there specifically targeting us network engineers. Jathan mentioned that when he worked over at Dropbox, his team created a network-engineering-focused tool called Network Source of Truth, and I'm going to go ahead and credit him for creating the verbiage that we use today, which is kind of synonymous across the industry. Another really common one, again, is NetBox: they're laser-focused on networking constructs and on maintaining all those types of documentation that we have.

So, Jathan, we've talked about some of the tools out there that might be considered a source of truth, but what are some of the common types of data that a customer would use a network source of truth to store and maintain?

That's a great question. Well, I had forgotten to mention spreadsheets, and since we talked about that: when we were developing Network Source of Truth, when I started at Dropbox, we were a
spreadsheet engineering company, even with a high software focus. It was like, well, where's your IPAM? Oh, it's this spreadsheet — it was literally called ipam.xls. So we could start there: obviously IP addresses are one of the most common ones. You have to have your IP address database, whether it's a text file, a spreadsheet (which is very common), or a bespoke system like Infoblox or NetBox or Nautobot to do IPAM for you. But it's all the other things too. If you think about what it takes to build a configuration for even a single device — not even a whole network topology — it's mapping the things that are in the real world to the logical world. It's your devices, the interfaces on your devices, the IP addresses, the site locations, the computer rooms, the facilities, the floors, the rows, the racks, rack elevations, the cables, the circuits you're renting or leasing from circuit providers, all your optical hardware, chassis, line cards. All of this stuff needs to be tracked and maintained. Having all of these elements inside a source of truth that allows you to hierarchically define objects as they would be in the real world, and then also define relationships between those objects as they would be in the real world, is really the key fundamental basis of a source of truth, and a network source of truth especially.

That's awesome, and all those data types you brought up — we didn't script this, but this is perfectly in line with exactly what I've seen. From my personal experience, the very first thing I wanted to capture was my inventory. When you start building network automation scripts, one of the very first challenges you run into is that your Python script, or whatever it is you're writing, can perform its task on a networking device only if you declare what that networking device is, either by an IP or a hostname. You have to instruct the script: go perform this task on this device. Now, if you're a network engineer and you've got five networking devices in your environment — we probably don't need a job, to be honest, with just five networking devices — at a small scale, a static list of devices isn't that big of a challenge. But when you start working with dozens or hundreds, thousands or tens of thousands of networking devices, all with unique hostnames and IP addresses and coming from different vendors, the very first thing I needed a network source of truth to tackle was: let me have a place where I can store all of my devices and have a reference to their IP addresses and hostnames, as well as information like what version of code each device is currently running, so that when I build my automation I can make some kind of intelligent decision: I know this device is a Cisco device, so I need to do some SSH screen scraping to accomplish my job, or this is a Juniper device, so I'm going to use the NETCONF API to program the device. So inventory, for me, was the very first thing I wanted to solve.
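As a rough illustration of that inventory use case — this is a minimal sketch, not part of the workshop materials; the URL and token are placeholders, and it assumes each device in Nautobot has a platform assigned — you could pull devices with the pynautobot client and branch on platform to pick a connection method:

```python
import pynautobot

# Placeholder URL and API token for a Nautobot instance.
nb = pynautobot.api("https://nautobot.example.com", token="0123456789abcdef")

# Pull every device from the source of truth instead of a static list.
for device in nb.dcim.devices.all():
    platform = str(device.platform) if device.platform else "unknown"
    # Decide how the automation should talk to the device.
    if platform.lower().startswith("juniper"):
        print(f"{device.name}: use the NETCONF API")
    elif platform.lower().startswith("cisco"):
        print(f"{device.name}: fall back to SSH screen scraping")
    else:
        print(f"{device.name}: platform {platform!r} needs a handler")
```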
Outside of that, IP address management is a very common theme; some of the network source of truth tools out there today actually started primarily as IP address management — some kind of remote system we can reference to get information regarding prefixes, VLANs, VRFs, and that sort of information. Circuits is another big one. But it doesn't stop there: like Jathan was mentioning, there are rack elevations, there's power consumption within your data center — these are really important things to capture — as well as constructs like VLANs and sites and virtual machines. It really just doesn't stop. Ultimately, as network engineers we've got a lot of things to juggle and a lot of things we need to document, so having a single place that we can reference and trust is the big part of this; that single place provides tremendous value to the organization from a day-to-day operations perspective.

So, coming back to you: it's a little convoluted because of the feature disparities between some of the products we've talked about, but when you look at the pure-play network source of truth tools out there, what are some of the things that are not fulfilled by these tools?

Oh my gosh, there are so many. Before we move on to answer this question, I want to chime in real quick on the previous slide, on the types of data that can be managed within a network. One of the guiding principles from the NetBox team — from which we forked Nautobot — is a really great philosophy: model the real world. That's an important thing to keep in mind. I had a question from somebody recently: hey, is it possible to assign an IP address directly to a device object in Nautobot? And I was like, well, think about what you just asked me. Are you able to assign an IP address to a network device without an interface in the real world? The answer is no; therefore the answer is no, you can't do that in the source of truth either.

So I think that takes us to the next question, of what features are not fulfilled by a source of truth. So many. Think about all the protocols — protocols is a big one. We're trying to generate configurations; we want to be able to generate a config from a source of truth that we can then push out to a piece of hardware and have it do what we want it to do. Think about the complicated data center configurations out there, things like GSLB, HSRP, VRRP — the list goes on. We can't feasibly model every single one of these protocols, so there's always going to have to be some kind of glue code from your source of truth into your config generation system. Another one that comes up is circuit maintenance, circuit monitoring, and troubleshooting. Certainly there are vendor platforms and products out there that do that. Maybe you could have some kind of modeling of your relationship with the circuit provider, the circuit IDs, and the circuits themselves, but having maintenance notifications or any kind of correlation like that is going to be very complicated. Largely, sources of truth try to model the common case, because it's very hard to model everything.
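To make that "glue code" idea concrete, here is a small hypothetical sketch — the device name and template are invented for illustration — of the kind of layer that sits between the source of truth and a config push tool: pull structured data out of Nautobot and render a device config with a Jinja2 template.

```python
import pynautobot
from jinja2 import Template

nb = pynautobot.api("https://nautobot.example.com", token="0123456789abcdef")

# A toy template; real config generation would live in a proper template tree.
TEMPLATE = Template(
    "hostname {{ name }}\n"
    "{% for iface in interfaces %}"
    "interface {{ iface.name }}\n"
    " description {{ iface.description or 'unused' }}\n"
    "{% endfor %}"
)

device = nb.dcim.devices.get(name="dal-fw0")  # hypothetical device name
interfaces = nb.dcim.interfaces.filter(device_id=device.id)
print(TEMPLATE.render(name=device.name, interfaces=interfaces))
```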
So think about the source of truth as the starting point for automation in your environment, think about what the commonalities are, and focus on those types of things.

Yeah, that makes a lot of sense. Some of the things that came to mind for me were more service-oriented, like ticket management. It's extremely common in the operations world to live and die inside your ticket management system; this network source of truth construct is not built for that, thankfully, and there are better tools out there, like ServiceNow, that can handle some of that. Also, at the network protocol level, like Jathan mentioned, a network source of truth is not your RADIUS server, it's not your 802.1X server, and thankfully it's not your DHCP or DNS server. It's not going to be handing out those types of network reservations or resources the way some of the more common tools in the environment do. So just because we're making the case that a network source of truth should be authoritative for the intended state of the environment doesn't mean it's an all-encompassing suite of products handling all the functionality you have in your environment today. Even from an IPAM perspective: we mentioned two slides back that IPAM is definitely captured within a network source of truth, but maybe not all the functionality of an IPAM like you'd get in Infoblox, where you can set DHCP reservations and lease times and those types of things. This is not a tool for that. This is a tool that's going to help you graduate out of managing all of your documentation through spreadsheets, or get you out of the world of managing your data in six different pieces of software. It's more of a documentation and automation reference point than a delivery vehicle for the network services we see within the environment today.

Jathan, what are we doing here? This is a network automation workshop; why are we talking about a network source of truth? Where's the bridge? How does this help us in the network automation space?

Our source of truth is the bridge. For Network to Code, or consulting companies generally, that's the first thing we always ask for when we go into a customer environment: how are you doing inventory? What's your source of truth? How are you driving automation? Are you doing configuration management? All of those things are interrelated. And it's like, oh, your spreadsheet's your source of truth — a lot of people don't think about that. They say, we don't have one. You have to have something; you're tracking it somehow, some way, even if it's a Notepad file on your desktop. And I think that's really the key there, why this is important: you need something that you can trust. There's a joke we've made for years: if you don't have a source of truth, you have a source of lies. The whole idea about having a source of truth is trustworthiness — it's in the name. If you can't trust the data coming from the system of record, then how can you trust how it drives that automation forward? Reliability is key: trustworthiness and reliability. I also tend to say that a source of
truth is just an inventory database, but it's more than that, because it's strongly typed. Think about when you go to create a device: okay, what's the hostname, what interfaces does it have, what manufacturer is it, what vendor, where is it physically located? All of this is metadata about your hardware at a minimum, but then it's also metadata about the logical components of how you use that hardware in practice. The more constraints you have on entering that data, making sure humans can't fat-finger entries, the better. One good example: say you're in a ServiceNow shop, which many companies are, and you're using the ServiceNow CMDB component, where your supply chain is tied into ServiceNow, so new hardware gets purchased and makes its way into the ServiceNow CMDB. Well, a lot of companies aren't doing those integrations; I've encountered situations where it's, oh yeah, we have ServiceNow, but we're just putting everything in manually — or, we have our monitoring system, a modern system like SolarWinds, but we also just manually put everything into SolarWinds. So it's like, all right, there's already a problem: you've got this source of truth that you're not using, and that's not only wasting time, it's potentially costing you money, because if you cause an outage due to a typo, you could probably calculate a dollar value on that, depending on the line of business and how long the outage lasted. Hands down, human error is the number one cause of revenue-impacting outages across the industry; there's plenty of research out there to back up that statement. So it comes back to why a source of truth is important: you need that reliability in order to trust robots to do the work for you.

Yeah, that was the selling point for me. When I was learning automation a few years ago, I didn't have any kind of coding experience; I was just your traditional network engineer. So I'm learning Python, building up my skill set, learning Ansible, figuring out how that plays into the environment as well. But there's a difference when you execute an automation script: there's a sweaty-palms moment, your first couple of years doing it, where you're thinking, is this going to break something? Think about all the nuances, all the data we put inside these automation scripts. We need to know that dallas-firewall0 is at IP address 10.0.0.1, and that it hasn't changed to 192.168.10.1 with another device now assuming 10.0.0.1. It doesn't matter how pristine and perfect your automation script is; if it's referencing the wrong device, you're going to have a bad time. So having something in your environment that you can trust, that other teammates can contribute to — and some of the points Jathan brought up around modeling networking: making sure we're entering a valid IPv4 address and a valid IPv6 address, that there isn't the letter "s" inside an IPv6 address or an exclamation point inside my IPv4 address — having a system that can provide a data model for all of our networking nuances, all these things we're capturing, means there's a high level of trust you can build into these systems.
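That "strongly typed" point is easy to demonstrate. As a minimal sketch — standard library only, not Nautobot's actual validation code — this is the kind of check a source of truth runs before an address is ever allowed into the database:

```python
import ipaddress

candidates = ["10.0.0.1", "192.168.10.1!", "2001:db8::s1", "2001:db8::1"]

for value in candidates:
    try:
        # Raises ValueError for anything that is not a real IPv4/IPv6 address,
        # so typos like "!" or a stray "s" never make it into the database.
        addr = ipaddress.ip_address(value)
        print(f"accepted {addr} (IPv{addr.version})")
    except ValueError:
        print(f"rejected {value!r}: not a valid IP address")
```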
So you can safely build and execute your automation without getting clammy palms every single time. Trust is a really important thing.

Jathan, just one other thing I want to cover before we get into the conversation about Nautobot, and it's an example that piggybacks off that last point: how does a network source of truth play into an existing network automation environment? In our last session, if you weren't here, we did a real deep dive on Ansible for network engineers, and anyone who was in that presentation would probably recognize this slide I put together about what an automation framework actually provides within your organization: this construct of managing inventory, of having a plugin ecosystem, of being able to execute your scripts behind the scenes, and of being able not only to display the results of those tasks but also to send them to an external service. If you think about where a network source of truth could play in here, the very first thing I like to talk about is delegating the responsibility of the inventory outside of the automation framework and putting it into your network source of truth. In the context of Ansible, you would normally have an inventory file, either in INI format or in YAML, where you literally declare: this is my device, this is its IP address, it belongs to this group — and then you copy, paste, and repeat ten thousand times until you've documented all your devices. That's a headache, and if devices go away or things change within your environment, you're manually editing configuration text files. You don't want to be in that business. Instead, you can take your automation framework — in this case Ansible — and say: hey, go reference your network source of truth. Ansible, I want you to talk to Nautobot, get a list of devices, get their IPs, build groupings of these devices based on the role they play within the environment, and then execute this task against the network.

And then, depending on the results, you might actually want to ship those results back over to the network source of truth as well. There's this construct of what we call a callback plugin, where you say: take all of my tasks, and for all the devices that were unreachable or unresponsive, go ahead and update the network source of truth with a tag that says this device is unavailable, or our last automation task failed against this device, and here's all the information regarding it. But you can also take it in the other direction: you can use the automation framework and this plugin ecosystem to fill and populate your network source of truth. If you'd rather not manually enter all of your devices, and you want to use Ansible to get your desired state into the network source of truth, there are plugins within these automation frameworks that can help you manage your network source of truth in an automated fashion. I won't call it closed loop, but it's kind of a contract: we're automating part of the automation, or we're using the documentation source to create automation for our environment, and vice versa.
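As a sketch of that write-back idea — hypothetical names throughout, and it assumes a custom field such as last_job_status has already been defined on devices in Nautobot — a task runner could record a failure against the device record like this:

```python
import pynautobot

nb = pynautobot.api("https://nautobot.example.com", token="0123456789abcdef")

def record_failure(device_name: str, message: str) -> None:
    """Flag a device in the source of truth after a failed automation task."""
    device = nb.dcim.devices.get(name=device_name)
    if device is None:
        raise ValueError(f"{device_name} is not in the source of truth")
    # Assumes a 'last_job_status' custom field exists on the device model.
    device.custom_fields["last_job_status"] = f"failed: {message}"
    device.save()

# Example: a playbook run could not reach this (hypothetical) firewall.
record_failure("dal-fw0", "unreachable during backup job")
```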
There are several really quick wins you can get by pairing a tool like a network source of truth with some existing automation in your environment. When we get into the live demonstrations, you'll see tasks just like this being leveraged in our demo environment. But for now, I want to put a pause on the theory, the philosophy, and the value proposition of a network source of truth, and let Jathan talk about what Nautobot from Network to Code is and where it plays within this environment. So, Jathan, can you lead us on with what this product is?

I will do my best. First things first: many of you are probably familiar with NetBox, and some of you may be familiar with the fact that Nautobot is a fork of NetBox. We launched the fork almost exactly three months ago, and the big differentiator is that NetBox aims to be a source of truth, very reliably, while the key decision that made us fork NetBox into Nautobot was the vision of Nautobot being a platform with the source of truth at its core. We're going whole hog on writing plugins and extending Nautobot to do basically whatever you want, but driven by that central source-of-truth database focused on modeling network information.

You touched on something in the previous slide, this feature of having a callback plugin that, based on some context of the task executed by Ansible, can go back and feed information into the source of truth. That's a very important concept: it's this delineation between your intended state and your discovered state, and Nautobot is one hundred percent focused on being rock solid for your intended state. There's a paradigm in the industry called intended-state, or desired-state, networking, and that's what we're really hinging on here. What that means is that you define how things should look in Nautobot — how you intend for things to be; that's your intended state — versus the discovered state, which is what you might find out there: how things actually are. So you have how it should be versus how it is. Discovered state is what comes from your monitoring system: polling via SNMP, live streaming telemetry with systems like Prometheus or Telegraf, or using OpenConfig. Discovery can also just be running commands on your hardware: running operational commands, running getters, getting facts — anything that would be used to add flavor to your data. I'm probably going to keep reiterating this point about intended versus discovered as we discuss over the next hour or so, because it's really important: you need to start with a reliable base of information. Many of us might not have reliable sources of truth, but we do have, say, SolarWinds, which is actually kind of a jack of all trades — it does everything. It has inventory, it has discoverability, it can do SSH integrations, it has monitoring dashboards, it has add-ons and bolt-ons for notifications and NOC operations dashboards. It's pretty fully fledged. So that's a good example: with a system like SolarWinds, it's actually kind of
hard to tell exactly what the state is, because it's also updating things in real time. In my personal opinion, I have a very big problem with systems like that — not that SolarWinds isn't great for what it does, but it's not for me as an engineer, or as a developer either. What we find is a lot of people out there saying: I have a spreadsheet; how do I get my information into a source of truth? Because you have to start somewhere. And if you do have a monitoring system, you can flip the relationship. It's like: okay, let's clean up this data — this IPAM spreadsheet we have — clean it up, put it into Nautobot, and then delete the spreadsheet and never touch it again. Or: I've got all this information that came from SolarWinds, or it was exported from the CMDB in ServiceNow, or it came from the router.db file in my RANCID inventory from when we were fetching configs. Those are all seeds; they're all starting points. It's not that you can't put discovered data into the source of truth — and when we start talking about some of the plugins, that's what they're doing — but you need to have really clean lines, so you know: this is what we declared, and this is what was added by way of some kind of automated discovery or workflow. So I guess that's probably enough for us to move on to this question.

Yeah. It sounds like it's really important to have cultural buy-in within your organization, to say: look, we've been managing things by referencing tools like SolarWinds to maybe build a map, and we can put network devices on a map and see red, yellow, green — the current state of that device — but what we're looking for in a network source of truth tool is something that declares exactly what the desired state is for that specific device, or for the specific environment, and we want that data to be easily accessible. Believe it or not, some of these really popular tools we've been talking about, tools like SolarWinds and Infoblox, don't always have the best APIs. It's not just a checkmark on an RFP — does your software have an API? — that's not good enough, because not all APIs are created equally. Actually, none of them are; they're all different in their own context. What I saw as a customer: I was extremely frustrated with some of the tools in my environment that we were relying on for this data. I wanted to use it for my automation, but there wasn't an API to get the data from, or maybe there was an API but it was really convoluted, not well documented, and really difficult to get data out of, and that ultimately blocked me from getting started with automation. There was an example I gave earlier, where when I first stood up my network source of truth, I had this epiphany: let's just convert my SolarWinds data and import it into my NetBox environment. There wasn't an API for me to do that; I actually had to get at the database level — the master operational database of SolarWinds — in order to get the data I needed to import, and that, as a new network developer, was terrifying. That SolarWinds database is a monster, and you just don't want to be making the wrong kind of changes to it.
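On that "spreadsheet as a seed" point: here is a minimal sketch of what such an import could look like — the devices.csv file is hypothetical, and it assumes the referenced sites, device types, and roles already exist in Nautobot:

```python
import csv
import pynautobot

nb = pynautobot.api("https://nautobot.example.com", token="0123456789abcdef")

# devices.csv is hypothetical, with name,site,device_type,role columns.
with open("devices.csv", newline="") as handle:
    for row in csv.DictReader(handle):
        site = nb.dcim.sites.get(slug=row["site"])
        dtype = nb.dcim.device_types.get(slug=row["device_type"])
        role = nb.dcim.device_roles.get(slug=row["role"])
        # Create the declared (intended) record in the source of truth.
        nb.dcim.devices.create(
            name=row["name"],
            site=site.id,
            device_type=dtype.id,
            device_role=role.id,
            status="active",
        )
        print(f"seeded {row['name']}")
```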
So it's not enough to just have the data in different places, in different locations; we need access to that data — easy access — so that we can ingest it in our automation. But I want to go back to the cultural part. This was a difficult thing for my organization at the time to wrap their heads around: that we were going to bring in yet another tool that the networking team was going to manage and maintain. A lot of my teammates really didn't see the value — I was the only one doing automation at the time — and so they were asking: why are you doing this? Isn't this data in ServiceNow? Isn't this data in our DDI? Isn't this data in SolarWinds? Why are you putting it all together? So one thing I want to say: when you start going down this path of incorporating a network source of truth into your organization, you have a responsibility, if you want this to succeed, to share the value with your teammates, to help explain exactly why you're evaluating this tool and what unique capability it's going to provide in your environment. Otherwise it's going to be just yet another dead piece of software out there in the environment. There is a cultural aspect you definitely need some grounding in.

Yeah, let me jump in on that real quick. When Calvin said it's a cultural thing, that's actually huge — that's probably the most important thing about driving automation in your environment. Say you're a C-level exec and you've decided to sign off on a set budget: okay, we're going to spend money on automation this year; we just bought the new Cisco platform that will integrate all of our hardware; cool, problem solved, right? No, not at all. What about all the other things required for people to use all that automation reliably in your environment? Calvin touched on that. If you have this really amazing software platform — like Nautobot, for example; I think it's amazing — and nobody's using it, it doesn't matter. Driving adoption is one of the biggest things, and that starts from the top. Leaders have to open the floodgates for their teams, to empower those engineers and developers to automate things. That's things like change control processes, maintenance windows, and change review processes, but it's also empowering the teams: hey, you can work on this, you should work on this; in fact, I want to set a deliverable that by the end of the year we're going to have fifty percent of our workflows automated — something like that. It has to be intrinsic to the culture of the company, and if it isn't, then it's just going to fall flat. And it comes back to standards, too. The culture has to be around people using the same system, because we've all been there — I know many of you, and I was in the same boat for a long time, who love hand-jamming those VLANs at the keyboard: oh yeah, I'm going to turn this VLAN up. But what happens that one time you fat-finger it and take an entire data center down? It happens all the time — I said this earlier: human error. So it comes back to those change control processes, the culture of, to some degree, enforcing that people use the automation, or making the automation the only way
to do things. But it also has to be the path of least resistance, because people are always going to go wherever it's easiest to get their job done. I've seen it time and time again: people will circumvent draconian controls that don't make sense just to get their jobs done. They're not trying to defraud the companies they work for; they're just trying to get their jobs done. That's where automation is everybody's best friend, when it's done right. But again, it has to be both a top-down and a bottom-up approach.

Agreed. So, talking again about the product, Nautobot: starting off, I said this was a relatively new project — I think it was announced back in January, and you shipped 1.0 about a month ago — but the product itself is based on NetBox 2.10.4, an extremely stable and reliable piece of software. Can you tell us a little about the differentiation we see, some of the features Nautobot is moving forward with? You said it's a little more developer-centric; can you expand on that to help us understand exactly what you mean?

Yeah. We've added a handful of new features. Some of them were lingering features that had been desired in the community for a long time — not to say that NetBox hasn't been following suit and keeping up with what the community wants — but Nautobot, I think, is in a somewhat unique position. I'm not saying we're the only ones out there, but every single one of our customers was using NetBox, so we were aggregating a bunch of problems people were having, and one of them was the entry point: how do I even start using Nautobot for the first time? It took a lot of effort — we basically had to rewrite every single file in the entire repository to turn NetBox into Nautobot — but we made it an actual Python project that is published to the Python Package Index, so outside of the system dependencies, installing Nautobot is literally as easy as pip install nautobot. That's a really compelling story. We're also going whole hog on portability and extensibility: we're enhancing the plugin API as compared to NetBox and putting a lot more effort into making plugins a first-class citizen. In fact, we have a longer-term vision of establishing an app store of sorts, where you can go and search for apps and plugins that will extend Nautobot for all kinds of different stuff. Another key one: we added custom fields to all objects, so all primary objects are now able to support custom fields. I think that was also something in the pipeline for NetBox, but it's kind of hard to always keep track, because we're trying to go our own way. Longer term, I don't think there will necessarily be a need to compare to NetBox, but this is also brand new. Like I said earlier, think of Nautobot as a platform for your infrastructure rather than just the source of truth. What that means is extending it to do things like tracking your cloud compute, managing your BGP peers with their ASNs, doing VXLAN or EVPN. Those aren't in the core,
because, as i was saying before, it's relatively unfeasible to model every single thing that could possibly be done on a network. but various customers and various environments have different needs, so the door is open for anyone and everyone to write plugins to serve specific needs like that. furthermore, we've already published a handful of plugins. i think you had mentioned, calvin, things like chatops, things like configuration management: we have a formalized chatops plugin that we're providing, we have a golden configuration plugin that we built, and data validation. when i said earlier that a source of truth should be a strongly typed database, i meant a device is a certain specific type, an ip address is a very specific type of data; you can't just go put words into an ip address field, it's going to barf it back at you. but more importantly, there's a lot of need out there for custom validation rules, so we also added a data validation plugin model: you can write a plugin that provides its own custom validators, to say, i always want to make sure that every hostname on every device in my network starts with 'calvin', right? you could do that with the validation plugin. i don't know how useful that would be to you, but the idea is that for any attribute on any object inside the database, you can come up with your own custom rules, which is actually pretty cool; i'll sketch one of those validators right after this. the next big feature is user-defined relationships. this is one people have been talking about for years. currently in the data model, for example, if you want to define an asn you have to do it on a site; that's probably a holdover from when netbox was originally developed at digitalocean, and maybe that's how digitalocean manages their asns, but that's not how every organization manages theirs. same with vlans: there's no direct relationship between a device and a vlan, and it's actually very difficult to say, show me all the vlans attached to this device. it's possible, but it's not straightforward. so you could, for example, define a custom relationship that says i want vlans to be directly related to devices; i can create a relationship between devices and vlans. it's a little advanced, but it's totally possible, and the relationships also end up with their own api endpoints you can call, which is pretty cool. and then the last big feature we added was graphql. think about it: every network is a graph, right? if you're modeling a device and its interfaces, and the cable those interfaces connect to, and then the interface and the device on the remote end, that's basically a circuit. but what if i wanted to fetch all of the interfaces for a specific site, or for each interface i wanted to grab a specific set of metadata? you'd have to make a lot of round-trip calls to the rest api, and that's really where the graphql api shines. it has what's called a query schema, so you define at query time what it is that you want from the api, and the api gives it back to you exactly the way you asked for it. it's incredibly cool and very powerful, and we're actually using that functionality in one of our plugins
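as promised, here's a rough sketch of that custom-validator idea, based on my reading of the nautobot 1.x plugin docs; treat the class and method names as illustrative and verify against your release:

```python
# illustrative custom validator, per the nautobot 1.x plugin api (verify names
# against your release); it rejects devices whose names break a naming standard
from nautobot.extras.plugins import CustomValidator

class DeviceHostnameValidator(CustomValidator):
    """runs whenever a dcim device is validated/saved."""

    model = "dcim.device"

    def clean(self):
        device = self.context["object"]  # the instance being validated
        if device.name and not device.name.startswith("calvin"):
            self.validation_error({"name": "hostnames must start with 'calvin'"})

# a plugin exports its validators via a module-level list like this
custom_validators = [DeviceHostnameValidator]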
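and to make the round-trip point concrete, a hypothetical single graphql call against nautobot's /api/graphql/ endpoint might look like this; the field names inside the query are illustrative rather than a guaranteed schema, and the url and token are placeholders:

```python
# one http request replaces what would be many rest round trips
import requests

QUERY = """
{
  devices(site: "dc-east") {
    name
    interfaces {
      name
    }
  }
}
"""

resp = requests.post(
    "https://nautobot.example.com/api/graphql/",  # placeholder host
    json={"query": QUERY},
    headers={"Authorization": "Token 0123456789abcdef"},  # placeholder token
)
print(resp.json())  # the response is shaped exactly like the query you wrote
```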
yeah, on the graphql aspect, i just have to make the statement: we are very big fans of graphql here at juniper, and it's kind of interesting, because i believe the person who built your graphql api is the same person who built the graphql api for apstra, so it's funny to see these things come together. take an instance like this: let's say we've got a cve level 10, a critical vulnerability in our environment, and it's directly associated with a specific optic from a specific vendor, and that vendor has a specific set of serials the cve applies to. now, if you wanted to say, hey nautobot, give me a list of all the devices that have optics with this serial match and let me know what systems they're connected to, so i can start scheduling changes within my data center or my wan infrastructure, what graphql can help you do is develop your query and say: look for all optics made by finisar that have a serial number in this specific range, that are online, active, and servicing traffic, and give me all the endpoints connected to them, all the circuits, all the interfaces, and the remote end systems connected to these optics, so we can begin that work. and that can happen with a single api query, whereas in contrast it could take dozens of api queries to get that data, and then you've got to merge it together and format it and get it all nice and perfect. that's a lot of effort, so graphql is a really big feature; i'm a very big fan of it. i also wanted to talk about the custom fields you mentioned: you enabled these custom fields on any object within the environment. thinking about the customer environments i've been in and the different teams i've worked with, everyone's got a different way of viewing network components, right? different ways of looking at a data vlan and a pc vlan, maybe a voice vlan, a wi-fi vlan, or a specific type of device or platform, and so on. so having the ability to add custom fields, or even use them as a tagging concept, on every object in the product is extremely valuable, because everyone views these objects from a different perspective. being able to query those custom fields through the api quickly, even through graphql, give me all the vlans that have this specific custom field associated with them, that's an extremely powerful thing to have. before we get into the live demonstrations, i just wanted to talk about some of the components that make up nautobot, and this is going to be really important when we get into the installation, because each one of these components requires its own installation and its own configuration parameters, some more than others. from what i read in the documentation, these are the core components that actually make up the product of nautobot, which is great because, jathan, correct me, they're all open source and available for other applications and other types of use cases as well. the first thing i want to mention is that the nautobot project, and netbox, from which it was forked, is based on django.
the django web framework is a python web framework that's extremely mature, extremely robust, and extremely extensible. having the foundation of django within the project really opened up the floodgates as far as what can actually be delivered within the product. another really important thing django provides, and this isn't a web development course, but just know that one of the real killer features of django is that it can help manage the data and the relationships within the actual database itself; what we call an orm in that context is a killer feature of django. but the database it uses today, correct me if i'm wrong, it only supports postgres, is that the case? that is the case, but that's actually another feature we're working on: we're adding mysql support. that's coming in our v1.1 release, which we're targeting for the middle of june, so a few weeks from now. it resulted in us changing quite a few things in the database core to support both postgres and mysql, and mysql will actually open the door to many other database technologies as well, because the hard part was decoupling from postgres. but there's another thing you said there about django: django is a very robust framework, it's great for enterprises and large applications like this. people know there's a big rivalry in the python scene between flask and django, why use django, just use flask; but when it's a big application like this, with things like authentication and rbac and support for multiple database backends, and the list goes on, django is really, really good for that. agreed. there are a couple of other components focused more at a task level. i'll mention redis: it's an in-memory cache that's used primarily for tasking. are you using it as a key-value data store as well? yeah, redis is an incredible product. redis can serve as a message broker; that's how the tasks are scheduled, so we can fire up background tasks and have the workers pick them up, execute them, and feed the results back into nautobot. redis also acts as our cache: by default nautobot is cached, and this is the same in netbox, we inherited it from netbox and made a couple of tweaks. what that means is that, by default, the data, say the circuits, will be served the first time from the database, but after that first response it gets cached. what that does is, first of all, speed things up, and second, it shields the database from getting crushed under high load; your database engineers will be looking for that. and the cache gets invalidated on write, so if you update something that's cached, it'll refresh. understood. and in addition, with that task concept, there's also a napalm component that can be used to interact with the network devices? that's correct, there is a napalm integration. you can give it credentials to log into devices and run the fact getters, things like get interfaces, get optics, get lldp neighbors, et cetera. it's a nice way to tie things together.
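to put a quick picture on the orm point from a moment ago, here's the flavor of what django's orm buys you, a minimal sketch assuming a nautobot-server nbshell session (model paths per nautobot 1.x):

```python
# the orm in one breath: python attribute access instead of handwritten sql
from nautobot.dcim.models import Device

# follow the device -> site relationship without writing a join yourself
for device in Device.objects.filter(site__slug="dc-east"):
    print(device.name, device.device_type)
```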
if you so desire, nautobot can be more than just a source of truth: it can be a platform that goes out there by way of plugins or jobs, because jobs are a built-in feature. i guess that's one i skimmed over: we unified scripts and reports from netbox into a single jobs entity, so a script is a job that takes inputs and a report is a job that doesn't take inputs. using those mechanisms, you already have a built-in way to have a job go log into some devices to fetch information and, if you so desire, feed that information back into nautobot; i'll sketch a minimal one at the end of this section. i see, so that information could be things like give me the running configuration on the device, show me your lldp neighbor information, those types of things, is that fair? yep. okay, the last thing i want to touch on is the web aspect of this. we're accessing django over a web browser, over http, but we actually need two other components to help us deliver that: there's the wsgi server, uwsgi, which can be swapped for gunicorn as well, and then there's the nginx aspect of it. so for people who are just learning python and may not have gotten to the web development side, can you help explain what uwsgi does? yeah, for those of you who've been around the block, you've probably heard of cgi, the common gateway interface; it's an http thing that was backed by apache for many years, back when we were writing perl scripts to build real web apps. wsgi, the web server gateway interface, is an evolution of that, a more modern standard than cgi. so when we're running the code for the nautobot server, it's actually being run as a wsgi service, which just speaks http naturally. we made a decision to switch to uwsgi, as it's called, because it's a bit more performant than gunicorn. it comes with its own set of trade-offs, but you can still use gunicorn if you want; i heard calvin call it 'gunnicorn', and honestly nobody knows what it's really called. we just made uwsgi the default wsgi server because it can do a lot of really cool stuff: you could actually not even have to use nginx, you can have it serve static files and serve ssl. we're not going to get into that today, but it's pretty cool. all right, and then the nginx component is just acting as the entry point into the web application? yeah, it's very common to use nginx to serve ssl and to serve static files; like i said, you can have uwsgi do that, but that's not what we recommend right now, we're still experimenting with it. so what that means is you have a stack where nginx is at the front, being the port 80 and 443 listener for your application and serving the ssl certificate, and it's basically a reverse proxy to your uwsgi back end, which is the actual service front end for nautobot. it's also very common to have nginx behave like a load balancer, or talk directly to a load balancer, to run a round-robin pooled front end. nginx is fantastic; it's just another piece of the stack that needs to be there. got it.
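here's that minimal jobs sketch, per my reading of the nautobot 1.x jobs docs; exact signatures may differ between releases, so treat this as the shape of the thing rather than gospel:

```python
# a 'script-style' job (it takes an input); with no vars it would behave like a
# netbox 'report'; written against the nautobot 1.x jobs api, verify locally
from nautobot.extras.jobs import Job, StringVar

class GatherLLDPNeighbors(Job):
    hostname = StringVar(description="device to log into")

    class Meta:
        name = "Gather LLDP Neighbors"

    def run(self, data, commit):
        # real logic would use the napalm integration to fetch neighbors and,
        # if desired, write the results back into nautobot
        self.log_info(message=f"would fetch lldp neighbors from {data['hostname']}")
```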
all right, jathan, this is the part where we transition to my favorite section, the live demonstrations. i love it because this is where things break, where things go wrong, and it's a lot of excitement. before we get to that, i just wanted to note some feedback in the chat. our friend mufasa said: you are so on point; as a person in an environment where people are scared of automation, i can totally agree with you. that comment came in when we were discussing the cultural necessity, the mind shift that needs to take place within the culture to get people familiar and confident with automation, and a source of truth can really play a vital role in that. mark had asked earlier: what about products like metasolv, which is more inventory-oriented, without the design information? are you familiar with that product? i am not, but that design point is interesting to hear, because maybe what that person means is modeling, right? you're modeling a network, so it is tied to the design. yeah, that was probably a bit of a red herring comment on my part. when i mentioned it, and whoever said it's common in carriers is absolutely right, what you often have in that type of organization is a whole group of people involved with asset management, and they stuff everything into something like metasolv. it started me thinking about my pet peeve: multiple databases with a lot of overlapping information, because they're never going to be perfectly in sync, right? yep, that's a big problem. that's a paradigm, source of truth aggregation, or 'sot agg'; it's a big one. it's like you almost have a mesh network of sources of record, and maybe one of them has the authoritative info you trust. it's hard to get an angle on that, especially at larger enterprises. yeah, and at one of the large enterprises i worked at, we relied on solarwinds for everything; solarwinds was what we would today call our source of truth. it's what we used to reference all of our inventory, all of our devices, all of our config, all of those constructs, and for all the warts it had, it was our primary go-to. but the rest of the organization used servicenow, and there was never a firm understanding, depending on what part of the organization you were in, whether you referenced the data in solarwinds or the data inside servicenow, and surprise surprise, they were always different, right? same with the dns servers and the ipam servers: the same device represented differently, either from a configuration aspect or some other construct, in different parts of the systems. you just don't have a high level of confidence that your automation is going to perform successfully working against these disparate systems. so, absolutely understood on that one. okay, what are we going to be doing today? we're going to get up and running with nautobot. i'm going to show you a super quick, easy way, and then we'll show you a more production-friendly environment. after we get it up and running, we'll talk about how to discover what plugins are available, how to install them and get them operational; i've got one i'm a really big fan of, so we'll bring that on board. we'll also do some real live automation workflows.
we talked last month about ansible, and about awx and ansible tower; let's talk about how nautobot can serve within that environment as well. and a couple of months ago we did a fairly deep dive on apis: we talked about rest apis and netconf, and we showcased a tool called postman that can help us make programmatic calls to an http server. we'll perform the same kinds of tasks today, interfacing with netbox or nautobot or whatever tool you have in your environment, programmatically through an http client like postman, or in my case a product called paw, but it's the same kind of product at the end of the day. all right, with that said, let's stop the powerpoint and get into some live demonstrations. i have two virtual machines ready for our provisioning: one is going to be a docker-based deployment, and the other is going to be a more production-friendly one. i'll start with the docker-based deployment, but as i get ready, jathan, do you want to add any disclaimers or warnings about using the docker-based deployment in production? yeah, the docker-based deployment is rad, but it's also choose-your-own-adventure, because we have not yet published our docker compose files; those are coming, and we're also going to have helm charts for kubernetes coming soon. right now you have to kind of know what you're doing to deploy this docker container correctly, though i assume, with a little bit of confidence, that calvin does know what he's doing here. oh, we shall see. sorry, real quick, i don't want to gloss over this: we do have a nautobot-lab repository that we've published. it's a self-contained docker environment that does have docker compose, but i want to be very explicit that it is not intended for production use; it's somewhat of a hack just to get nautobot quickly and easily into your hands to play with. the docker container we publish to docker hub, which is part of the core repository, that one is meant for production. right, so for this first install i'm going to be using nautobot-lab. this is a single docker image, like jathan mentioned, that can do it all, and it's more for you to get familiar with the tool without having to go through the full installation process. it's so much easier to spin up a docker container on your computer than to reach out to your server team for an ubuntu vm and then to the database team to make sure you're doing everything correctly. so let's get you some hands-on with the tool, but we'll also be doing a production install afterwards. on my virtual machine, i'm just going to run the command history so you can see there are no tricks up my sleeve; basically the only things i've done are change the ip address, do some dns lookups, and change my terminal to make it look pretty, because i'm obsessed with aesthetics in my terminal. i don't even have docker installed, so let's breeze through a docker installation, because it only takes a couple of minutes to get up and running, and we'll talk through each of the steps. the search i did was 'ubuntu docker install', and it'll lead you to this specific page.
if you're running a different operating system, a windows 10 desktop or a macos desktop, you'll need to work out how to do a docker install there. in my case i'm going to do a lot of copy and paste, because that is one of the best skills that i have. so let's do an update on my package repository. now, we're going to be doing a lot of linux things, a lot of database-y things, web server-y things; again, this goes back to those components that comprise nautobot, and we're going to be doing the installation and configuration of each one of them. let's copy in my installation packages: these are some bare-minimum prerequisites required to get docker installed, things like certificates, curl, and some other web utilities. next we need to add the official gpg key. for those of you who play a lot in the ubuntu linux space, you'll understand that this is a way of validating that the remote resource we're about to reference is actually the company we want, that it's legitimate and there's no man-in-the-middle posing as that resource; the gpg key helps us with that. the next thing we need to do is add that remote repository to our list of trusted package sources. that's because the docker engine package doesn't ship with the native ubuntu release, and there are a lot of conflicts and weird naming things; to avoid any of those namespace challenges, we just reference the remote repository instead. and the last thing is we install the docker engine on our ubuntu machine. this should take care of the installation of docker for us. there are a couple of post-installation steps we'll need to execute: we need a user group, which for ubuntu will already be created, we need to add our user into that group, and then we log out and log back in so our permissions to execute docker are updated. in this case we'll skip all the other stuff and move right on to the post-installation steps. we've skipped a bunch of the other documentation; for the most part docker's documentation is spot on, but sometimes they add 18,000 other steps that aren't necessary, so hopefully you're following along and seeing which ones i'm picking. if you're watching this on replay on youtube, you can hit the pause button. the last thing i'm going to do is add myself to the docker group; this is adding my username, and that dollar-sign user is a shortcut that just means whoever i am. then i'm going to exit out of my shell and jump back in so my permissions are updated, because it's not an automatic process. from here i should be able to type in the command docker ps. what i've validated now is two different things: one, i have docker installed on my machine, and two, my user has the permissions to execute docker without requesting elevated privileges through sudo.
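for readers following along, the install boils down to roughly these commands, condensed from docker's ubuntu guide of the time; check the current guide before pasting:

```bash
# prerequisites, docker's gpg key, the docker repository, then the engine
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io
# post-install: run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER
docker ps   # verifies both the install and your group membership
```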
so to get the nautobot-lab up and running, there's one more command we've got to run, and that's another copy and paste; we know we're pretty good at that game. let's paste it into our terminal. the very first thing docker tells us is: hey, we see that you're trying to run the networktocode/nautobot-lab container image, but it's not on the machine. so what you see the machine doing is pulling down that image in multiple layers. if you're interested, i can do a real deep dive on docker or kubernetes or these container concepts, it's a passion of mine, but for now just know that a docker image is composed of different sets of instructions inside a text file, and each one of those instructions is associated with a layer within the image. this allows you not only to pull down the image as several different layers, but if something hasn't changed in a specific layer, that layer will have been cached, so if you make changes to your docker container you don't have to keep pulling down more and more parts of the container image itself. and like that, we're done; we should be up and running with nautobot. now i'm going to type in docker ps, which will let me know what containers are running in my environment, what image they're based on, what command they're issuing inside the container, how long they've been up, and, more importantly for us, which tcp or udp ports are exposed so we can access the application within the container. in this case i can see we've got a wildcard of quad zeros and we're listening on port 8000. so, what is my hostname again? all right, that's pretty long, let's just copy it. let's jump back into my browser, type in the alias for my ubuntu vm, use 8000 as the port, and what do you know, we're in. we're up and running, we've got nautobot installed. the last thing we need to do to actually get access into the application and start making changes and playing around is create a superuser. this is a construct that comes from the django world: you need at least one superuser, with a username and password and all the privileges that get associated with it. i'm sure there's a guide, oh, there we go, create a superuser. let me kill slack first, because that usually gets me in trouble, and let's paste this into the chat. jathan's laughing because i did get in some trouble about slack not too long ago. let's create a username, say 'power ranger', we'll say pink at powerranger dot com is my email, and my password is juniper123; you all know that's my super secret password, please do not hack me. and now we can finally log into the product: power ranger is my username, my super secret password is juniper123, and what do you know, we're logged in as power ranger. we can start adding sites and rack layouts and elevations. oh, and by the way, this is a product built around multi-tenancy; for those of you who operate within a service provider or managed service provider, or if your enterprise is large enough to have multiple tenants, this product is built on multi-tenancy. so, really good to go.
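the whole nautobot-lab spin-up, then, is roughly this; the image name and flags are taken from my reading of the nautobot-lab readme, so verify them against the repo:

```bash
# lab use only, as jathan stressed; not a production deployment
docker run -itd -p 8000:8000 --name nautobot-lab networktocode/nautobot-lab
docker ps   # confirm it's running and port 8000 is published
# create the initial django superuser inside the running container
docker exec -it nautobot-lab nautobot-server createsuperuser
```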
not too much, right, to actually stand the product up; i think it took about five or six minutes with some copy and paste. again, this is not production-ready; this is not something you'd want to put real-life automation workloads on. it's a really easy way to get familiar with the tool, understand its workflow, its nuances, how it expects things to be composed. with that said, let's destroy the vm, or in my case i'll just restore my latest snapshot, which takes us back to the intended state, the install state. now let's go into an actual... oh, i shut down the wrong one; you guys should have called me out on that. let's fire this bad boy back up. that might be a problem. let's shut down the docker host. this is why you need more differentiation in your naming format; like jathan said, i should be naming everything with the word calvin in front of it. okay, as this vm boots up, let's exit out of this terminal and create a new one: ssh, my super secret username is c-dot, and we'll say nautobot-demo, and we'll see if it's online. okay, it's up, it's active. again, no tricks up the sleeves, i'm wearing short sleeves anyway, but just look at my history; if i check the shell history, there's nothing operating on this right now. so we're starting with an effectively vanilla ubuntu 20.04 virtual machine. now, what i didn't tell jathan is that one thing i hate more than anything in this world is bad documentation, especially when it's a getting-started guide. so i'm actually going to run the gauntlet here and use the official documentation. jathan, you're going to be put on the spot: if there's anything wrong with this documentation, the whole world will know about it. so here i am at nautobot.readthedocs.io. this is the entire documentation, not only for building a nautobot environment but also for day-two operations, for helping you understand all the options available to you and how to configure them, things like ldap, secure single sign-on, some of those additional features; all of that is captured in the documentation. i know about the product because we just did a workshop on it, so i don't need to revisit this, but if you've forgotten any of the concepts we talked about in the previous hour, this would be a good place to start. let's begin with installing the prerequisites, and again i'm just going to be flexing my copy-paste skills, because they're on point. looks like we have some mandatory dependencies: a modern version of python, python 3.6, thank you for not making this an archaic python install; the postgresql database, which we knew about; and redis as well. with that said, let's figure out where we need to go to do our installation of dependencies. i'm using an ubuntu vm, so this purple link right here, installing the dependencies, is where i want to go first; let's open it in a new tab. and this is what i love, being able to throw caution to the wind; we're going to be copying and pasting like it's going out of style. actually, let me see if i can't split the screen here, because i really hate jumping back and forth between displays.
yeah, that should be good enough; let me clear the screen. my next command, i'm just going to copy it: similar to the docker install, this is installing our dependencies. in this case we're installing python 3; the package management within python, which is called pip; and the virtual environment management system in python. we've talked about python virtual environments in the past; it's just a safe space for your python packages so they don't conflict with your host operating system. we're also installing python3-dev, the postgres database, and the redis server; i think the r was on there but not 'server', so it looks like my vaunted copy-paste skills are not as good as i thought. let me hit enter. all right, we're installing the basic dependencies we talked about. let me see if, yes, i was hoping that would automatically collapse, very nice. after these dependencies are installed, the next step is to start working on the database; this is the postgres aspect we mentioned. there is a danger sign here that says do not use the password from the example. we're going to use the password from the example, but don't do this in production; this is for demonstration purposes only. so we'll copy our first command. it looks like we're changing our user: right now i'm the c-dot user, and i'm changing to the postgres user, a user that was automatically created when we installed the postgres package, and we're typing a command called psql. what this does is put us at the command-line interface of our database; you see that the shell prompt just changed, it now says 'postgres=#'. so now we're going to issue some database commands. first we create our database, called nautobot. then we create the user called nautobot with the password of insecure_password; yeah, that's good to go. and by the way, jathan, keep me honest here: the postgres database user doesn't have to go by the name nautobot, right? if you've got some kind of corporate naming convention for database administration where it has to be in a specific format, you can adjust the commands i'm pasting in here to align with your corporate standards; am i lying, or is that true? that is true, you can call it whatever you want; you don't even have to have a dedicated user if that vibes with your environment, though i wouldn't recommend it. the one thing that is important is that whatever user you have, it has to have full access to the database, because it needs to manage the table schemas and so on. and that's exactly the next command: after we've created the database, the user, and the password, we grant the permissions for that user to interface with the database we created in the previous step. the next one's a little weird if you haven't played with databases before: it's a backslash q, and that quits out of the postgres shell, so we're back in our typical linux shell.
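collapsed into a single paste, the database bootstrap from this step looks like the following; these are the demo credentials from the docs, so never reuse insecure_password outside a lab:

```bash
# run the same statements the walkthrough typed interactively at postgres=#
sudo -u postgres psql <<'EOF'
CREATE DATABASE nautobot;
CREATE USER nautobot WITH PASSWORD 'insecure_password';
GRANT ALL PRIVILEGES ON DATABASE nautobot TO nautobot;
EOF
```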
then it looks like we're going to validate that what we just created is legit. so here we go; the password, was it insecure_password? there we go. what i did is i ran a command to reconnect to the database, now as the user we just created, and i passed in the password of insecure_password. it looks like we can run some database commands just to make sure the database is online and ready to start accepting connections; here it says we are connected to nautobot as the user nautobot on localhost on this port. okay, we're good to go, so let's drop out of that, and that will be the last we see of the database, hopefully until we start talking about database migrations, which is one of the things nautobot does so well; we'll revisit that in a little bit. redis is a lot easier to set up: we installed it during the package dependency installation, and now i just type redis-cli ping and i get a pong back, so we know redis is at least ready to start handling some traffic. let's move to the next step, which is the installation of nautobot itself. i'm going to open a new tab and just pretend like i'm reading this documentation; i only want to see the commands. i'm sorry, i know this stuff is important, but my patience for documentation is, yeah, we'll just move on. for everyone else, though, please be mindful of these admonitions, these warning boxes; they do have important things if you're doing this for the first time. yes, 'don't be like me' is the message at the end of the day. the documentation here walks you through the steps, saying, hey, we've got these different processes that need to be configured in specific ways, and that's what this installation guide is going to do. here's one of the things i really enjoy about what nautobot does, compared to my experience working with netbox: there's this contract of your nautobot root directory, and what's great about it is that it really simplifies and streamlines a lot of the interactions. when you change into the nautobot user and have this root directory, with your virtual environment right there and all your nautobot commands right there, it really streamlines the way you interact with the product. i'm a very big fan of that, so jathan and team, thank you for making it a reality. by default that path is going to be /opt/nautobot; /opt, if you're unfamiliar, is just a path within a linux filesystem that is generally accessible by all users, as long as they have the right permissions to get in there, and that's where we're going to be installing our nautobot pieces. just one point: we chose /opt/nautobot, but again, you can choose whatever path fits your infrastructure. understood, thanks for the flexibility. let's now create the nautobot system user. jathan, as i issue this command, can you help us understand why we have to create a nautobot user? it's a best practice for making sure that you're not running nautobot as root; running nautobot as root is kind of a bad thing. so we're constraining it to run as a service user, which is a very common paradigm in enterprise environments, where you have a dedicated service account that runs a service for you. so we're just adhering to that model.
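gathered up, the verification steps plus the service-account creation look roughly like this, per the install docs of the time:

```bash
# prove the database accepts our new credentials (\conninfo, then \q to leave)
psql --username nautobot --password --host localhost nautobot
# prove redis is alive
redis-cli ping   # expect: PONG
# the dedicated, non-root service account whose home is the nautobot root
sudo useradd --system --shell /bin/bash --create-home --home-dir /opt/nautobot nautobot
```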
understood. so the actual nautobot application needs to run as some user on the system, but we don't want that, what we would call a service account, to have king kong access into your system, right? because that can lead to all kinds of problems. well, we don't want you to run it as root, because that's just a bad habit that should not be perpetuated, and you shouldn't run it as yourself, because if you're going to run this in production, it should run as a dedicated thing. that's all. agreed. so the next thing is to create the python virtual environment; this is going to host all of our packages for getting nautobot up and running. i know we haven't done the python deep dive yet, but just know that when i say a virtual environment, i don't mean a virtual machine; it's just a dedicated safe space for your python packages to run without conflicting with the python packages that ship with the host operating system or any other python packages you might have for other applications on the same device. so let's create this python virtual environment. all right, easy peasy. next, update the nautobot bashrc. if you're unfamiliar with linux systems and you see 'bashrc' and the only thing you can think of is rc cola, just know that what we're talking about is that each user has a profile within the shell; when i ssh to a device, i get this really pretty shell that comes up, and there are certain parameters i can store inside of that shell configuration. in this case, for the nautobot user, we're creating a variable called nautobot-underscore-root and setting it equal to the path we talked about earlier, /opt/nautobot. that's all we're doing, so that when our nautobot user is instantiated, it gets that root directory as a variable. oops, looks like i didn't copy and paste as well as i thought; let's try that again. all right, outstanding. the last thing we'll do from my user is actually change over into the nautobot user, the one we just created a couple of steps above, the one with that root directory and the bash profile, the bashrc. now we can see, at least on my side, that my shell has changed: i no longer get the pretty colors and the cool history stuff, i'm back in basic linux bash, and i can see the dollar sign, my username, in this case nautobot, and the hostname. so we know we're now the nautobot user. let's first make sure that the variable we set inside the bashrc file was properly loaded when we changed into this user. we do that by echoing the value of our variable to the screen, and we can see the value returned is /opt/nautobot, so we know that step right above was successful. and let's see, the next check is regarding the path.
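the virtual-environment wiring from this step, condensed; /opt/nautobot as the root is the docs' default, not a requirement:

```bash
sudo -u nautobot python3 -m venv /opt/nautobot    # the venv lives in the root dir
echo "export NAUTOBOT_ROOT=/opt/nautobot" | sudo tee -a /opt/nautobot/.bashrc
sudo -iu nautobot            # switch to the service account (login shell)
echo $NAUTOBOT_ROOT          # should print /opt/nautobot
```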
if you're unfamiliar with the path on a linux system, it basically tells your system where to find executable binaries: everything from ping to ssh to curl, all the commands, all the executable files, they're stored in different directories on a linux machine, and the variable called path is what your system uses as a reference for which directories to look in to find a specific executable. here we can see at the very top of our path is /opt/nautobot/bin, so we know the executables for nautobot are stored in that path and our system is going to be able to find them. we're good to go there; we'll skip the centos-type things. the last thing we're looking for here is pip3. pip3, again, is the python 3 package manager; it allows us to manage the packages in our python environment. before we install nautobot, we just have to do one thing, and that's upgrade wheel; i don't think wheel is installed into a python virtual environment by default. if you're unfamiliar, there are multiple ways for a python package to get installed on your workstation, and in this case we're going to be using wheel to help with the installation of the python packages, so we need that one first. now we're ready to actually install nautobot, and this is one of my favorite things, as jathan mentioned earlier: the only thing you need to do to install nautobot, up to this point, is copy in this command, pip install nautobot. that's because nautobot is published on the remote python package index, pypi, which is where you'd find things like ansible and nornir and all the other things we've been talking about from an automation perspective. this is a really killer feature, because it gets you away from some of the grueling git operations and moving of directories and shifting things around; being able to run a single command to install nautobot is a super big plus, so really great job on that, i'm sure it took a lot of work. all right, now we can verify the installation of nautobot by running some nautobot-server commands. i'm going to clear the screen, because it's getting a little cluttered, and run our first command: nautobot-server, show us the version. in this case i just installed the latest package version, which is 1.0.2, and if you're unfamiliar, i didn't explicitly pick the latest version; if you don't specify a version, this command will always retrieve the latest version of the package, so we have the confidence of knowing we're working from the latest version of this piece of software, and that version for us is 1.0.2. now that we have the necessary packages installed, we need to start configuring the nautobot system itself. to get a bare-bones configuration file generated for us on demand, we just run this command right here, nautobot-server init; let me paste that in. and we get feedback that says, hey, we've created a configuration file, here's the actual path. that's the file that's going to help us declare all of the configuration aspects for our nautobot installation.
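so the whole install-and-init sequence, run as the nautobot user, comes down to this:

```bash
pip3 install --upgrade pip wheel   # wheel first, so the packages build cleanly
pip3 install nautobot              # straight from pypi; latest release by default
nautobot-server --version          # 1.0.2 at the time of this session
nautobot-server init               # writes $NAUTOBOT_ROOT/nautobot_config.py
```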
so let's go ahead and, well, i was going to say let's pretend like we're reading the documentation, but this one's super important; this is probably the most important section on this page: the required settings. in the previous step we instantiated the configuration file; we haven't looked at it just yet, but we got the output saying a configuration file has been created for us. what we need to do now is get into that configuration file and start entering the parameters for all the other components we set up previously, like our database, our username, our database password, and, if you've got a password on your redis cache, all those types of things; we need to declare that now within the configuration. according to the documentation, and jathan, i'm going to hold you to this, it looks like we have three things that can be configured. the first is allowed hosts, and i didn't mean to click that. this value, if you're unfamiliar with django and django web frameworks: allowed hosts is basically a way of saying to the application, if someone points at this server, either through an ip address or a hostname or an alias or something like that, then you can actually load the application itself. in my case i'm just going to set it to a wildcard. then we're going to enter our database information. and jathan, correct me if i'm wrong, if i don't really care about the redis configuration parameters, i should be able to pass on that one, is that fair? that's correct, you can just go with the defaults on a single-system install like this. okay. so i entered a command and didn't actually talk about it, so let's do that: i typed in this word, vim. vim is a text editor, a super powerful text editor, and if you get really good with vim you have bragging rights in every geek circle you go into; you'll also understand all the memes about not being able to escape vim. if you feel more comfortable in a different text editor, like nano, go ahead and use that, it'll do the same job, but in my case i'll be using vim to edit this configuration file we just created. now, we see a lot of colors, we see a lot of text; for those of you who don't play inside the linux space, this is going to start to look pretty intimidating, but don't worry too much, i will be your guide. according to the documentation, allowed hosts is the thing we need to edit, and i can see that right here on my screen. instead of the string that's listed here, i'm going to go to a new line and delete it. what it was basically saying, and you can see other references that are very similar, was: look inside my environment and see if there's a variable called x, y, or whatever it is, in this case for the allowed hosts; look for an environment variable inside of my profile and use that value. i'm not going to be creating any of those environment variables, so let's clean that up. here we go: i'm entering the wildcard, just like we see inside the documentation. now, i'm using single quotes; you can use double quotes; if anything, just be consistent, don't be like me. i see that there are double quotes used here, and in python it's quite flexible between the two.
so we've got the first thing done; the next thing is to edit our database. inside the database section, we remember our user was nautobot, and does anyone remember what our password was? i think it was insecure_password. all the other parameters we'll just leave; we're not using a remote database on some remote server, but if you were, this is where you'd enter the ip address or the domain name for it, and we're using the standard postgres port to connect, so we'll leave it as is. everything else should be good to go. i'm not going to be using authentication with redis; like jathan mentioned, we're good to go with the defaults. so let me escape from vim, and if you've never done this before, let me tell you how to save your changes in vim: hit the escape key once, then colon wq, and you'll see that in the bottom left-hand corner. once you've done this, you are now an elite hacker and you can finally understand all those vim jokes. it's kind of weird if you've never used it before, but hopefully it makes sense: we basically said, vim, i want to write our changes, that's what the w was for, and i want to quit, that's what the q was for. so, colon wq after hitting the escape key, and we've saved our changes. proceed to the next step. this is an interesting one: if you want any additional python packages to be automatically included whenever you perform your updates, they suggest you create a file called local_requirements.txt within the root directory. so we'll be a good scout and do that, by copying this command and pasting it into our terminal. what we're basically saying is: we just created a text file called local_requirements in our root directory, which is the directory we're in right now, and inside that text file there should be just the word napalm. if i want to validate that, hey, there we go: we have a local_requirements file with a single line in it, napalm. the remote file storage i'm not going to be doing, but jathan, correct me if i'm wrong, i believe that's more for when you've got heavy-duty static files, like images or binaries, that you want on a different system; is that typically where remote file storage comes into play? yeah, most commonly it's if you want to, say, serve your static files out of an amazon s3 bucket; you can use django-storages, which has an adapter to do that. got it, so it's for storing them somewhere other than your local device filesystem; you can use the file storage plugin to do that. and that's one of the many benefits django provides, because that's typically a really, really hard thing to add to an application, and here it's pretty simple right out of the box, but it's outside of our use case. now, this next command is the money shot; this is it right here. we talked earlier about django, and one of the key features of django is that it can help manage and maintain your database through what we call an orm.
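before we run anything against the database, here's roughly what the edited stretch of nautobot_config.py looks like at this point; the wildcard hosts and demo credentials are for the lab only:

```python
# nautobot_config.py (excerpt) -- values from this walkthrough, lab use only
ALLOWED_HOSTS = ["*"]

DATABASES = {
    "default": {
        "NAME": "nautobot",                # database created earlier
        "USER": "nautobot",                # database user
        "PASSWORD": "insecure_password",   # demo password, never in production
        "HOST": "localhost",               # or your remote database host
        "PORT": "",                        # blank = default postgres port
        "ENGINE": "django.db.backends.postgresql",
    }
}
```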
basically, what we need to do is create a bunch of database tables and specify what kind of data each field holds: is this one an integer, is this a string, is this an array, all those types of things. rather than us being sql database administrator gurus, all we need to do to get the system to the state we want, to perform all those database migrations, is simply copy this command and paste it into the terminal, and it will perform all of the database operations. we can see it's creating different tables and setting different parameters within them; again, i'm not a dba, i'm not a database person, so to have this delivered for you is just awesome, i'm a super big fan of it. and the last thing we'll do, oh, i keep saying 'the last thing' but there's a lot more: we need to create that superuser, just like we did on the docker install. remember, django requires at least one user with king kong privileges for the application. we'll paste in this command and create our new user: i want to be named jathan, my email address is coolguy at ntc.com, and my password will always be the greatest vendor in the world, which is juniper123. now tell us about collectstatic, because this is kind of a weird one, jathan, for those who don't play in the nginx space much. so, static files are all the files that aren't your code: your javascript, your css, your images; there can be other types, but those are the most common ones. we also have, as on the screen there, a couple of other static places, for plugins, for jobs, and for git, but that's out of scope for this. when you do collectstatic, it pulls in any of the javascript or css files that are bundled in the internal applications of nautobot and dumps them into a single place, and what that does is allow you to point to that place, say in your nginx config, to say: here's where you serve the static files from. it's a decoupling; it's generally an industry best practice in serving web applications that the web app shouldn't, quote unquote, also serve the static files. i'm not sure if that's still universal, but that's where this comes from. perfectly fine. so it looks like it captured almost a thousand files, a bunch of javascript, a bunch of css, a bunch of goodies, and it put them in the right spot for us. the next thing we'll do regarding the python virtual environment is install the napalm package, the content of that local_requirements text file we created; remember, it only had one thing in there, napalm. we'll install that into our python virtual environment as well. and there's a health-check application within nautobot, which i was really surprised to see, but really thankful for; it runs a bunch of server checks on the application itself to make sure it's good to go, and we got the thumbs up. now, django itself comes with a kind of development web server, just to make sure your application can actually run before you start configuring uwsgi and nginx. so we'll do that; we'll validate that the application was installed correctly and is ready to start servicing connections.
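here's the run of commands from this stretch in one place, so you can see the shape of it; all of these are documented nautobot-server subcommands:

```bash
nautobot-server migrate            # django builds every table and relationship
nautobot-server createsuperuser    # the one required all-privileges account
pip3 install -r $NAUTOBOT_ROOT/local_requirements.txt   # napalm, from earlier
nautobot-server collectstatic      # gather js/css/images for nginx to serve
nautobot-server check              # the built-in health check
nautobot-server runserver 0.0.0.0:8080 --insecure       # dev-server smoke test
```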
in this case we told nautobot to run at the wildcard of 0.0.0.0 with port 8080, so that means i just need to type in the name of my nautobot demo host on port 8080, and here we can see nautobot loading on the right-hand side of my screen, and we see literally what's going on as far as get requests, http requests, all those static files; there's quite a bit that makes this product work. we should be able to log in now with our super cool person username and our great vendor password, and here we go, we're logged in as jathan. now, this is a development web server that django is providing; it is not made for production, it's just for us to make sure the system was installed correctly and is online. so i'm going to exit out of that with control-c, and the next thing for me to do is find my place in the documentation. okay, so: we've got nautobot installed, it can talk to the database, we see the application loading in our browser, and we're able to authenticate; looks like we're ready to start rolling. this is where we need to configure those two elements that help us with the web aspect. let's move on to our next page, and i don't need to run this, but i guess we'll do it anyway: nautobot-server. okay, it looks like we've got a lot of options we can run here; pretty cool. did this come from uwsgi? okay, got it, we didn't write all of that. all right, i'm going to clear the screen, and it looks like what we're going to do is create a new file, this ini file right here. so i'll go back into vim and copy the interesting elements; we'll just copy the whole file. by the way, if you're trying to make changes in a vim file, you've got to make sure you're in insert mode: when i press the letter i, i get this 'insert' indicator, and i know i'm now in insert mode. so, copy, paste in there, looks like we're good to go; everyone remembers how to exit vim, it's escape, colon wq. feeling good about ourselves. now we need to do some linux system service configuration: we want nautobot to automatically start up whenever the system reboots, so we need to tell the system how to start our application, and that's effectively what we're doing right here. a really important point: any time you start messing with your systemd configuration to create these types of services, you need to be on a user that has elevated privileges, and we know the nautobot user doesn't have those types of rights. so i'm going to exit out of my nautobot user and come back in as my user that actually has those king kong permissions, and i'm going to create this file; it doesn't exist yet, so to get that elevated privilege i'll say sudo vim and type it in. normal people, you'll be prompted for a password; i disable all that in my lab, which is why i wasn't prompted just now. i'll copy that again; to enter insert mode i press the letter i, control-paste, escape, colon wq; you'll get really familiar with that muscle memory. so it looks like we've created the nautobot service, so the system knows how to execute nautobot whenever we reboot or instantiate the system. let's also create the worker service.
And this one is for the Redis worker, is that right? Yes, that's right: the workers subscribe to their queue, and if there are any scheduled tasks, the workers pick them up and execute them. Understood. All right, we're at the finish line, ladies and gentlemen, we're almost there. Let's go ahead and Escape, :wq. Now we need to tell the system to reload its systemd daemon so that it understands the changes we just made with those two files, and then we should finally be able to start up the service itself. So let's start the nautobot service; it looks like it should be online. Let me validate that by checking the status of nautobot. Yeah, that's fine. The thing we're looking for is that it shows active (running) and that there isn't a bunch of errors right here, so this looks good to go. Let's also validate the worker service we created; it's also active, looks good to go. So we know that Nautobot and the Nautobot worker process are both ready to start handling some traffic.

But we still have our final step, which is the actual web configuration aspect of this. Let's go ahead and create an SSL certificate. This will not be a trusted certificate, so you will get a browser warning. If you want to go through the process of obtaining a real SSL cert, there is a product out there called Certbot; it's fantastic, it's really great, it's just that getting it working behind a NATed device is sometimes kind of a pain, so I'm going to create a self-signed certificate instead. There are some basic things I need to enter here: what country we're in (the coolest country), what state I'm in (the coolest state), the name of my city (it's Magnolia), organization, that's fine. Just some basic self-signed certificate stuff; nothing really to worry about there.

Now let's finish up by installing nginx onto our system and performing the configuration. I think this is... yeah, here we are. For Ubuntu, this is the file we're going to create. Let's clear the screen: sudo vim again, because we're in the /etc folder and we need elevated privileges to make changes there, so we're deliberately not back in our nautobot user right now. And we'll do our glorious copy-paste: i to insert, and then Escape, :wq. We have now written that file. nginx has some weird defaults that you sometimes have to override; in this case we're going to remove some of those default configuration files. It's okay, it's completely safe, and you need this to take place. Then we'll create a symlink from our Nautobot nginx config into the directory where nginx actually looks for its configuration, restart the nginx process, and we should be good to go. That should be the last thing. It looks like we also want to make sure the nautobot user has the appropriate permissions on the directory the product is hosted in, so let's shift back into the nautobot user, run this command, and we should be good to go. Now I'll point my browser at my Nautobot demo, and this time I will not use port 8080, because nginx is actually listening on ports 80 and 443.
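Gathering that last stretch up into shell form, roughly; the exact file names and paths are assumptions, so follow whatever the docs page on screen says:

    # Self-signed cert (paths assumed); answer the country/state/city prompts
    sudo openssl req -new -x509 -nodes -days 365 \
      -keyout /etc/ssl/private/nautobot.key -out /etc/ssl/certs/nautobot.crt

    sudo apt install -y nginx                            # Ubuntu
    sudo vim /etc/nginx/sites-available/nautobot.conf    # paste the provided config
    sudo rm -f /etc/nginx/sites-enabled/default          # drop the distro default site
    sudo ln -s /etc/nginx/sites-available/nautobot.conf /etc/nginx/sites-enabled/
    sudo systemctl restart nginx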
Oops, let's see here... let me move this around, and there we go. Okay, and we get the self-signed certificate notification. Now, if you use Google Chrome, or Brave, or some other Chromium-based browser like Microsoft Edge, you might wonder: how do I proceed, what do I do? There's a secret command, and I'm not joking, it's a secret command; I'll type it up here so you can see it: thisisunsafe. You literally type that while you're on the warning page, and now you can bypass that self-signed certificate error that you will inevitably hit. And now we're back online with a production-ready installation of Nautobot. So go ahead, give yourself a round of applause; that's pretty awesome. Your company is now ready to start doing some network-source-of-truthy stuff.

The next thing I wanted to do is talk about the installation of an extension, a plugin, and we're pretty close to time, so we'll have to cut this fairly short. But Jaythan, how do people find out where all the Nautobot plugins are? What would be your Google search to find a repository of Nautobot plugins? Well, all of the ones that we have developed are found right there in the GitHub organization for Nautobot: if you just go to github.com/nautobot, anything named nautobot-plugin-something (that's the naming convention) is where all the plugins we have will be. I see, okay. And outside of that, coming in the future, I mentioned this earlier, but we are working on a kind of plugin registry, with a longer-term vision of actually having a store of sorts. That doesn't necessarily mean they'd be for pay, just a central place where you can go to find plugins that already exist and easily install them from there. Okay.

So the plugin that I'm going to install here is for device onboarding. This will help streamline the adoption of the devices that are in my environment and bring them into Nautobot. If I look at the installation steps, the first thing it tells me to do is jump back into the nautobot user, so let's run that command and hit Enter. Am I on the wrong... oh, don't copy the dollar sign, Calvin. And now we're going to install this specific plugin by typing our pip command again; pip is the Python package manager, and in this case we're installing the nautobot-device-onboarding plugin. This will allow us to simply say: look at this device, use these credentials, then grab all the information about the device and bring it into Nautobot for us. Let's clear our screen to make things nice and clean, and I'm going to include this in that local_requirements.txt file that we used previously for NAPALM. I'll just... I knew I was going to copy that dollar sign; let's remove the dollar sign. Now, if I look in that local_requirements.txt file, what we should expect to see is both napalm and nautobot-device-onboarding listed in there, and sure enough, we do.
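A sketch of those shell steps, assuming the standard /opt/nautobot layout from the install docs (the package name is real; the paths are assumptions):

    sudo -iu nautobot                                # back into the nautobot user
    pip3 install nautobot-device-onboarding          # install the plugin
    # record it so future upgrades reinstall it automatically
    echo nautobot-device-onboarding >> /opt/nautobot/local_requirements.txt
    cat /opt/nautobot/local_requirements.txt         # should list napalm too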
The configuration aspect of this is quite simple: we just need to get back into the nautobot_config.py file, and I'm going to type a search inside vim to look for the word PLUGINS, all caps. To do a search in vim it's a forward slash followed by the search criteria, so I'll type /PLUGINS, and there we are, taken directly to that spot. I'm going to delete this line by typing dd; I don't know what dd was supposed to stand for, but that's the command to delete a full line. Then we just paste in the name of our plugin, and if there were any additional configuration parameters for your plugin, you'd have this PLUGINS_CONFIG variable where that information would be stored. Again, to exit it's Escape, :wq. Then we run our migrations again, and then we need to restart. Kind of interesting: we just made a change to the database because of the plugin we installed. The plugin itself has data it needs to store, so it gets a separate table for that, and what we just did by running nautobot-server migrate is automatically create that table for us so the plugin can actually store its data. These are just called migrations. Sorry, I'm in sales; I use thirty words for every two.

So let's copy in our final command here to get the system services restarted: we need to restart Nautobot and restart the Nautobot worker. And I'm prompted for a password, which tells me I was on the wrong user to do that, so let's jump back into my user and restart those processes. Now, if we move back over to my Nautobot tab, something's going to happen when I hit refresh: we're going to have a new section up here for plugins, and we're going to see our device onboarding plugin loaded in there. Let's refresh the page, and sure enough, we have this Plugins dropdown menu. It only shows up if you've installed an additional plugin, which makes it a really good way of validating whether you installed your plugin or whether you forgot to update your configuration file; just always look for this. In the dropdown it looks like we've got an option for onboarding tasks.

Before I get to that, I want to do two things. First, I need to create a site, because we're going to onboard a device and I need to tell Nautobot which site to onboard this device to. So here we're going to make our first change in the Nautobot system; we're going to create our first record, in this case a new site. Under the Organization menu I'll select Sites, click the little green plus button, and I'm going to call this site Dallas. Then there's the status: what's the status of the site? Is it being commissioned, is it active, retired, staging? See, these are all real-life networking categorizations we would have for sites, so again, we're working in a tool that's focused on us network engineers. It just feels nice to be appreciated once in a while. I'll skip all the other information, but you can see where they're going with it: autonomous system numbers, time zones, multi-tenancy if you're doing that, physical addresses (if they're ordering pizza, where should it go), that type of information. You can also include tags. For now I'll just click Create for that site. Here we get a little overview page of the site we just created; it looks like I don't have any racks, no devices, no prefixes or VLANs, really nothing associated with it yet.
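To recap that plugin-activation cycle in shell form; a sketch, with the config path assumed to be the standard one:

    # In /opt/nautobot/nautobot_config.py (in vim, /PLUGINS jumps to the spot):
    #   PLUGINS = ["nautobot_device_onboarding"]
    #   PLUGINS_CONFIG = {}   # per-plugin settings would go here
    nautobot-server migrate                  # creates the plugin's database tables
    exit                                     # leave the nautobot user; it can't sudo
    sudo systemctl restart nautobot nautobot-worker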
So let's go back into the plugins... actually, before I get in there, I want to use this repository called devicetype-library, and the reason I like to use it is that this repository has almost every classification of networking device, with all the different attributes of the device filled in. Now, to be completely transparent with you, the device onboarding plugin will automatically create a device type if you onboard a device whose device type didn't exist before. In my case I'm going to onboard a firewall, and we didn't create that device type inside Nautobot, so the device onboarding plugin would actually do that for us automatically. It's just that I like to use this repository instead, because it also accounts for all my physical interfaces, my power supply information, console port information, and so on. So I'll look through the list here and find the coolest vendor in the world... oh, there it is. Yeah, cool. Juniper. And I'll go ahead and select my firewall model; here we'll say an SRX340. You can see that this is composed of a YAML structure, so the only thing I need to do is copy this file and return to my Nautobot installation. Under Devices I'm going to create a new device type, say Import, and paste in that YAML. Right, it can be YAML or JSON; if you remember our session on data types, one is kind of a superset of the other, so don't worry too much about that. We'll click Submit... and it looks like the choice was not actually available, so we have to create the manufacturer first. Let's create the manufacturer called Juniper and hit Save. There we go. Now let's go back into device types, import that again, and hit Submit. All right, now I have an SRX340. This is kind of like a base template for any SRX340s that get onboarded: I can see my physical interface names, I can see what the interface types are, Cat6 or optical, the out-of-band management port, as well as console ports and power ports and all those cool things.

All right, now let's actually leverage this onboarding plugin. Rather than entering all the information about my device manually (my serial number, my interface configurations, my IP addresses, all that great information), the only thing I need to do now is select the site I want this device associated with and type in the hostname, which in my case is dallas-firewall-0, with port 22 for SSH; there will be a discovery process over that port to figure out what the device is. My username is automation and my password is juniper123, and I think I mistyped that. I'll skip the rest; Juniper doesn't do that enable-secret silliness. The platform will be discovered automatically. The role you can override if you want: you can say this is a firewall, this is an MPLS VPN router, any of those types of things. But for us, let's just click Create.

Now, what's happening right here (and Jaythan, keep me honest, because I haven't looked at the code) is that we're leveraging that task aspect of Nautobot, right? It's sending a task through the Redis queue that says: perform the discovery of this device. That in itself is a worker effort that needs to be performed: build the SSH connection, use NAPALM to do discovery, grab all the facts from the device. That's a task, so now we're using that Redis layer to handle the task queuing within the system. Now, this page does not automatically refresh, but I'll go ahead and click refresh for us, and it looks like the status of this onboarding has succeeded; we even get the device's IP address as well.
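Backing up to that devicetype-library import for a second, here's a trimmed sketch of what one of those YAML definitions looks like. The field names follow the repository's format, but the values here are illustrative rather than the full SRX340 file:

    manufacturer: Juniper
    model: SRX340
    slug: srx340
    u_height: 1
    interfaces:
      - name: fxp0            # dedicated out-of-band management port
        type: 1000base-t
        mgmt_only: true
    console-ports:
      - name: console
        type: rj-45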
Now, if I come back over to my Devices panel, I will see our newly onboarded device: dallas-firewall-0. It's currently active, the role is network device (remember, we had an opportunity to adjust that ourselves), the type is a Juniper SRX340, and here's the site. Once you get into building out rack elevations and such, that will also be part of this, but we can also see the IP address for the management interface. If we drill into this a little more, we get the serial number information, we know how many rack units it takes, and we know that having four dogs on a Zoom call in the same room is a bad situation (hey guys, calm down). We also see the platform: this is running Juniper Junos. We can look at the interfaces, and we can see that fxp0, my management interface, automatically had its IP address onboarded as well, so that IP address will now live inside the IPAM construct for us. We can see console ports, we can see power ports, and if I had configured NAPALM in my Nautobot configuration file, Nautobot would be able to log into the device and do things like LLDP discovery, or even grab the configuration from there as well. In my case that's going to fail, because I didn't set those NAPALM configuration parameters, but if you wanted to, that would be the place to go take care of it.

Just to give you a sneak peek, there is demo.nautobot.com, which is available for everyone to access anywhere in the world; the username and password are on the banner right there, so you can actually see what a real production environment looks like. There are also plugins already included in there, like Golden Configuration and Circuit Maintenance, all kinds of really cool tools.

The last thing I want to show you really quickly is the ChatOps aspect of Nautobot that we mentioned as a really cool feature. ChatOps is an interesting way of interfacing with your Nautobot system by leveraging the tools you already use today to communicate with your team. This could be Microsoft Teams; in our case it's Slack; but you can really build chat bots for any kind of system, Discord included, because as long as there's an API, and as long as that API is good to go, these types of integrations are possible. In my case I'm going to type in, what was it, /nautobot, and what this is going to do is come back to me and say: here are the commands you're able to run to get information out of the Nautobot system. So I'm going to say: Nautobot, why don't you go ahead and give me all the devices. I won't apply a filter; we'll just copy and paste this command right here, nautobot get-devices. Very RPC-ish; I really appreciate that, by the way. What we get back is an option to provide some kind of filter. In this case I'll filter on the manufacturer, and we should get another query parameter here; we'll discard all the junk and pick the cool one, which is Juniper. That will reach into Nautobot and give you all the information in the system about your Juniper devices: the role they serve, what kind of platform they run, whether they're active or decommissioned, the hostname, really great information. And it doesn't just stop with devices, either; we can get information on circuits too.
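As typed in Slack, the interaction looks roughly like this; the bot walks you through the filter choices interactively, so really only the first command needs remembering:

    /nautobot                    # lists every subcommand the bot offers
    /nautobot get-devices        # bot replies with optional filter prompts
    # pick "manufacturer", then "juniper", from the options it returns
    # circuits are queried the same way through their own get- subcommand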
Think about it: you're told that a specific remote branch is down, and you think there's some kind of circuit outage, so you need to open a case with AT&T. Rather than opening up ten thousand Excel spreadsheets to find the circuit ID, you can interface directly with Nautobot using the same tool you're already using to communicate with your team. In this case we'll say: give me the provider, and I'll limit it to ATT. Now, I don't know if there are any AT&T circuits in here, so we might not get anything back... oh, looks like we got one. We have one AT&T circuit, and it looks like the status is offline; perfect for this demonstration and the example we're talking about. Couldn't have scripted that any better. But think about the power of this, right? We already talked about some of the cultural challenges you have to overcome to get a network source of truth adopted by your team. Even once it's been blessed within your organization, you're going to have pockets of resistance, and that's fine; people aren't always going to like every tool in your environment. But if you can abstract the functionality of Nautobot, of a network source of truth, and bring it into the tools you're already using to communicate with your team, then the ease of consumption is going to help drive that adoption within the environment. So this is a really, really cool way of interfacing with Nautobot and getting data out of it.

Now, we're one minute over, so I do apologize, but we are going to have to cut the presentation off at this point. Again, though, I want to open it up to chat and see if anyone has questions; if anyone wants to ask any kind of "tell me about the art of the possible of doing X, Y, or Z," this would be a really good opportunity for those types of questions. All right, Jaythan, it sounds like we answered every single feasible question out there, and everyone else is looking forward to the Memorial Day holiday. So with that, everybody, thank you for attending our monthly automation workshops. We hope you have a better understanding of the value a tool like Nautobot can provide in your environment, and hopefully you understand the role it can play inside an automation context. You might also be a little better prepared to handle some of the constructive feedback from teammates who might be a bit resistant to incorporating a tool like this into their portfolio, and hopefully the demonstration showed you a couple of different ways to actually get up and running with the product so you can start incorporating it into your environment as well. As always, Jaythan, a super big thank-you for hanging out with us today and sharing your expertise. This was a great session, and I really appreciate all the work you put into the product, and also into preparing for the presentation itself. So with that, thank you, everybody; we'll see you next month. Again, these sessions happen on the last Thursday of every month, mostly because it gives you a big block of time where you're not working: if you're off on Friday, or if you just don't want to put in those last couple of hours of the day, hey, you can come hang out with us. Next month we're going to be doing a Python 101 for network engineers; again, we only focus on the basics, but we go really deep on those basics, so you can walk away with a firm understanding of the technology and how to incorporate it into your daily life. With
that being said, I'm going to close out this month's session and say thank you, everybody, and we'll see you in a few weeks. Goodbye.
Info
Channel: Calvin Remsburg
Views: 582
Id: dR05boo2ln0
Length: 152min 58sec (9178 seconds)
Published: Wed Jun 02 2021