Navigating the shift to cloud operations with the HashiCorp stack

Captions
My name is Dave McJannet, HashiCorp's CEO. I spend a lot of my time with some of the largest companies in the world as they think through the infrastructure transition to cloud that they are all going through. I wanted to step back for a minute and share how we think about the market transition that's underway and how that informs our product evolution.

The big picture of what we're seeing is that, at the infrastructure layer, the world is going through a transition of the kind we go through once every 20 years. We're moving from a world where infrastructure predominantly runs on premises in a static environment, typically on infrastructure that you own or are long-term leasing, to a world where infrastructure is much more dynamic in nature. That's the fundamental shift happening today: you may have some workloads running on your private cloud, but you will also have some running on Amazon, some on Azure, some on GCP, and so on. The core distinction is pretty profound as it relates to the operating model for infrastructure. Simply stated, this new world is extraordinarily dynamic, and therefore every organization we engage with has to think about how to navigate the transition from running infrastructure one way to the other.

The easiest way to think about it is to decompose it into the core pieces. Start with how infrastructure is provisioned in these two worlds. Provisioning in the static world is predicated on a fixed set of servers that I own. In the dynamic world, I don't just stand up a hundred thousand servers on Amazon and leave them running; it's much more on demand. So I'm moving to a world of provisioning infrastructure on demand.

At the security layer, the implications are just as different. You're going from a high-trust environment with a clear network perimeter, where I can use IP addresses as the basis for security, to a dynamic world where I have to say: wait a second, this is fundamentally a low-trust environment, so what can I use other than IP addresses as the basis of security? So I move to a world of identity as the basis of security.

For core networking, the challenge is again slightly different. In the static world, everything has a physical server host, so I can assign an IP address based on that specific host; fundamentally it's host-based connectivity, where everything is predicated on a physical machine running over there. In the new world there's no notion of a fixed physical machine, so you have to think of the world in terms of services. The world moves to service-based connectivity: maybe it's a database or an app server, and you use that service identity as the basis for connectivity, recognizing that it's going to move around.

And for the application developer: I'm no longer deploying an application to a physical location; I'm deploying it to something that runs across a distributed fleet. I'm deploying an application to a fleet. At its core, this represents a completely different model for how to think about infrastructure relative to the world we're all familiar with.

The way most of our customers think about it is to decompose the problem: wait a second, there are four core groups in my IT organization. There are ops people, security people, the development function, and, call it, a networking group. All four have to figure out how to navigate this transition, and the core challenge of cloud adoption at scale is recognizing that all four of them need to understand the implications of this new model.

So let's talk about the ops person. How has their world changed? In really three ways. One is fundamentally in terms of scale: the infrastructure I provision on premises might be 50 VMs, while the infrastructure I provision in the cloud might be 50,000 machines, so the scale challenge of the provisioning exercise is simply a different one. The second challenge is variety: most of our large Global 2000 customers run not just in the private data center but also some workloads on Amazon, some on Azure, some on Google, some on Alibaba, so the challenge for the ops team is how to provision infrastructure across that variety of target platforms. And lastly, the challenge is one of managing dependencies. As I provision infrastructure, as a core policy I also need to provision the monitoring agents and the connectivity aspects that are specific to my environment, so how do I include those components in this new world as well?

For us, Terraform plays that role. Terraform is extraordinarily popular; it's not our most popular product, but it's certainly up there, and it's used to provide a consistent provisioning experience for this new cloud operating model, one that allows people to leverage all the innovation coming out of these different core platforms without reducing everything to a lowest common denominator. The way Terraform actually works is relatively simple. Terraform has two parts: there's Terraform Core, and then there's a provider for every environment you want to interface with, so there's an Amazon provider, an Azure provider, a GCP provider, a vSphere provider, and so on. By decoupling these, much like a middleware broker and adapter would, Terraform allows me to expose all 220 services on Amazon that I might want to invoke and provision. Azure obviously doesn't have the same 220 services; they have different ones, maybe 150, and maybe GCP has 120. The idea is that these cloud providers are going to keep investing for the next 20, 30, 40, 50 years, and you want to be able to expose the core services those providers make available over that period of time. By adopting this core-plus-provider model, every time Amazon introduces a new capability, it's made available in the Amazon provider for Terraform for everybody to consume.
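As a concrete illustration of that core-plus-provider idea, here is a minimal Terraform sketch in HCL. The region, AMI ID, and instance type are placeholder values chosen for the example, not details from the talk:

    # Configure the AWS provider; Terraform Core stays the same
    # regardless of which platform the provider talks to.
    provider "aws" {
      region = "us-east-1"
    }

    # Declare the desired infrastructure in code; Terraform provisions
    # it on demand and can tear it down just as easily.
    resource "aws_instance" "app_server" {
      ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
      instance_type = "t2.micro"

      tags = {
        Name = "app-server"
      }
    }

Swapping the provider block for the Azure, GCP, or vSphere provider changes which platform's services the same workflow can reach.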
There are in fact two categories of providers for Terraform. The first is the core infrastructure platforms. But invariably you're not just provisioning compute capacity; when you provision infrastructure, you also need to configure something on top of that compute. As a result, there are about 150 providers of this second kind today, and that number grows every week: providers for Palo Alto Networks, F5, Kubernetes, Datadog, take your pick. These are things that are part of the provisioning process and want to be deployed on top of the core compute. What people then do is create a Terraform template that includes, for example, the configuration of the three Amazon services they're interested in, plus the configuration for Datadog, Kubernetes, F5, and Palo Alto Networks, and that becomes a reusable template that anybody can provision from. What you now have is the ability to deploy, in a codified manner, an effectively unlimited amount of infrastructure in a very repeatable way, covering not just the core cloud services but also the other aspects you want configured. That is why Terraform is so extraordinarily widely used by operators in this new model.
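A template along those lines might combine an infrastructure provider with a monitoring provider in one configuration. The sketch below pairs an AWS security group with a Datadog monitor purely as an illustration; the resource names, query, and threshold are invented for the example:

    # Category one: a core infrastructure platform provider.
    provider "aws" {
      region = "us-east-1"
    }

    # Category two: a provider for something deployed on top of
    # the core compute. Credentials come from environment variables.
    provider "datadog" {}

    resource "aws_security_group" "web" {
      name = "web-sg"

      ingress {
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }

    # Monitoring configured in the same reusable template.
    resource "datadog_monitor" "cpu" {
      name    = "High CPU on web tier"
      type    = "metric alert"
      query   = "avg(last_5m):avg:system.cpu.user{role:web} > 80"
      message = "CPU is high on the web tier."
    }

Anyone who applies this template gets the compute, the network rule, and the monitoring together, which is what makes the provisioning repeatable.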
At the security layer, the problem is also pretty profound, and we talked about it earlier: you're going from a high-trust network to a low-trust network, and therefore you need to assume that your network is not secure. That's the safe assumption; maybe it is secure, but it's now outside of your control, so assume it's not. The problem then becomes, number one, how do I protect secrets? How do I do secrets management? Secrets are things like database usernames and passwords that, in the old world, I might have left effectively unprotected; in this world, that's not a solid approach. You need to centralize your approach to managing secrets, and secrets come in all types, whether a database username and password or a login to a system; you fundamentally need to assume those things should be protected differently than they were previously. The second challenge is encryption: if I can encrypt everything, both in flight and at rest, across this distributed fleet, then it's actually okay that the network is low trust, so I need some way of addressing the encryption challenge in addition to the secrets management challenge. And lastly, I need to be able to use identity rather than physical IP address as the basis of access, recognizing that each of these environments has a different identity model: I might use Active Directory on premises, the Amazon IAM model on Amazon, Azure Active Directory on Azure, the Google IAM model on Google. You start to get a sense of the challenge.

The way Vault, which is extraordinarily widely used to address the challenge of cloud security, works is best understood by contrasting how things used to work with how they work today. In the old world, a client would connect directly to a back-end system like a database, pass the username and credentials, and get back information for the end user. In a high-trust network, that's a valid approach. In this new world, you need to insert a secondary step, and that's where Vault comes in. Vault inserts itself in the middle of that flow and says: let me authenticate against some source of identity that I already trust, and there aren't that many of those in a typical organization. On premises it's probably Active Directory or LDAP; it could be Amazon IAM or the IAM model of your cloud provider; it could be OAuth from GitHub; it could be Okta; whatever system of record you trust. The way Vault then enforces it is that when the client makes a request, rather than going directly to the database or back-end system (a database is just one example; it could be any kind of system or application), the request is made to Vault. Vault authenticates that request, do you exist in my records, then grants access to the back-end system and returns a token, and that token is what's given back to the requesting client. The policy associated with that token is defined by the security team, and that is where the handoff, and the recognition that there are multiple parties involved, comes in: I can now give my developers and my ops team a single place to go to access a system or an application, while the policy associated with, for example, how long that credential lasts is defined by the security team. As the security team, I might say: every time this client makes a request, give it a token that lasts one second, or thirty seconds, or one day, whatever that condition might be. That's the basic idea of how Vault helps people address the reality that they are now operating on a low-trust network.
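Vault policies themselves are written in HCL. A minimal sketch of the kind of policy a security team might attach to those tokens could look like the following; the paths here are invented for illustration:

    # Allow clients holding this policy to read dynamic database
    # credentials, and nothing else.
    path "database/creds/readonly" {
      capabilities = ["read"]
    }

    # Explicitly deny access to Vault's own system configuration.
    path "sys/*" {
      capabilities = ["deny"]
    }

The credential lifetime the talk mentions (one second, thirty seconds, one day) is typically set as a lease or TTL on the auth method's role or the secret engine's role, so the security team controls duration separately from what the policy permits.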
Structurally, the same way Terraform has the idea of providers, Vault has the idea of backends. There are auth backends; there's a limited number of authentication mechanisms you might have, and those are all supported in Vault, so that set can be fairly contained. Then there's an essentially infinite number of system backends, maybe for an Oracle database, an SAP HANA database, or an application system. And there's also a transit backend, which allows Vault to act essentially as a certificate authority and encrypt data across your fleet. Without changing the application's workflow, you can now start to encrypt, by policy, everything in that flow. That's how Vault addresses the challenges of running on a low-trust network.

The third layer is the developers, who in this new world have to figure out how to navigate a new way of deploying applications, and frankly their job is easier than anybody else's. The core challenge is that you're going to have heterogeneity at this layer: some Java apps, some C# apps, some .NET apps, some Hadoop apps, some container-based apps, some VMs. The challenge is one that we categorize into two parts. One is separation of concerns: my developer shouldn't have to know where everything is going to run; it shouldn't matter. They just get to say, I want to deploy this application, and this is what this application needs. Number two is essentially bin packing: if I'm spinning up a hundred thousand servers for my applications, I'm paying for all of them, so how can I handle this coarse scheduling function of placing apps to use that infrastructure efficiently? Nomad is extraordinarily widely used to do exactly that. If you have container applications, you may be using something like Kubernetes for them; it's now very common for people to use Kubernetes for some of those pieces, Nomad for some of the other pieces, Cloud Foundry for others, and a Java application server for others still. The point is there will be heterogeneity at this layer.
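To make the scheduling idea concrete, here is a minimal sketch of a Nomad job specification, also in HCL. The job name, image, counts, and resource figures are placeholder values: the developer declares what the application needs, and the scheduler decides where in the fleet it runs.

    job "web" {
      datacenters = ["dc1"]

      group "frontend" {
        count = 3  # run three instances somewhere in the fleet

        task "app" {
          driver = "docker"

          config {
            image = "example/web-app:1.0"  # placeholder image
          }

          # Declare what the app needs; Nomad bin-packs tasks onto
          # machines that have this capacity free.
          resources {
            cpu    = 500  # MHz
            memory = 256  # MB
          }
        }
      }
    }

Nothing in the job says which physical machine runs the task, which is the separation of concerns the talk describes.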
The last challenge, and really the fundamental challenge of this model, is actually knowing where everything is. You're now spinning things up across a distributed environment, and so our most widely used product is in fact Consul. Consul is deployed across essentially every server that runs in these distributed environments as the system of record that tells you where everything is. It really serves a few different purposes. First, it acts as a dynamic service registry that tells you where everything is in this distributed world: when an application gets deployed, here's the latest version of it. For the ops teams, it acts as a dynamic infrastructure registry: where is everything, how many app servers do I have running, how many containers are running in my environment, where's the database? Previously, when I deployed, say, a database into my environment, it would have a fixed IP address like 1.1.1.1. In this new world, because everything is moving around, people use Consul to say: I'm deploying this database, here it is, it's db1. That name then becomes the basis of how you communicate with it rather than the IP. So Consul acts as this common registry and backbone that allows you to establish a mesh of where all the services are in your environment, and as I said, Consul is by far our most widely used product; it's used not just in the container landscape but for fronting mainframes and basically everything in between. And the last use case for Consul is that once you have all the services registered and discovered, so I know I have a hundred thousand databases in my environment, you can then use Consul to enforce connectivity in terms of which service can connect to which service. So for the security teams, Consul plays a role in service connectivity, and that's because the core challenge of how I actually enable applications to see the light of day in this new world is fundamentally a security one. The Consul Connect capability within Consul, which is what allows that connectivity, is what really makes Consul so powerful as a common service mesh underpinning this distributed fabric of compute you've now set up.
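Registering a service with Consul is a small piece of configuration. A minimal sketch of a service definition in HCL might look like this; the service name, port, and check interval are illustrative:

    # Register the database with Consul so other services can
    # discover it by name rather than by IP address.
    service {
      name = "db1"
      port = 5432

      # Health check so the registry only returns healthy instances.
      check {
        id       = "db1-tcp"
        tcp      = "localhost:5432"
        interval = "10s"
      }
    }

Other services can then resolve it by name, for example as db1.service.consul through Consul's DNS interface, and Consul Connect layers the which-service-may-talk-to-which-service rules on top of this same registry.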
So that's how we think about the pieces. The seminal takeaway for me is that we are going through a transition to a different operating model for infrastructure, from static to dynamic, and we think about it in terms of a cloud operating model. We actually very often see that cloud operating model adopted on premises too, because it's a different mindset: it's about ephemeral infrastructure, it's about assuming low trust, it's about dynamic IP addresses, it's about moving to software-based everything. And while it came to bear as a result of this transition to cloud, it's just as relevant on premises. For example, there's a vSphere provider for Terraform, and that's how people apply the model there; people very often use Vault on premises too, because the idea of the low-trust network is simply a good idea. So, number one, we are going through this shift as an industry. Number two, the best way we've found to describe it is to decompose it into the problems faced by each practitioner type inside the IT organization, and in so doing establish a common operating model that allows our customers to adopt cloud.
Info
Channel: HashiCorp
Views: 13,559
Rating: 4.818182 out of 5
Keywords: HashiCorp, HashiCorp Terraform, Terraform, HashiCorp Nomad, Nomad, HashiCorp Vault, Vault, HashiCorp Consul, Consul
Id: Imh6Qucx0Vo
Length: 17min 58sec (1078 seconds)
Published: Tue Aug 07 2018