API Gateway Pattern & Kong in a Microservices World

Captions
Okay, I guess we can start. My name is Marco, I'm the CTO and co-founder of Mashape, one of the makers of Kong, the API gateway. Today we're going to talk about microservices, we're going to talk about Kong and gateways, and how the two really work together. We're also going to show a demo of Kong running on top of Mesos; I've got my environment running, so we're going to see Kong in action, add an API on top of Kong, and see how all of this fits together.

Before I even start: why are we talking about gateways and microservices? What's happening right now? Well, microservices are the software answer to more and more complex demands, to more and more complex requirements that our software has to take care of. APIs got very popular in 2007: you had a mobile app, you had a public developer community, so how do we make these developers consume these applications, consume our API? Sometimes APIs used to be an afterthought: hey, we have to create an API so that developers and mobile apps can consume our system. Then, as we onboard more and more partners and support more and more platforms, software becomes more complex, and in 2014 microservices start this huge trajectory of adoption. Do you know why it's happening in 2014? Well, Docker was released a year before, in 2013, which means Docker and containers are finally giving the tooling for everybody else, for the rest of the world, to engage in this transition to microservices. Some companies have done it before, think of Amazon, think of Netflix; some companies were progressive enough to adopt microservices before this tooling was in place. But really, containers are what enabled the rest of us to adopt microservices and move forward with this new way of building software.

Somebody said that moving from monolithic to microservices is like moving from a chicken to Chicken McNuggets, right? You have this whole chicken, your monolithic app, and you're transitioning now to smaller components that work together with each other. And during this transition, which is a very painful transition that pretty much every enterprise is doing right now, you have to keep the chicken alive. You can't kill the chicken: you need to keep supporting your clients, you need to keep your applications running; you have to engage in this transition while keeping your system working.

There are different strategies to move to microservices, three main strategies I would say. I've been talking with lots of people, lots of companies who are approaching this transition now or have approached it before, and usually it comes down to three options. There is the ice cream scoop strategy: you have this huge ice cream cup, and you extract from your monolithic app individual components and microservices that can now live, be built, and be scaled separately. Then there is the nuclear bomb strategy: some companies are deciding that the monolith is not a solution anymore, so how about we just rewrite the entire software in microservices? They get rid of their monolithic app and transition one hundred percent to microservices. And then there is what I like to call the legacy macroservice: you only build new technologies, new software, new products in a microservice-oriented architecture, and you keep your legacy monolithic app there, as a gigantic service effectively, that works together with the new microservices that new teams are creating.
By the way, transitioning from monolithic to microservices is not just a technological change, it's an organizational transition. When you're making this transition to microservices, you are also changing how your teams are built, how your teams work together, and how your organization communicates. You effectively transition from large teams to smaller pizza teams, and these smaller teams can experiment with different technologies, but effectively they're working on separate codebases, and they need to publicize to the other teams an easy-to-use interface for them to understand. Think of Uber: Uber used to have a huge monolithic app that was baking together all of these different features and functions in one codebase, effectively, and then they decided to extract all of these into thousands of separate microservices that now operate independently from each other.

Running microservices, separating these monolithic apps into smaller components, is really a little bit like running a city. You need roads for these microservices to use to communicate, you need fire departments, you need security, police departments; you need an infrastructure in place that can make you successful with this new way of building software.

Since this is one of those transitions that everybody is doing right now, I have it here on my slide deck, so we can do it together. We start from a monolithic app with all of these components built into one codebase, and then one team comes in and decides to extract one of these services outside of the monolithic app. So we now have this "items" microservice that lives separately from all the other components. And guess what: once you have this microservice, it doesn't live in a vacuum. You need that infrastructure, that city, in place in order to be successful, so the same team will come in and build more and more logic that's complementary to this microservice's success: you build security features, authentication features, you build logging, transformations, you build all of that stuff. Then another team, or maybe the same team, goes ahead and extracts another microservice, and now you end up rebuilding over and over again these complementary features that every microservice has to implement, with lots of fragmentation across the board: lots of duplicated features, lots of duplicated codebases that will eventually create problems down the road.

So gateways, API gateways, can help in two different ways. Number one, they can help by becoming an abstraction layer that sits on the execution path of every request going to one of these microservices, and centralizes in one place all of those common features that otherwise each team or each microservice would have to implement: think of authentication, security, logging, transformations, service discovery, and so on. In this case the gateway is Kong, and we call those features plugins. Plugins are effectively middleware functionality that you can dynamically apply on top of any microservice behind your Kong cluster.
Another use case for API gateways is aggregating and collapsing different responses into one response. When you have a microservice consuming other microservices, sometimes you'll have to make requests to more than one upstream service, and so the gateway can become the abstraction layer that you put in front of your microservice-oriented architecture in order to collapse those responses into one response. The client makes one request, but then the gateway itself triggers other requests in your infrastructure and returns one response. That's especially useful if you want to optimize for bandwidth and for size, because you do not have to trigger multiple requests and keep track of the state of those requests from your clients; the gateway does that for you.

Gateways are also being used for a third use case, which is decoupling that monolithic app under the hood without having clients deal with those changes. Assuming you have a client consuming a monolithic app, and assuming you are decoupling that monolithic app, the client needs to know where to make those requests. The gateway can be the curtain you put in front of your monolithic application: the client deals with the gateway only, and you can decouple your monolith behind the curtain without having to worry about updating your clients. This is especially useful if you do not control your clients.

When you think of gateways, you really think of something that sits at the edge, because that's what API gateways used to do. Back in the day you had your monolithic app, you had your API, and APIs were sometimes an afterthought; the API management solutions you would adopt were monolithic black boxes, effectively: closed source, hard to extend, and hard to scale, in a way. But then with containers and microservices something happened. First of all, the topology of our traffic is increasingly behind the firewall, not just outside of it. You have critical information flowing behind your firewall, and you cannot put there a black box you can't control; you want something you can extend, something that can scale alongside your microservices on pretty much any containerization platform, Mesos for example. You still have the external client use case, but that becomes just one of the many clients that are now consuming those microservices. The reason we have this increase of internal communication is straightforward: microservices have to communicate with each other in order to function. A monolithic app doesn't have this problem; everything is in the same codebase, so you don't really have to go over the network most of the time to do what you have to do. Effectively, this changed how gateways are being used internally.

Again, when you think of gateways you usually think of a centralized layer, an extra hop in the network that processes these additional features, but that doesn't necessarily have to be true. You can also run a system like Kong alongside your existing microservice process, effectively getting rid of that extra hop in the network and reducing latency. Now, latency is another important factor. Back in the day, if that solution added a hundred or two hundred milliseconds of latency on top of your requests and responses, that was not ideal, but it was something you could live with. With microservices, every latency compounds: a single client request may fan out into many internal hops, so if the gateway adds, say, 50 milliseconds on each of five hops, that's 250 milliseconds of added latency. At the end of the day you will end up with an enormous latency if you don't take that into account immediately.
So I can show you how Kong can implement sub-millisecond latency on most of these features. Microservices can serve internal, private communication, or external communication, with partners for example, or with a public developer community; or maybe you're starting to adopt functions as a service, serverless functions, with AWS Lambda or IBM OpenWhisk. Kong supports all of these use cases: you put it in front, and Kong handles for you all of those common features you need to execute on every request. Effectively you are reducing the fragmentation of your system; you're moving from the picture on the left to the one on the right, putting all of these functionalities in one logical place.

So let's talk about the concrete technology. What is Kong? Kong is an open-source API gateway; it's the most widely adopted API gateway right now, with more than 300,000 running instances per month across the world. It's built on top of nginx, and nginx is a very solid foundation for us; chances are, if you have an API, you're already using nginx. It's an nginx process that starts up and is then extended with all of the gateway features. We call those gateway features plugins, and plugins are middleware features you can apply dynamically on top of any API or microservice behind Kong. You do that with an admin API that Kong provides: a JSON RESTful API that allows you to provision new services on top of Kong, new consumers, new credentials, and new plugins, in a dynamic way. It doesn't matter if you have one Kong node or a hundred Kong nodes across five different data centers; the admin API will eventually propagate all of this information to every Kong node, without you having to restart, reload, or reconfigure those nodes. Kong caches most of the dynamic information it deals with in-process, in memory, so after Kong warms up (and I'm going to show you this later in the demo) Kong will cache all this information in the process, and for most use cases we achieve sub-millisecond processing latency on top of those requests.

We support pretty much every containerization platform. Kong is cloud native, with native support for Mesos and DC/OS, and we also have an official Kong package on Universe if you're using Mesosphere, so it's extremely fast to get started.
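As a rough sketch, that one-click install looks something like this from the DC/OS CLI (assuming the package is published on Universe under the name "kong"; exact options may differ between Universe versions):

    # Find the Kong package on the Mesosphere Universe
    dcos package search kong

    # Install Kong into the DC/OS cluster
    dcos package install kong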
Kong also comes in two different flavors, because we deal with enterprise use cases across the board. When you have an API, or a microservice-oriented architecture, it doesn't really matter what industry you're working in: whether you're in healthcare, IoT, government, or banking, you will end up with the same set of use cases that everybody else has. Kong is being adopted by these customers all across the world, in four different time zones (Asia-Pacific, US West, US East, and Europe), to help deal with these problems.

So let's take a look at plugins. What are plugins? Kong is built on top of nginx and Lua, on a framework called OpenResty, and plugins are Lua code, effectively, that hooks into the lifecycle of every request and every response and executes some sort of operation. A plugin can also change how the request is being made, and it can change the response. There are plugins of every kind: authentication, security, logging. Take authentication plugins, for example: you have an API and you want to start authenticating requests with a third-party OpenID Connect provider, or you want to implement OAuth 2.0 authentication on top of that API; you let Kong do that for you by installing those plugins on top of your microservice. Maybe you want to start reacting to events and trigger AWS Lambda function invocations or IBM OpenWhisk actions; you can do that on top of Kong too, and then add other plugins to your execution lifecycle to protect, secure, and rate-limit how those functions are invoked.

And this is really how you use plugins: you've got the admin API, which allows you to configure the system. In this example here, I'm applying the rate-limiting plugin on top of one specific API, identified by its ID. Every plugin has its own configuration, and in this case I'm telling Kong I want to allow 10 requests per second and 50,000 requests per hour. I execute this one request, and now Kong will dynamically apply this plugin on top of my API across every node. Effectively, if you have many data centers, you have just applied that distributed rate-limiting feature across your entire cluster.
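A minimal sketch of that admin API call, assuming a Kong 0.x-style admin API listening on port 8001 ({api_id} is a placeholder; the config field names follow the rate-limiting plugin's documented configuration):

    # Apply the rate-limiting plugin to one API, identified by its ID
    curl -i -X POST http://localhost:8001/apis/{api_id}/plugins \
      --data "name=rate-limiting" \
      --data "config.second=10" \
      --data "config.hour=50000"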
Kong is platform agnostic; it runs pretty much anywhere, and most importantly it can run in hybrid environments. We have come across customers that are transitioning to containers, and not everybody is as progressive as the people here at this conference: some teams are slowly transitioning to containers, some are still stuck with SOAP, and they want to go from SOAP to REST. The world is ten years behind what we are discussing here, but the world is moving there, so that's good news. Some companies, for example, want to run hybrid clusters of Kong: they are transitioning from bare metal to the cloud, or from bare metal to containers, or maybe they are playing with different containerization platforms, and they can spin up a Kong cluster that runs across the board and shares data across all of these deployments. If you're using Mesosphere and DC/OS, it's extremely fast to get up and running with Kong: you just search for Kong, it's a one-click deployment, and you're good to go.

So let's take a look at the architecture of Kong. Kong is a stateless server: you can add Kong nodes and remove Kong nodes and nothing will happen, because the state is stored in either Cassandra or Postgres; you can choose, it's either/or. We recommend Cassandra for distributed and more complex use cases, and Postgres for simpler use cases. The reason is that Postgres is a master/slave datastore, so if you're running a Kong node in another data center, that node has to go all the way to the master DC of Postgres in order to write some information. With Cassandra, instead, you have an eventually consistent, distributed datastore; it's masterless, effectively, so you can write to and read from any node, and Cassandra itself takes care of replicating this information across the board. As long as your Kong nodes communicate with the same Cassandra keyspace, or the same Postgres database, you're good to go: they will all share the same data.

Now, the trick is that Kong doesn't make a request to the datastore on every request that comes in. It only queries the datastore the first time, and then it caches this information in memory, in the process, which means that after Kong warms up it won't have to go to the datastore again; you can even tolerate a datastore failure while Kong keeps serving from memory. If you're changing the same data from a different Kong node, Kong will propagate an invalidation event to every node, telling each node: hey, this data has changed, the local copy you have in memory is not valid anymore, re-fetch it from the datastore.

Kong is built on top of nginx and OpenResty, and it's a very performant architecture. We're running Lua code on top of LuaJIT (LuaJIT is a very fast C implementation of the Lua virtual machine), and we can embed that into nginx to script how nginx works. On top of that we have added support for clustering and the datastore, so once you start a Kong node you don't have to worry about restarting, reloading, or reconfiguring it; it's all dynamic. Then there is support for plugins: plugins are written in Lua, and basically, when a request comes into nginx, the plugins take over that request and change it if they have to, and then Kong proxies it to the upstream service; when the response comes back, a plugin can again take ownership of that process, change the response if needed, and return it to the client. And on top of all of this we've got the admin API, which allows us to configure the entire system: you can provision APIs, plugins, and credentials.

Kong can run in two different modes. You can decide, for example, to use Kong as your authentication store, so Kong stores all the credentials: if you're using OAuth it stores the access tokens and refresh tokens, if you're using basic auth it stores the usernames and passwords. But that's not always ideal, so you can also use Kong in a different mode, leveraging a third-party authorization server to authenticate your requests: for example, any OpenID Connect-compliant provider, or any provider compliant with the OAuth token introspection endpoint.

When Kong starts, it listens on a few ports by default. These ports are divided into proxy ports and admin API ports. The proxy ports are the ports your clients consume when they want to reach an upstream service, so these are the ones made available either to other microservices or to external clients. The admin API is the API you're going to use for configuring the system, and of course you will firewall these ports to prevent external access in production.
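For reference, a sketch assuming the stock defaults of Kong 0.x (8000 and 8443 for proxy traffic, 8001 for the admin API; your deployment may remap these):

    # Proxy port: what clients and other microservices consume
    curl -i http://localhost:8000/

    # Admin API port: configuration only; firewall this in production
    curl -i http://localhost:8001/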
When dealing with Kong, we're going to deal primarily with a few entities: APIs, plugins, and consumers. These are the core entities that sooner or later you will have to deal with when using Kong. But plugins themselves can extend the underlying DAO model of Kong, extending the system with new entities you can use. For example, if you decide to adopt the key authentication plugin, you'll now have a way, on the admin API, to provision and store those key credentials; and if you use OAuth 2.0, you will have APIs that let you create OAuth 2.0 applications, tokens, and authorization codes.

All right, so I've got my demo environment running. It's an Apache Mesos 1.3.0 cluster on Amazon EC2, a very simple setup: one master, two slaves. These are the URLs I'm going to be using when configuring the system: we've got Marathon on port 8080, and then we've got the Kong proxy URL and the Kong admin URL on two different ports. So I'll go ahead now and load my terminal. First of all, if I go here in my browser, this is Marathon, and we can see the Kong application running; if I click in here, we can see there are two different containers running: Kong itself, and then Postgres. In this demo we're going to use Postgres for storing all of the Kong state. We only have one Kong node, listening on two different ports. This is the proxy port: if you make a request to this port, Kong is empty right now, it doesn't know where to proxy the request to. And the other port is the admin API of Kong: this is the default index page for the admin API, and if we go to /apis we can see there are no APIs configured in the system.
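As a rough sketch of those two checks from the terminal (KONG_HOST is a placeholder for the demo node's address, and the exact response bodies vary between Kong versions):

    # A request to the proxy port before any API is configured
    curl -i http://KONG_HOST:8000/
    # HTTP/1.1 404 Not Found
    # {"message":"no API found with those values"}

    # The admin API confirms nothing is provisioned yet
    curl -s http://KONG_HOST:8001/apis/
    # {"data":[],"total":0}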
So I'll go ahead and provision a new API on top of Kong, then secure it, and then rate-limit it using Kong plugins. You can see the list of Kong plugins on the website; these are the ones that come bundled with the system, so they are immediately available to be used. But you can also extend the system with your own plugins: there is a guide in the docs for building new plugins, like the ones you're seeing here in the plugins hub. The ones listed here are the most common, most used ones, but you can also go to the community, on GitHub for example, and find over 200 contributed Kong plugins to deal with pretty much any sort of use case. And you can build your own plugins for your own internal requirements: for example, there are many users who have to deal with legacy authentication systems or legacy transformations, and you can build plugins, available only to you and your system, that handle that legacy use case. It's highly extensible; you effectively control everything about this gateway layer.

Okay, so let's go ahead and provision an API. For this demo I'm going to use httpbin.org; I don't know if you're familiar with this service, but it's a cloud service that provides some endpoints we can consume to retrieve information about the requests we're making. For example, there is a /get endpoint which returns a JSON response with the headers the client is sending; we can use this for debugging what's happening between our client, Kong, and the final API.

So let's make a POST request to provision our first API on top of Kong. Effectively, what I'm doing here is telling Kong: create a new object, whose name is "httpbin", whose upstream URL is httpbin.org, and whose mapping is /test. When a new request comes into Kong, Kong needs to understand which upstream API we're trying to consume, and we can do that in different ways: we can create a URI mapping, a host mapping, or an HTTP method mapping, or any combination of these three to create custom mappings. Effectively, by doing this I'm telling Kong that every request on /test has to go to this httpbin.org API. Kong can work very well with existing service discovery tools, or you can use Kong's own built-in discovery for resolving those hostnames dynamically. For the demo I'm just putting Kong in front of a public API, but you could put it in front of any internal or external API as well.

So if I make this request, we've got our response: Kong has provisioned the API in the system, and it's now ready to be consumed. If I make a request to the other port, I can now consume that API. Let's see how it works. If I just make a dumb request like this one here, without any specific URI, Kong will complain that no APIs are found with those values; even with this one, there is no API that matches that specific mapping. But if I make a request on /test, which is the mapping we've created, /test/get for example, Kong will understand that we're trying to consume the httpbin API: it will append the /get endpoint to the base URL we have configured, strip out the /test, and then reverse proxy to httpbin. So this is Kong proxying to httpbin, and we can see a few latency headers: 11 milliseconds for Kong, 9 for the upstream httpbin. If we keep making requests, the Kong latency will eventually go down to zero; this is the Kong in-process caching mechanism taking effect. The first time, Kong needs to ask the datastore which API we're trying to consume; it caches this in the memory of the process, and then it doesn't have to go to the datastore ever again.
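Roughly, those two steps look like this (a sketch assuming a Kong 0.11-era admin API; older versions used slightly different field names, and KONG_HOST is a placeholder):

    # Provision the API: a name, the upstream URL, and a /test URI mapping
    curl -i -X POST http://KONG_HOST:8001/apis/ \
      --data "name=httpbin" \
      --data "upstream_url=http://httpbin.org" \
      --data "uris=/test"

    # Consume it through the proxy port; /test is stripped before proxying
    curl -i http://KONG_HOST:8000/test/get
    # Response includes latency headers such as:
    #   X-Kong-Proxy-Latency: 11
    #   X-Kong-Upstream-Latency: 9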
Okay, great: now we have an API in Kong, a simple reverse proxy. Let's start using some plugins to enhance what this API can do. We can go to the plugins list, for example, and decide we want to protect this API with an API key. We apply this plugin by making a POST request to the httpbin API's /plugins endpoint, telling the system we want to install key-auth; every plugin can be configured in a very specific way, but thankfully for us there is no mandatory configuration parameter besides the name, so that will be enough to protect this API. So let's go ahead: perfect, now there is a plugin installed on top of that API. If I consume it again the same way I did before, this time Kong will complain that there is no credential in the request.

So let's provision a consumer, and a credential for this consumer. Think of a consumer as anything that can consume the API: it can be a developer, another microservice, or a mobile app. I'll create a developer called "demo" in this case, and this demo user will have a key credential, "secret123". As you can see, I'm effectively making requests to the admin API of Kong; it's a regular HTTP request. This also allows us to integrate Kong with our continuous integration systems, with our existing applications, or with scripts; you can automate how Kong is configured across the entire cluster. It's just a JSON RESTful API, so whatever can make a request to this API can also configure the system.

Now we have a consumer, an API, a plugin installed, and a credential, so we can consume our API by appending the API key we have just created, secret123. If we do this, Kong will validate the key, validate the consumer we're trying to use, and proxy the request to the upstream service. If I use a different key, one that does not exist, Kong will of course block me. Again, it doesn't matter if I have one Kong node or a hundred Kong nodes across five different data centers, across five different clouds, on top of Mesos for example: all of this information is propagated dynamically; you don't have to worry about it.

So let's now rate-limit how many requests this consumer can make. For that we can pick the rate-limiting plugin, which is here. The rate-limiting plugin gives me a few configuration options: effectively I can configure how many requests per second, per minute, per hour, per day, per month, or per year I want to allow, and I can rate-limit by IP address, by consumer, or by credential; that part is a little more involved, but let's go ahead and rate-limit at, let's say, five requests a minute. I'm adding a new plugin, rate-limiting this time, on top of the same API. I consume it with my key like I did before, and this plugin goes into effect: it is dynamically fetched and loaded, and it limits how many requests I can make. It also returns some response headers telling me the total number of requests I can make and the number of remaining requests. If I make more requests, we can see the counter decrease, and if I make more than five a minute, the system will block my requests.
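Put together, the whole demo sequence looks roughly like this (again a sketch against a Kong 0.x admin API; KONG_HOST is a placeholder, and "apikey" is the plugin's default parameter name):

    # 1. Protect the API with the key-auth plugin (no extra config needed)
    curl -i -X POST http://KONG_HOST:8001/apis/httpbin/plugins \
      --data "name=key-auth"

    # 2. Create a consumer and give it a key credential
    curl -i -X POST http://KONG_HOST:8001/consumers/ \
      --data "username=demo"
    curl -i -X POST http://KONG_HOST:8001/consumers/demo/key-auth \
      --data "key=secret123"

    # 3. Consume the API with the key (query string or "apikey" header)
    curl -i "http://KONG_HOST:8000/test/get?apikey=secret123"

    # 4. Rate-limit the same API to five requests per minute
    curl -i -X POST http://KONG_HOST:8001/apis/httpbin/plugins \
      --data "name=rate-limiting" \
      --data "config.minute=5"
    # Responses now carry X-RateLimit-Limit-Minute and
    # X-RateLimit-Remaining-Minute headers until the limit trips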
Now, this is a very simple demo: I'm securing the API and rate-limiting the API. It can get way more complex if you want it to. Maybe we have consumers that should be able to make more requests, or internal clients that should be blocked on specific URIs; we can stack plugins together as we wish, to create policies that are then applied on the execution path. I believe we don't have much more time, five minutes, okay. Among the plugins you can use with Kong, besides authentication and security, you can also apply request termination, and you can apply serverless invocation: let's assume you want a RESTful interface for your AWS Lambda function or IBM OpenWhisk function; you can do that with this plugin, and then stack other plugins on top to secure and rate-limit how many requests these clients can make. You can also plug this into monitoring and analytics solutions: if you have Splunk, for example, or the ELK stack (Elasticsearch, Kibana), you can use any of the logging plugins to push to those systems information about every request and every response, so you can keep track of what's going on across your microservice infrastructure.

We also support dynamic load balancing and service discovery. If you use Kong, you can actually tell Kong to be the dynamic load balancer for your upstream microservices: the admin API gives you endpoints to add and remove target nodes from a named upstream service that you can then use on top of Kong.
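A sketch of those load-balancing endpoints, assuming the upstreams/targets API introduced in Kong 0.10 (the upstream name and target address here are hypothetical):

    # Create a named upstream and register target nodes behind it
    curl -i -X POST http://KONG_HOST:8001/upstreams/ \
      --data "name=items.v1.service"
    curl -i -X POST http://KONG_HOST:8001/upstreams/items.v1.service/targets \
      --data "target=10.0.0.15:3000" \
      --data "weight=100"

    # Point an API at the upstream name; Kong balances across its targets
    curl -i -X POST http://KONG_HOST:8001/apis/ \
      --data "name=items" \
      --data "upstream_url=http://items.v1.service" \
      --data "uris=/items"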
And you can create your own plugins, again, and find plugins in the community as well. What you're seeing here is a static page; what we're doing right now is turning it into a searchable hub, so you'll be able to push your own plugins to the system and to fetch and search for other people's plugins to use on yours.

And this is it. We do have a booth in the hallway, so if you want to learn more, or see a live demo, we can show you one there; there are a few folks from Kong as well. I'll take your questions. Thank you.

Q: Can you put Kong in front of your user interfaces as well?
A: Yes, you can. Kong is an HTTP proxy that supports any HTTP or HTTPS service, so you can use it for your web applications too.

Q: What practical limitations are there to what you can do in a plugin? What do you recommend you don't do? Could I go and host my entire web app as a plugin in Kong, and why shouldn't I?
A: You could. As a matter of fact, you could use Kong as your functions-as-a-service platform: a plugin is something that can also terminate a request with a custom response, so if you think about it, you can create functions, in the form of plugins, that are invoked over an HTTP interface and run on top of Kong. The nice thing is that you're running on a very efficient architecture, which is nginx and Lua. You can also synchronously or asynchronously communicate with third-party services inside a plugin: you can receive a request and, leveraging the asynchronous I/O of nginx, make a request somewhere else, aggregate multiple requests, and then return a custom response with whatever transformation you want to do in the meanwhile. You could pretty much do anything; you could even open new UDP or TCP ports in a plugin. We don't recommend that, but you could do it.

Q: We looked at configuring the admin API, and that's how you configure a whole Kong cluster. Is there any sort of authentication mechanism or role-based access control you can find for Kong to protect the admin API?
A: Yes. The admin API should be protected from external access, because anybody who has access to it can mess with the configuration. We have two different editions of Kong: there is the community edition, which is what you've seen here; you can download it and use it, it's free, there are no limitations. And there is an enterprise edition, which extends Kong with enterprise features; among those, we have role-based access control for the admin API, so you can control exactly which users and which teams can access the admin interface and keep everyone else from messing with it. The enterprise edition of Kong is also a full-cycle API management and microservice management solution, so we also provide developer portals for your teams and for external consumption, analytics, and so on and so forth.

Q: Last question: how does Kong integrate with something like Marathon, or is it just up to you to orchestrate both of them at the same time?
A: Kong is shipped as an official Docker image, which you can use pretty much anywhere. What we have done is create a specific integration for DC/OS, and you just use Marathon to schedule your Kong cluster like you would any other Docker container. The system itself can run anywhere; it's very simple, it's stateless, and you can spin up as many Kong nodes, as many replicas of your Kong containers, on top of Mesos, just by using Marathon or DC/OS. It's straightforward, and Kong integrates with that. Kong can also auto-discover your upstream services; we do have support for that. Thank you.
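For completeness, a minimal sketch of running that official image by hand, assuming a reachable Postgres and the documented KONG_* environment variables of the 0.11 era (hostnames are placeholders):

    # Run schema migrations once against the shared database
    docker run --rm \
      -e "KONG_DATABASE=postgres" \
      -e "KONG_PG_HOST=postgres.internal" \
      kong:0.11 kong migrations up

    # Start a stateless Kong node; add replicas the same way to scale out
    docker run -d --name kong \
      -e "KONG_DATABASE=postgres" \
      -e "KONG_PG_HOST=postgres.internal" \
      -p 8000:8000 -p 8443:8443 -p 8001:8001 \
      kong:0.11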
Info
Channel: The Linux Foundation
Views: 58,691
Rating: 4.8952236 out of 5
Keywords: containers, microservices, apache, Apache software foundation, mesosphere, open source, cloud computing, docker, spark, dcos, cloud native, mesos, kafka
Id: OUUiS28hZuw
Length: 41min 45sec (2505 seconds)
Published: Wed Sep 20 2017