An Introduction to Orleans

Captions
Hello and welcome to this Reactor session, an introduction to Orleans. I'm looking forward to talking to you about Orleans, which I think is one of the most innovative and exciting projects to have come out of Microsoft Research, and I know Ben and I are very keen to talk about it and the experimental work we've done with it.

Before we explore that, a quick introduction. Hi, my name is David Gristwood and I'm a Cloud Solution Architect working for Microsoft.

Hi everybody, my name's Ben Coleman. I'm also a Cloud Solution Architect at Microsoft, on the same team as David. I've spent a lot of time working with David on this particular project, exploring some of the techniques around microservices and Orleans.

Cool, so let's get going. This is a session in four parts, and we're going to alternate between the two of us. I'll start by talking about Orleans and give a little bit of background, because there's a lot to cover conceptually. Then Ben will take over and talk about Smilr, a microservices showcase that he's been working on for quite a while now. We'll then switch tack slightly: I'll explore how we took those ideas from Orleans, plugged them into Smilr and built part of the service using Orleans, and finally Ben will talk about how he took that implementation and deployed it into Kubernetes. That's the journey we're going to go on over the next 50 minutes or so.

So let's start with an introduction to Orleans, because I'm going to guess, based on my experience, that not a lot of people are aware of this project and its background. It's very interesting, very exciting and very innovative, so we want to set expectations: we don't assume any prior knowledge of it.

We can't talk about Orleans without talking about the actor model; that's what it's all based on. The actor model is a programming model, and I have to say this up front: like any programming model, such as CQRS or the Lambda architecture, you have to very much buy in to it. All these models tend to be suitable for certain types of architecture and not for others, so picking the right kind of architecture is critical to success. For example, Ben and I have worked on CQRS projects; CQRS introduces a layer of complexity, but the idea is that in complex systems the benefits it brings outweigh that complexity. The actor model was designed to address some of the issues involved in building and running highly distributed, highly performant, concurrent systems.

Interestingly, it has its roots in a paper published in 1973 that was actually more influenced by hardware. The original idea was to imagine thousands or tens of thousands of little microprocessors, each with its own memory, with a bus gluing them together and messages being sent between them. The idea was that if we adopted this sort of model we could really scale out, and indeed there have been several attempts at this: anyone who's been around the industry a while may remember the Inmos Transputer, and I used to program in a parallel programming language, all trying to address the same idea of how you build a system that can scale out using messages. In its early years, I think it's fair to say, the actor model languished a bit and remained more of an academic discussion of how you might build such systems. But with the cloud and the falling cost of processing and memory,
there's been a desire to revisit it, and a lot of interest more recently. Ericsson picked the idea up for Erlang, though that was more of an internal project, and a few people may have come across Akka as a name associated with actors. They're all trying to do the same thing: take this concept of actors and make it real.

In the actor model, every actor has its own state and communicates with the outside world via asynchronous messages. Messages flow across the system, and actors receive them and, based on their own internal logic and local state, make decisions. Perhaps a sensor sends in data and the internal state needs to be updated, or perhaps, as a result of something happening, more actors need to be communicated with and the message needs to spread out. That's the model the academic work was exploring. Its goal was to reduce complexity in complex systems, because the sorts of systems actors are really envisioned for are big, complex systems that are hard to write and hard to scale well. By building an environment that avoids deadlocks, avoids the issues around reentrancy and implements asynchronous communication, we can have systems that scale to thousands or tens of thousands of actors and perform well. It has also received a lot of interest for performance reasons, and this is what we'll be discussing today: instead of the middle tier in an app being used simply to wrap CRUD-type calls to a back-end database, let's actually implement some of that logic inside the actor.

Orleans is an implementation of this model. It is both a programming model, in that there's an API and an implementation, which we'll see for real here, and a runtime, an environment for these actors to live in. The two together are the winning combination: a very attractive, clean programming model with an excellent, high-performance implementation. It came out of Microsoft Research, it's open source, and while it started on the traditional .NET Framework it's now on .NET Core, which is really where it's coming into its own given its ability to run on Linux, Windows and macOS. It makes no prescriptions about where it can run: you can run it, as they say, on bare metal, in VMs, on-premises or up in the cloud, and more recently there's been a lot of interest in running it in Kubernetes, which is what Ben will explore towards the end of this session. Initially its usage was mainly inside Microsoft, and I'll come back and talk a little bit about Halo, but it's also been used by a lot of big-name companies with an interest in these high-performance systems.

In Orleans terminology a grain is an actor, so we'll tend to use the terms interchangeably, and it is indeed the building block for all the logic. On the right-hand side, if you've got an interest in this, there's a paper that is really interesting; I've read it many times. It was published in 2014, and I worked with the team in 2015 when we were running an early adopter program. It's a rigorous academic paper that describes the whole background to this; in this session we're going to pick elements of it and explore them from a programming viewpoint.

One of the key things that's interesting about how the team implemented the actor model is the idea that grains always exist: they're virtual. You don't need to think about their creation and destruction; they're just there for you whenever you need them, and that
simplifies the programming model, though it does have some interesting side effects you need to be aware of. In line with the goals of the actor model, grains hold internal state, send and receive messages, and only come to life when a message comes in. They also have some more advanced features, like timers and a pub/sub model, which make it easier to get notified when things happen in the outside world that should trigger actions.

I said it was a programming model, so one of the obvious questions is: when would you use it? At one end of the scale, I certainly wouldn't build an accounting package in this, and I wouldn't build a database system. What I've found is that when you have a model of something that is autonomous and independent, with its own behaviours, Orleans is potentially a really good fit. IoT devices, for example, are a great case. You can imagine a world in which there is a one-to-one mapping between an IoT device generating data and a grain inside Orleans receiving that data, holding it locally and potentially making decisions based on it. Take a temperature sensor sending readings in: if the temperature exceeds a certain threshold for a certain period of time, the grain can initiate some action by calling another grain that raises an alarm or kicks in a cooling system.

In Halo, on the right-hand side, you'll see from that paper an example of how Orleans was used in the presence system. If you're a Halo 4 fan you may remember Waypoint, the app that let you see what was happening in the game and what your standing was. If you joined a game, say red versus blue with 32 players, a grain would be created that represented the game, and a grain for every player. As updates were received from the game in real time via a heartbeat call, they'd be cracked open, routed to the right actor, and the information would be disseminated to each player grain and reflected in the web UI. So if someone left, someone joined, or there was a headshot kill, for example, all of that would be handled inside the grains. It's a great model because each player and each game has a degree of autonomy but contributes to an overall behaviour.

In terms of working with these grains: grains live in silos, and that's the runtime aspect of what the Microsoft Research team built. You can see in the diagram a number of grains living in three different silos. Orleans hides the complexity of the placement of these grains: when you make a call from one grain to another, if the target sits in the same process the call is handled directly; otherwise it's serialized, moved across the network and deserialized, and that's all done for you transparently. The runtime also garbage-collects grains away when they haven't been used for a while, and it continually monitors the health of the system, so if there's a failure it will take action to reinstate things; we'll be exploring that. You'll also see on this diagram that there tends to be a wrapper that protects the grains from the outside world. We use the terminology of a client: that's the only thing that's allowed to make calls into the silo, and typically we wrap it behind, say, a web API; you'd never typically expose the silo directly onto the internet. So in our model we'll talk about an MVC web API taking calls from the outside world, validating them, transforming them and then making client calls into the silo system.
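As a concrete illustration of that IoT scenario, here's a minimal, hypothetical sketch of a per-device grain that calls an alarm grain when a threshold is exceeded. The interfaces, keys and threshold are all illustrative, not part of Smilr or the Halo system, and the "for a certain period of time" logic is simplified to a single reading:

```csharp
using System.Threading.Tasks;
using Orleans;

public interface ITemperatureGrain : IGrainWithStringKey
{
    Task ReportReading(double celsius);
}

public interface IAlarmGrain : IGrainWithStringKey
{
    Task Raise(string deviceId, double celsius);
}

public class TemperatureGrain : Grain, ITemperatureGrain
{
    private const double Threshold = 40.0; // illustrative threshold

    public async Task ReportReading(double celsius)
    {
        if (celsius > Threshold)
        {
            // Grain-to-grain call: Orleans routes it transparently,
            // whether the target grain lives in this silo or another.
            var alarm = GrainFactory.GetGrain<IAlarmGrain>("building-1");
            await alarm.Raise(this.GetPrimaryKeyString(), celsius);
        }
    }
}
```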
From a programming viewpoint it's helpful to think of a grain as a combination of an identity, which is its key, because every grain must have a unique key; a behaviour, which is an interface that describes the methods, plus the methods themselves; and potentially state, which may need to be saved at certain critical points to ensure the correct behaviour of the system. That's a nice way of thinking about the design of a grain.

Because everything is message-based, everything is rooted in interfaces, so you define your interface as the starting point and derive it from one of the Orleans-defined ones, which indicates what kind of key will be used to address the grain. Here, with IGrainWithStringKey, I'm saying that a string is going to be used as the key. Then I define my methods, and everything has to have a Task in front of it. When I first started playing with this, that was the thing that took a while to get my head around: everything has to be asynchronous, everything returns a Task, whether it wraps a simple result or a list of something. That's enforced to make sure that everything inside the grain is asynchronous.

This is the world's simplest grain, the moral equivalent of a "hello world" grain: we basically have the ability to send a message and retrieve the list of messages received so far. The things of note here are, at the top, a store for the internal messages, so every time a message arrives it's added to that private list, and then a WriteStateAsync call that's used to persist that state. That gives us a checkpoint, so if the grain needs to be garbage-collected away, or there's a failure in the underlying hardware, we've got a checkpoint we can recover from.

From the client's viewpoint (remember, the client in this case is the web API), it reaches into the silo with a GetGrain call and passes the key of the grain, a user's name in this example. Once it has a handle to the grain, it makes asynchronous calls into it, and the underlying infrastructure handles all the heavy lifting of routing that request to the appropriate grain, bringing it to life if needed, letting it process, and returning the result. That's the constant cycle we find with grains.

Finally for this part, I've mentioned state a couple of times. There are some scenarios where you don't need to persist state: if something happens to the grain, the state is ephemeral and we don't need to worry about it. But in a lot of cases you do care about the internal state, and making sure it's resilient and robust is part of the programming design. On the left-hand side you'll see the diagram that describes how grains come to life. The lifecycle is: when you reference a grain that has never been seen before (because these things are virtual and always exist), Orleans will automatically create it, bring it to life, initialize it, and then it's ready to receive your calls. At the other end of the scale, if a grain has been hanging around for a while and hasn't been used, for performance we want to free it up to get the thread and the memory back, so we persist that grain to disk. That all happens silently in the background, so you can view an estate of millions of virtual grains and interact with them without thinking about the lifecycle; in the background Orleans is managing it to help you create these really big, scalable systems.
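To make that concrete, here's a minimal sketch of the kind of "hello world" grain just described, assuming Orleans 3.x; the interface, state class and key are illustrative, not the exact code from the slide:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Orleans;

public interface IGreetGrain : IGrainWithStringKey
{
    Task SendMessage(string message);
    Task<List<string>> GetMessages();
}

// The state class lists the member variables to be persisted.
public class GreetState
{
    public List<string> Messages { get; set; } = new List<string>();
}

public class GreetGrain : Grain<GreetState>, IGreetGrain
{
    public async Task SendMessage(string message)
    {
        State.Messages.Add(message);   // update the in-memory state
        await WriteStateAsync();       // checkpoint it via the storage provider
    }

    public Task<List<string>> GetMessages() =>
        Task.FromResult(State.Messages);
}
```

And from the client side (the web API in our model), addressing the grain by its key, assuming an IClusterClient called client:

```csharp
var grain = client.GetGrain<IGreetGrain>("dave");
await grain.SendMessage("hello");
var all = await grain.GetMessages();
```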
The mechanism, and we'll see it in action when we talk about the implementation of the Orleans version of Smilr, is that you define a class containing all the member variables that need to be persisted, and then plug in a storage provider. A load of providers ship with Orleans, and there's a community involved in building more. Again, this is the nice part of the ecosystem: people plug in whatever stores they want to persist that state. We use Azure Table storage in our example because it's nice, easy and simple to work with. Right, this is the point where I hand over to Ben.

Hi everyone, I'm just going to share my screen here... that's better, thank you, sorry about that. So, I'm going to talk a little bit about this project called Smilr that myself and David worked on. It's a microservices application, a kind of showcase or reference application that we've created. This part isn't necessarily about Orleans per se, but we'll bring it back together at the end and explain how we brought Orleans into the project.

Some of the goals of this project, and how it came about: the genesis of it was that myself and David were doing a lot of presentations and talks to Microsoft customers around Azure technologies, compute patterns, design and architecture considerations for moving to the cloud. (Sorry, there's an issue with sharing the slides... okay, I'm definitely sharing now; is that better, can you see them? Okay, sorry about that, a Teams technology snafu.) The way this came about was trying to come up with a system that helped people understand the different architectural choices they have and showcase some of the Azure capabilities, because we wanted people to be able to adopt Azure, with an application that demonstrated different approaches and provided an end-to-end scenario for exploring the different patterns, a key one being the microservices pattern. Alongside this we wanted to develop a real application, not a simple "hello world" or theoretical code on a slide: a real application we could learn from ourselves, use to help our customers learn, and use to explore the modern way of developing applications, the modern patterns and technologies.

The concept we came up with was a sentiment-capture application. It's the classic thing you see at airports and various places: there are five or six faces and you press one of them to say how happy you are with the service you've had that day. We would create a digital version of that which we could actually use at our events, workshops and hacks to gauge sentiment. To build a system for that, we had the concept of using a microservices pattern, with multiple loosely coupled components. At this stage there was no implementation; it was just conceptual. We knew we needed somewhere to store data (would we use a relational database? what would we use to store data?), we'd obviously need some way to access that data, some sort of API in front of it, and we'd need a client, because it needs to be usable by end users, so it's going to be some sort of web application.
I wanted to explore the single-page-application approach, where a lot of the logic moves across to the client side, and then there needs to be a data access layer for that to use. At this point it was still fairly conceptual: we knew we wanted to use standard patterns like REST, and we had to see how we would implement these different services and how many of them we would actually need.

The way this all took shape was that the core of it became the API. What got designed first was a mock version of the API, and from that everything got built out. From the API itself there's a specification of what data we needed, what the entities looked like, and what operations would sit on top of those entities. With that API in place you could start to wireframe and sketch out the front-end part, which was the single-page application in this case, and also start to scratch out a working implementation at the back end, which initially was the Node.js version. So the API becomes the heart of your design process and your thinking, and it becomes a boundary between your clients and consumers on one side and your implementations on the other. From the consumer's perspective it makes no difference what the implementation is, whether it's written in Node.js or Go, or as serverless functions, or in COBOL on a mainframe: if it exposes the same REST API, it makes no difference to you as a client. And as clients, of course, there are the classic things like web applications and mobile apps; this is the standard pattern that lets people easily create both a mobile version and a web app version of their product. But there are also command-line interfaces for people who might want to automate things, or even other services treating your application as a black box. So the API becomes absolutely crucial: it's effectively the contract between what you're providing and your clients.

This concept of API as contract is quite important, and it's all about agreeing the shape of your API. Once you've got that shape agreed you can build the implementations that provide the API, and you can start building clients. If you're working in an agile team, or you're splitting the work up, you can start working on both straight away, especially as there are technologies that let you mock these APIs quite quickly, so you can work in parallel on both sides. When we talk about the shape, we're talking about the operations you're going to expose in the API and about the models themselves, the entities: what fields and properties they have. All of that needs to be taken into consideration. For RESTful APIs the key way of defining this contract is OpenAPI, previously known as Swagger (and it still gets called Swagger quite a lot); v2 and v3 have come out over the last couple of years. These tools also let you define your API as a document and use auto-generation: you can auto-generate the Swagger definition, which itself is just a JSON or YAML description of your API, or, vice versa, create the document first and then use tools to generate code.
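As a hedged illustration of that auto-generation idea in the .NET world (the Smilr reference implementation is Node.js, so this is not its actual code), the Swashbuckle.AspNetCore package can generate the OpenAPI document from an ASP.NET Core API:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
        services.AddSwaggerGen();   // scans the controllers and builds the OpenAPI document
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseSwagger();           // serves the generated /swagger/v1/swagger.json
        app.UseSwaggerUI(c =>       // interactive documentation UI
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "API v1"));
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```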
If you're not going down the RESTful HTTP route, some people are adopting another technology for microservices called gRPC. I won't talk deeply about it today, but gRPC enforces a contract by default, whereas REST doesn't; REST is quite free-format, with no single standard way of describing an implementation, while with gRPC you have to define what your API looks like in its protocol buffer definitions.

So here's an example of our Smilr API: all the different operations you can do on events and posting feedback, all the standard create/read/update-style operations you'd expect, and there are a couple of links there (I'll share the slides at the end) covering good practices for RESTful API design.

When it comes to the actual implementation of Smilr, what did it look like in the end? There was a bit of a journey here: I've worked with David on this, on and off, for a couple of years now, and it's ended up looking a little like this. The front end is written in a JavaScript framework called Vue.js, as a single-page application, so most of the work is done on the client. But there still needs to be something to serve that single-page app up to clients, so there's a very simple service basically serving static HTML and JavaScript content. That service also has a config endpoint, a very basic endpoint that lets the single-page application fetch its configuration at runtime from the back end; one of the challenges with single-page applications is injecting configuration into them, and this little API solves that dynamically.

The real heart of the whole project is the data API service. This is the microservice that provides the data layer, and it's what the single-page application makes all of its calls to, client-side HTTP requests to fetch its data back and forth. That service in turn communicates directly with a back-end data store. This went through several iterations, but we ended up settling on NoSQL technology, and the nice thing is that we can run the database in a number of different ways: as a container, in a virtual machine, or in services like Azure Cosmos DB. The database has its own wire protocol, so we're not talking HTTP there; the data API service effectively wraps the database.

Then there are some optional components. This being a microservices architecture, we can bolt on extra elements when we need them, so the data API service can optionally call off to Azure Cognitive Services to get sentiment analysis. If somebody leaves a comment with some text in it, we can get the sentiment of that text. We don't want to do that kind of number-crunching or build that kind of AI model ourselves when there are off-the-shelf services like Azure Cognitive Services that provide it for us. It's basically just another RESTful API that we plug into our application, and it's optional: if it's not there, it's not instrumental to the heart of the app.
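As a hedged sketch of what that optional call might look like, assuming the Cognitive Services Text Analytics v3.0 REST endpoint (the endpoint URL, key and request shape here are from that public API's documented pattern, not from the Smilr code, which makes this call from Node.js):

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class SentimentClient
{
    private static readonly HttpClient Http = new HttpClient();

    // endpoint: e.g. "https://<your-resource>.cognitiveservices.azure.com" (illustrative)
    public static async Task<string> GetSentimentJson(string endpoint, string key, string comment)
    {
        var body = "{\"documents\":[{\"id\":\"1\",\"language\":\"en\",\"text\":\""
                   + comment.Replace("\"", "\\\"") + "\"}]}";

        using var request = new HttpRequestMessage(HttpMethod.Post,
            endpoint + "/text/analytics/v3.0/sentiment");
        request.Headers.Add("Ocp-Apim-Subscription-Key", key);
        request.Content = new StringContent(body, Encoding.UTF8, "application/json");

        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();

        // Returns JSON with positive/neutral/negative scores per document.
        return await response.Content.ReadAsStringAsync();
    }
}
```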
This allowed us to have these implementations of the Smilr API and app deployable in different places, as containers to Azure services such as Azure Container Instances, Azure App Service and Azure Kubernetes Service, or in fact to any Kubernetes implementation. Once you've got it containerized the world's your oyster: you can run it on a number of different platforms, and that's really what we wanted to show.

When it comes to security, this was quite a tricky one, and we went through a journey here. When this application first started, and we just wanted to show people how to deploy a microservices app like Smilr to Azure, there was no security at all. That wasn't great, so I quickly hacked in some fairly simple one-time-password security, which was a little better but still not great. In the end I settled on the industry-standard way of securing a single-page application, which is to use the OAuth 2 implicit flow. The challenge is that you can't store any private information in a single-page application, because it's all loaded into the client: anyone can go into the browser, view the JavaScript, and see any special keys, secrets or passwords you've put in there; they're very easily extracted. The implicit flow does away with that, and, as I said, it's a standard flow, part of the OAuth 2 specification.

It looks like this: you have a provider, in this case something like Azure Active Directory, which can issue tokens. The user logs in, and the login request gets an ID token back. We can use that ID token to go and request an access token, scoped to our API, the Smilr data API, saying we want to be able to access that API. Then we use the access token given back to us to call the API, putting it in the Authorization header. The Smilr data API then has to validate that token, because if it doesn't, we haven't secured anything; we still have to reject requests that don't carry a valid token. So we use JSON Web Token (JWT) validation to decode the token and make sure it's valid. This is a really standard pattern for single-page applications and security.
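Here's a hedged sketch of that token-validation step as it might look in an ASP.NET Core API using the standard JWT bearer middleware (the Smilr data API itself is Node.js; the authority and audience values are illustrative placeholders):

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;

public static class SecurityConfig
{
    public static void AddJwtValidation(IServiceCollection services)
    {
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                // Tokens are validated against the issuer's signing keys,
                // fetched automatically from the authority's metadata endpoint.
                options.Authority = "https://login.microsoftonline.com/{tenant-id}/v2.0";
                options.Audience = "api://smilr-data-api"; // illustrative app ID URI
            });
    }
}
```

Requests without a valid bearer token are then rejected by the middleware before they reach any controller marked with [Authorize].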
So with that, I'm going to hand back to David to talk about coding Smilr in Orleans. I've talked about the different implementations, mostly the Node.js reference implementation; let's look at what it looks like with Orleans.

Right, am I back on the screen? Yes. So let's talk about how we took that Smilr model and implemented it in Orleans. Now, I love Smilr; Ben and I have consumed many cups of tea and coffee over the years discussing it, and you've just had a bit of the history of how it came into being. I think it's a fascinating way of facing some of the real-world implementation challenges, such as the security we've just been talking about. For me, what's most interesting is that, as Ben said, we played around with a number of different databases and different models, but at no stage did we not have a database in there; the assumption was always that there would be a database. So this was the interesting challenge of the Orleans project for me: can we take something that's entrenched in the idea of a database holding the core data and move it to an actor-based model?

This is what we ended up with, and it's a relatively simple, straightforward implementation. I ripped out all the database stuff and replaced it with grains. I built it on .NET Core 3.1, which is the long-term supported release, in C#, and I'm currently using Orleans 3.2; note that Orleans has stayed very actively in sync with .NET Core as it's gone along, piggybacking on the latest innovations. My model is really simple. It starts with an MVC web API, because I'm a C# kind of guy and an MVC web API is the typical starting point, and that is the client into the silo. I built two types of grains: one that represents each of the events that take place (we'll remind ourselves of what an event looks like for Smilr in a moment) and one for the aggregator. As you can see on this diagram, my client is on the left-hand side of the line, the only thing that's allowed to make calls into the system; the orange represents the grains, one per event, plus that aggregator grain.

Ben showed you the Swagger definition and the OpenID material, so that was my starting point too: if I'm going to play in this game and become part of the Smilr ecosystem, I need to implement that. So on the right-hand side I handcrafted the implementations needed for both events and feedback in the system, the two key building blocks. On the left-hand side is very simple, classic MVC web API code. Here I have a POST that creates a new event in the back-end system: I grab the body, pull out the title, and turn it into an event code (one of the design criteria was to create short event codes inside the system). Then I call GetGrain to start talking to the grain that represents that particular event, and remember, because grains are virtual, this event grain always exists even if I've never seen it before, so I don't have to worry about doing one thing for a new grain and something else for an existing one; the system handles that for me. I just pass in the core data it needs and return the event code. Really simple, really straightforward.
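A minimal sketch of the kind of controller action just described, assuming an injected Orleans IClusterClient; the DTO, the IEventGrain interface and the MakeShortCode helper are illustrative names, not the actual Smilr code:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Orleans;

public class EventDto
{
    public string Title { get; set; }
    public string Type { get; set; }
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
}

public interface IEventGrain : IGrainWithStringKey
{
    Task Update(string title, string type, DateTime start, DateTime end);
}

[ApiController]
[Route("api/events")]
public class EventsController : ControllerBase
{
    private readonly IClusterClient _client;
    public EventsController(IClusterClient client) => _client = client;

    [HttpPost]
    public async Task<ActionResult<string>> CreateEvent([FromBody] EventDto body)
    {
        // Short event codes were a design criterion; this helper is hypothetical.
        var eventCode = MakeShortCode(body.Title);

        // Grains are virtual: no create-or-update branching, just address it by key.
        var grain = _client.GetGrain<IEventGrain>(eventCode);
        await grain.Update(body.Title, body.Type, body.Start, body.End);

        return Ok(eventCode);
    }

    private static string MakeShortCode(string title) =>
        Math.Abs(title.GetHashCode()).ToString("x6"); // illustrative only
}
```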
Before we get to the grain implementation, it's worth reminding ourselves what the Smilr system looks like. As Ben said, we picked it because it's very easy to understand: everyone's walked through an airport and pressed one of those buttons with a smiley face on it. Behind the scenes there are some admin capabilities: for each new event, such as a day's workshop, you create that event (an "event" being the terminology we use), and it has a date associated with it and one or more topics. So if it's a workshop for the day, there might be five sessions, and you'd create five topics and allow users to give feedback on each and every one of them. When you press the happy face on that smiley button, the feedback gets routed to that particular event, to that particular topic, and hangs off it. That's a perfect fit for the Orleans way of working, because each of these events is autonomous: it has its own internal state, the type of event, the start date, each of the topics, and all the feedback that amasses over the day as people press the happy or unhappy buttons. It's important that we hold on to this feedback, so we also have to persist the data, to make sure the system behaves in a coherent fashion and withstands failures in the underlying infrastructure.

The starting point, of course, because it's API-driven, is to create an interface that defines all the methods. What's nice here, because of the persistence model I've been talking about, is that making sure my state gets persisted requires no extra work from me as a programmer other than defining a class with all the member variables that need to be persisted, including it as part of the definition at the top of the slide there, and saying which persistent storage provider I'm going to plug in. Once I've done that, Orleans handles the hydration and rehydration under the covers without me needing to get involved. All I need to do, on the left-hand side of this slide, is implement these four methods to handle the logic needed for the event.

For example, when the update method is called to indicate that there's been a change, perhaps a new topic's been added, everything gets passed in to me. One of the API calls I can make inside a grain is to reach out to the current context and find out who I am: I can call GetPrimaryKeyString to find out what my key is, because I'm probably going to need that inside my code. In this case I update the internal grain state, the member variables I've set aside to be persisted, and then comes the critical bit halfway down the slide: WriteStateAsync is where I flag to the system that I want this state persisted as a checkpoint. From here on in I can be confident that the state is being laid down into a persistent store, so it can be recreated at any future point in time. The final bit is to call the aggregator grain. This is the grain responsible for having a system-wide view of every event ID that exists in the system, and it's the one that will take queries, so I call into it with the minimal information it needs to answer queries such as "what events are there on a particular day?"
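Here's a minimal sketch of what such an event grain might look like, assuming Orleans 3.x, reusing the illustrative IEventGrain interface from the controller sketch above and the IAggregatorGrain interface shown in the next sketch; the state class and the aggregator's well-known key are illustrative, not the actual Smilr code:

```csharp
using System;
using System.Threading.Tasks;
using Orleans;

public class EventState
{
    public string Title { get; set; }
    public string Type { get; set; }
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
}

public class EventGrain : Grain<EventState>, IEventGrain
{
    public async Task Update(string title, string type, DateTime start, DateTime end)
    {
        // Find out who I am; the key is the short event code.
        var eventCode = this.GetPrimaryKeyString();

        // Update the internal grain state...
        State.Title = title;
        State.Type = type;
        State.Start = start;
        State.End = end;

        // ...and checkpoint it to the configured storage provider.
        await WriteStateAsync();

        // Finally, tell the aggregator grain the minimal details it
        // needs for cross-grain queries (a single well-known key here).
        var aggregator = GrainFactory.GetGrain<IAggregatorGrain>("global");
        await aggregator.AddOrUpdateEvent(eventCode, start, end);
    }
}
```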
So why do we have to do this? This opens up one of the interesting aspects of picking a framework like this: cross-grain querying is not something that's either recommended or really supported, and if you think about it, that makes sense. Imagine a system with tens of thousands of grains, each representing one event. If you want to ask the system how many events took place on the 5th of January, that's a lot of work, and you don't want to build bottlenecks by querying all the different grains. What it requires is a CQRS-style approach, where we separate the reads from the writes: calls to the grains are used to update grain state and implement the necessary logic, but if I need to do a cross-grain query I go to another part of the system. The good news is that this all gets hidden behind the web API, so the user of the system is totally unaware; it's an architectural decision that sits inside the silo and inside my code.

The two main approaches people tend to take here are as follows. If it's a read-heavy system with requirements for complex filtering and the like, you may be best off pushing out from the grains a denormalized list with all the data into a store such as SQL that can be used for the queries. Then we end up with one part of the system, the Orleans grains, used for updates, and an external, denormalized data source optimized for queries, which you could plug Power BI into or use to execute complex queries; that works really well. The other approach is to use an aggregator grain, keeping the problem within Orleans by creating a queryable grain that knows about all the other grains that exist in the system. In our case this makes sense, partly because I just wanted to show off Orleans with two different grain types, but partly because the query requirement for Smilr is very simple: a filter for the past, the future, or a specific date, which is very easy to implement in code.

The model I've gone for is one aggregator grain for the entire system; that was the green one on the previous slide. Now, obviously, in a real-world system that's not a good design: it would very quickly become a bottleneck for updates and queries. There are mechanisms for controlling the placement of individual grains, one per silo, to balance that out, but to keep this simple that's not the implementation I've gone for; this is just to help you get your head around the model. The aggregator is really simple. It implements the four methods it needs: check whether an event exists, add an event, delete an event, and list the events given a query filter that's passed in. All that aggregator grain has to do is hold a list of all the events, along with the minimal details I'll use for querying. Once I've built that, we have grain-to-grain communication: as you saw on the previous slide, when I update an event I call that aggregator and update it, and it's a really simple process; it's a normal grain that just maintains its own internal state. That's something I wanted to call out in this session: because of the need to sometimes separate the reads from the writes in a CQRS-style fashion, having a grain-based system with the logic held locally means you have to approach querying in a slightly different way.
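A minimal sketch of such an aggregator grain, defining the IAggregatorGrain interface assumed by the earlier event-grain sketch; the method names, state shape and filter logic are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Orleans;

public interface IAggregatorGrain : IGrainWithStringKey
{
    Task<bool> EventExists(string eventCode);
    Task AddOrUpdateEvent(string eventCode, DateTime start, DateTime end);
    Task DeleteEvent(string eventCode);
    Task<List<string>> ListEvents(DateTime? onDay);
}

public class AggregatorState
{
    // Minimal per-event details needed for querying.
    public Dictionary<string, (DateTime Start, DateTime End)> Events { get; set; }
        = new Dictionary<string, (DateTime, DateTime)>();
}

public class AggregatorGrain : Grain<AggregatorState>, IAggregatorGrain
{
    public Task<bool> EventExists(string eventCode) =>
        Task.FromResult(State.Events.ContainsKey(eventCode));

    public async Task AddOrUpdateEvent(string eventCode, DateTime start, DateTime end)
    {
        State.Events[eventCode] = (start, end);
        await WriteStateAsync();
    }

    public async Task DeleteEvent(string eventCode)
    {
        State.Events.Remove(eventCode);
        await WriteStateAsync();
    }

    // null means "all events"; otherwise filter to events spanning that day.
    public Task<List<string>> ListEvents(DateTime? onDay) =>
        Task.FromResult(State.Events
            .Where(e => onDay == null ||
                        (e.Value.Start.Date <= onDay.Value.Date &&
                         e.Value.End.Date >= onDay.Value.Date))
            .Select(e => e.Key)
            .ToList());
}
```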
So with that, I shall return control back to Ben, and he'll talk about how he took my code and got it up and running in Kubernetes.

Right, hopefully everyone can see my screen this time. I'm going to talk a little bit about how we took what David did, taking the Smilr API and creating an implementation in Orleans, and then really wanted to see about running that in Kubernetes, because in parallel to this I've been doing a lot of work getting Smilr deployed into Kubernetes, or let's say the reference implementation of Smilr, the Node.js, classic way of doing things. Could we take the same approach with Orleans and get that running in Kubernetes?

Just a quick introduction to Kubernetes in case people aren't familiar with it, although I imagine most people will have heard of it at least, because it's pretty hard to escape in the industry at the moment. Kubernetes has become the de facto container orchestrator. A lot of people think Kubernetes is a really big, complex thing, and it can be in certain cases, but what it boils down to is a way of automating and running containerized applications. If you have apps you've containerized and run in Docker, and you want to take them to production, where you want to scale them, run them over multiple instances and deal with failures, that becomes quite complex, and it's something you can hand over to Kubernetes to run and manage for you. It's a very portable system: you can run it even on edge IoT devices or a Raspberry Pi, or at hyperscale on thousands of nodes up in the cloud. It's a very open project and very extensible; there are lots of ways to hook into the ecosystem, and there's a big community of projects around it. It's become very well adopted for running containers. Some common scenarios are lifting and shifting traditional workloads, and the one most people associate with Kubernetes, microservices, which is what we'll be exploring, but there are also use cases around machine learning, IoT and DevOps. Kubernetes is not strongly opinionated about what you use it for; it's a system that can run almost anything.

When it came to deploying Smilr to Kubernetes, the non-Orleans, standard version of it, let's say, this is how it was stood up, and it's a very common pattern: most single-page applications, or anything with a web component and a back-end API, will look roughly similar to this in Kubernetes. The front-end host, which serves the single-page application, isn't very heavyweight: we might deploy it over a couple of different pods and then expose it to the internet using a service; it's quite a lightweight component and we don't need it heavily replicated around the cluster. Similarly with the data API: again it's a stateless component, so we can run it across multiple pods in our cluster using what Kubernetes calls a ReplicaSet; we say how many replicas we want and Kubernetes just handles that for us, and we put a service in front of it to let us access those pods. In this case the data API is critical, so we'd perhaps want to scale it out to more pods if needed. For the back end, I decided in this example to run my database in-cluster. You don't have to do this; the typical recommendation is to take your database out of the cluster and use a cloud service if you can, but in this example the database is running in Kubernetes as well, as a single pod with its data stored in what Kubernetes calls a persistent volume. To allow access into all this we create what's called an Ingress, which is like layer-7 routing, to route traffic to the different services. So this is the almost canonical way of standing up a web application with a database back end in Kubernetes.

What did it look like when we had the Orleans version of Smilr? Things were slightly different, and certainly for myself, as someone who's spent quite a long time deploying things to Kubernetes, it took a little bit of adjustment. From the API perspective, David talked about the MVC API; that's very standard, similar to what we had before. We expose it using a Kubernetes service, allowing port 80 traffic in through an Azure load balancer into the cluster and distributing the requests to the pods where the MVC application is running; it's stateless, so we can deploy it across as many pods as we want. And then we have the more Orleans-specific part, which is the silo. We deploy the silo in what Kubernetes refers to as a StatefulSet, and we decided to have two of these for resilience, so we have multiple silos:
silo-0 and silo-1, and those two are running. Now, normally you'd create a service in front of those to allow communication to the pods; the general Kubernetes recommendation is not to allow direct traffic to pods. However, Orleans provides a clustering mechanism: it has its own built-in, application-level clustering in the Orleans runtime, and it lets you choose from a number of mechanisms to implement it. The one we went with, which I think is the default, uses Azure Storage tables as a synchronization point. With this, the silos register themselves in a table: you can see the entries in the Azure table there for each of the silos, registered with their IP addresses. This is how they keep themselves in sync and know which silos are alive, and it's also how clients discover how to connect to the Orleans cluster, this cluster of silos: they can query the table storage. Both silos and clients of course need connections to the table storage, via the standard Azure connection-string mechanisms, and from that they can decide which silos to connect to, which ones are alive, and use the IP addresses to connect to those silos directly. This seems kind of obvious, but if you've come from creating other applications on Kubernetes it will seem a slightly unusual approach: we're handing the clustering and failover mechanism over to Orleans rather than having to bring that into Kubernetes.

So what did this look like? I appreciate we haven't got time for a full deep dive on Kubernetes manifests, but hopefully this should be fairly easy to follow. When we deployed the Orleans silo we used a mechanism called StatefulSets. The reason is that by default Kubernetes gives random (or semi-random, let's say) names to your pods, and, as you remember, the silos register themselves in that table storage; if we didn't use a StatefulSet they'd get a random name and re-register themselves again and again and again. Using a StatefulSet guarantees that they have well-ordered names, which is why we've got silo-0 and silo-1. We decided to run two replicas, which is what the replicas: 2 up there specifies. We haven't put a service in front of it, because we're handing network discovery over to the Orleans clustering mechanism, but we do have these fairly unusual ports being exposed: these are the standard ports used by the Orleans system for the silo, ports 30000 and 11111, the defaults, which we decided to go with. We also needed a way to connect back to the storage account to use the table storage, and as the connection string contains secret, or let's say sensitive, information, we use the Kubernetes mechanism called Secrets to store it, so the information isn't visible in plain text to anyone looking at the deployments. And in the middle there you can see the image name: we're storing that image in Azure Container Registry, having already built it and pushed it up to the registry, ready for Kubernetes to pull it down and run it. You'll see the reference there is the smilr-orleans silo; that's going to pull down the silo image that we built.
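A minimal sketch of the silo-side code implied by all this, assuming Orleans 3.x with Azure Table storage used for both clustering and grain persistence; the cluster and service IDs, the environment-variable name, and the storage-provider name are illustrative, not the actual Smilr Orleans code:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Orleans.Configuration;
using Orleans.Hosting;

public static class Program
{
    public static Task Main()
    {
        // The connection string would come from the Kubernetes Secret via an env var.
        var storageConnStr = System.Environment.GetEnvironmentVariable("STORAGE_CONNSTR");

        return new HostBuilder()
            .UseOrleans(silo => silo
                .Configure<ClusterOptions>(o =>
                {
                    o.ClusterId = "smilr";   // illustrative IDs
                    o.ServiceId = "smilr";
                })
                // Silos register themselves in an Azure Storage table.
                .UseAzureStorageClustering(o => o.ConnectionString = storageConnStr)
                // Grain state checkpoints (WriteStateAsync) go to table storage too.
                .AddAzureTableGrainStorage("Default",
                    o => o.ConnectionString = storageConnStr)
                // The default silo-to-silo and client gateway ports, 11111 and 30000.
                .ConfigureEndpoints(siloPort: 11111, gatewayPort: 30000))
            .Build()
            .RunAsync();
    }
}
```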
On the other side there's the client, which is the MVC API. We don't use a StatefulSet here; we use what Kubernetes refers to as a Deployment, because it's stateless. Kubernetes will spin up a number of pods and give them random names, and we don't really need to care about their lifecycle quite as much. You'll see the container image is really similar, in the same Azure Container Registry, but instead of /silo it's /api. This also needs the connection string to the same table storage that the cluster is using, so it can go and discover the silos. And rather than those silo ports we talked about, it's going to listen on port 4000, one of the MVC Kestrel ports. What we end up doing is exposing this out to the internet using a Kubernetes LoadBalancer service, which maps between port 80 to the outside world and port 4000 on our pods. So that's quite a traditional way of deploying a web-based application; at this point it's a very standard piece of configuration.
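For completeness, a hedged sketch of the client side of that discovery step inside the MVC API, again assuming Orleans 3.x and the same illustrative IDs as in the silo sketch:

```csharp
using System.Threading.Tasks;
using Orleans;
using Orleans.Configuration;
using Orleans.Hosting;

public static class OrleansClientFactory
{
    public static async Task<IClusterClient> ConnectAsync(string storageConnStr)
    {
        var client = new ClientBuilder()
            .Configure<ClusterOptions>(o =>
            {
                o.ClusterId = "smilr";   // must match the silo configuration
                o.ServiceId = "smilr";
            })
            // The client reads the same Azure table to find live silo gateways,
            // then connects to them directly by IP address.
            .UseAzureStorageClustering(o => o.ConnectionString = storageConnStr)
            .Build();

        await client.Connect();
        return client;
    }
}
```

The returned IClusterClient is what the controllers use for their GetGrain calls, typically registered once in the dependency-injection container.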
Just lastly, to see this up and running: this was running on my machine last week. On the left-hand side you can see the Smilr Orleans API and the Orleans silo running; you can see the pods there with green dots on them, meaning they're up and running, and you can see me making a POST request across to the endpoint, some of the JSON it uses, and the HTTP response coming back. At the bottom you can see the logs of the silo, so I'm actually getting the logs of the silo back end, and you can see the grain events coming in with the event IDs. This was a live view as it ran inside Kubernetes, and we can see the different parts all hooked up and running inside my demo cluster. And with that, I think we're about done. David, are you still there?

I am indeed, yes, thank you. A brilliant description; Ben had the Herculean task of taking my code and making it work there, and we went through a big journey discovering that stateful model and how it differs. All of this is open source and available for you to play with, so you can see links to where to get started, with the samples and documentation all out in the open; please get involved. Then for the Smilr-specific material, Ben has loaded the slides into his repository, and smilr.benco.io is where all this code lives. So if this has excited you, and you're keen to find out more either about Smilr generally, microservices and all the things we learned building it, or about Orleans and doing something more than a "hello world" program, this seems a great place to start. We've both learned a lot in the process, and we've had some great support from the product team in Redmond, who have been very active in giving us feedback, tips and tricks, and the community around the project is very active as well.

Let's leave a couple of moments in case there are any more questions coming in that we can answer verbally rather than in the Q&A panel. Oh, we need the feedback slide first. Yes, please take a few moments to visit this link and give us some feedback; it's important for us, in running sessions like this, to know what folks want from future sessions, so if you're listening live, please go in and complete it. We'll also copy the slides link into the meeting chat. Someone's asked whether there will be a recording: yes, it will be up on YouTube, typically a couple of days from now. Someone's also asked about well-known systems that use Orleans: there was an early slide where I mentioned some of the internal Microsoft usage, so we talked about Halo and how it was used there, and the paper linked in the slides gives a lot more information on that as well. I think most of the other questions have been answered in the Q&A chat.

Okay, brilliant. Well, if there are no more questions, thank you for your time, Ben. Thank you very much, everyone: please fill in that survey, keep a lookout for the recording, visit the Smilr repository and get engaged. Thanks, everybody!
Info
Channel: Microsoft Reactor
Views: 3,795
Rating: 4.9384613 out of 5
Keywords: Orleans, Coding, Languages, Frameworks, .NET Core, cross-platform framework, Microsoft Research
Id: 9OMXw0CslKE
Length: 56min 54sec (3414 seconds)
Published: Mon Aug 10 2020