Cloud Native Architecture | Keynote: The Cloud Native promise to reduce costs and complexity

Video Statistics and Information

Captions
So this is going to be the agenda for the session. First I'll lay the groundwork: I'll talk about the current scenario, the challenges, and so on. Then I'll talk about some application design guidelines; these are a little bit like design patterns, concepts to be aware of from an application perspective. Then we'll talk about the different cloud native models which are there, and a little about infrastructure design guidelines from a cloud native perspective, so we'll go into those details. My colleague will be talking about the AMI lifecycle using HashiCorp and a few other tools, and finally we'll look at some examples of cloud native in action, where we'll dig into real use cases.

So first: what is cloud native? This is a definition I picked from the CNCF, the Cloud Native Computing Foundation, which is an organization trying to consolidate all the cloud native technologies and bring them under a single umbrella. A really good thing you can check out is the CNCF landscape. This is how the CNCF defines it: cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments. The key parts here are "scalable" and "dynamic". Cloud native doesn't necessarily have to be for a specific cloud; it's a methodology of doing design and development. The key part is that in cloud native you can't predict where your workload is going to be running. That is the first mentality you need to adopt: it could be anywhere, it's dynamic, you can't predict it up front, which means there are challenges and concepts you have to be aware of, and I'll be talking about those. And of course it helps in terms of scaling, and it's an umbrella set of technologies which work together. So that's what cloud native is.

Now, what are the challenges? I think the principal challenge for cloud native is that there's a lot of ambiguity in what cloud native means. I don't think everybody has a consistent definition of it, which is fine, because it's difficult to define. Cloud native is something which has evolved organically over the last three, four, five years, which means everybody's definition of cloud native is different. That's a key challenge, because when I say I'm doing cloud native, somebody else says they're doing cloud native, and a third person says the same, the three of us might be talking about different things. There might be some common elements, like Kubernetes or something of that sort, but that's probably where the similarity ends. So that's the key thing: we need a clear understanding of what exactly cloud native means. The second part is that from an architecture and development standpoint there has to be a significant shift in mindset. There is a lot of movement on this, but I think there's a lot more to be achieved in terms of the mindset shift that's required.
The third point is the fear to experiment, evaluate and switch between languages and frameworks. This is something I've seen a lot, because a lot of organizations (and I don't mean this as disrespect to anyone) think: okay, we have a lot of Java people so we'll stick to only Java, or we have only Python people so we'll stick to only Python, and so on. There must be a certain willingness to switch between things and to experiment, but at the same time it has to be practical as well: you can't just keep switching to whatever is coolest every six months, that's not practical either. But we must overcome this fear, because there are pros and cons to each of those choices.

The next point is about determining microservice boundaries and loose coupling. I'm sure a lot of people who have worked with microservices have had this experience: you build microservices, you think everything is amazing, all is good, you deploy things, and then one of your microservices goes down and it brings down the entire system. Is that really the point of microservices? They should truly be able to operate independently. What happens is that it's difficult to draw the boundaries of a microservice. You think something is a microservice, you go ahead and make it one, and that's where service boundaries come in. Boundaries are a concept you'll find in topics like domain-driven design and CQRS. If you're not able to figure out the boundaries correctly and you lay down wrong boundaries, you're going to be in hell afterwards, once it goes to production, because you'll have to go back and change everything. The fact is that with microservices you need loose coupling; if you create microservices with tight coupling, you might as well have left it as a monolith, because there's no point in what you've done. And finally, the increased complexity: when you have tens or hundreds of microservices, how do you handle that complexity? How do you handle deployments, dependencies like "this service has to come up first", and so on? So these are some of the challenges.

So how will you benefit from this session? I tried to list it out, since the audience here must be diverse: business owners, architects (both solution and technical), delivery managers, developers and ops. From a business owner perspective the key thing is of course rolling out features faster, which is straightforward, but there are also significant cloud cost savings. From a solution architect perspective (this is really critical, and something I see overlooked quite a lot) solution architecture is supposed to be the glue, the connection between product and execution. Solution architects have to work with the product teams and the UX teams, and when they craft solutions to business problems it's vital they do it in the context of cloud native. You need to understand the opportunities as well as the challenges of cloud native and design accordingly, because you'll see a lot of cases where solutions are designed to be very synchronous and don't work in an asynchronous model. Then you'll have a nightmare, because the development teams will try to build it, they can't, and they'll run into all sorts of problems.
So as a solution architect it's vital that you understand what cloud native is, how it's different from previous architectures, and what opportunities and changes are required. From a tech architect perspective, you need to understand the models that are available for developing your application and design it accordingly, so that you can deploy it within the various cloud native models, as well as pick the tooling for development, deployment and maintenance, because there's a lot of tooling required in cloud native: you have a diverse set of technologies and a lot of different microservices. From a delivery perspective, the challenge I've noticed a lot is adapting to the cloud native lifecycle. You're used to a certain SDLC, and adapting to the cloud native lifecycle is really important. It also helps significantly in reducing technical debt, because one of the things in the definition of cloud native is that it can run in any sort of environment; you can't predict up front where it's going to run. That means that as a developer you're forced to think up front about things like scaling, exception handling and failures. That helps reduce technical debt, because you're not just focused on cranking out features, you're also addressing some of the non-functional parts. From a developer perspective, it's about understanding the development and testing methodologies, especially with respect to TDD and asynchronous programming, as well as the DevOps culture, which is straightforward. From an ops perspective it's about maintaining high uptimes and SLAs for your application without getting burnt out. Imagine you have hundreds of microservices that are all tightly coupled together; you will get burnt out, because when one microservice fails it brings down significant parts of the system. It's also about shortening your response cycle to issues, and for that you'll need enough observability and instrumentation so that you can collect the data, analyze it together with the developers, and come up with solutions.

All right, so moving on to application design guidelines. Why did I put "application" first? It was a conscious decision, because when we talk cloud native, and I don't know if it's a misconception or not, I get the feeling that people always jump immediately to "let's do Kubernetes": cloud native is purely infrastructure, it's not my problem, it's the Kubernetes person who's supposed to make it cloud native. It's not. It starts all the way from application design onwards. So the first point is that from an application design perspective, architecturally as well as in design and development, it is vital that we embrace distributed systems and their patterns. Your applications will be running in distributed mode, which means that when you develop your application, and when you use technologies alongside it (whether it's ZooKeeper or whatever), you need to understand what distributed systems are, what the opportunities are, and what the challenges are.
A good starting point for distributed systems is the CAP theorem. How many of you have heard of the CAP theorem? The CAP theorem basically says you have consistency, availability and partition tolerance (that's what CAP stands for), and you can have only two out of these three. That's the basic catch: you can't have all three, and there are trade-offs. For instance, if you take consistency and availability, you give up partition tolerance. If you pick availability and partition tolerance, you might have inconsistent data, which is eventual consistency; Cassandra is a classic example of that. If you pick CP, you give up availability. So you have to make this choice and understand that you can only have two out of these three, which means letting go of one. When you do the design and pick your components, you have to decide which of the three you're willing to give up. That's a good starting place for distributed systems.

The other part is that there are some common architectures in distributed systems; again, you can read up about this in more detail. Typically you'll have a master/slave architecture or a peer-to-peer architecture. In master/slave you have a primary and a bunch of worker nodes (slave nodes, whatever you call them). In peer-to-peer there are no masters; all the nodes talk to each other, gossip protocols are used to communicate, and typically that will be eventually consistent, because you will not have the data available in all the nodes all the time, whereas in master/slave you will have more consistency of the data. There are a few other concepts you need to be aware of: terms like heartbeat, leader election and quorum. Quorum is something where, in a master/slave architecture, you typically need an odd number of nodes; it's meant for resiliency and for deciding whether nodes are alive or not. Because these things are distributed, spread over a network (typically a software-defined network like an overlay network), they communicate using heartbeats, and if the master doesn't get a heartbeat from a node within a certain period of time it will assume it's dead, which means you can start spinning up a replacement. Peer-to-peer is different because there is no master; that's the key point. All the nodes talk to each other, like a ring structure.

Shared nothing is not related to those two per se, but it means the following: to execute anything you will always have CPU, memory and storage, and of course the network on top, and the shared-nothing principle is that you should not share any of these resources between the different nodes or execution units. The reason it's important not to share is, first, that if you share there's a possibility that one node corrupts the other, so one can take down the other. But more than that, the key part is that whenever you want to scale up or down you have complete freedom, because nothing is shared: everything is separate, so you can spin it up, spin it out, spin it down, whatever you need, and you have complete control. Of course, more complexity comes in. So these are some of the architectures which are there in distributed systems.
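To make the heartbeat and quorum ideas above a little more concrete, here is a minimal sketch in Python; the node names, timeout value and cluster size are illustrative, not from the talk. The primary records the last heartbeat time of each worker, suspects a node that stays silent past the timeout, and only keeps making decisions while a strict majority of the (odd-sized) membership is still alive.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat before a node is suspected dead


class FailureDetector:
    """Toy heartbeat-based failure detector for a primary/worker cluster."""

    def __init__(self, node_ids):
        now = time.monotonic()
        self.last_seen = {node: now for node in node_ids}

    def record_heartbeat(self, node_id):
        # Called whenever a heartbeat message arrives from a worker.
        self.last_seen[node_id] = time.monotonic()

    def alive_nodes(self):
        now = time.monotonic()
        return {n for n, t in self.last_seen.items() if now - t < HEARTBEAT_TIMEOUT}

    def has_quorum(self):
        # With an odd-sized membership, a strict majority must still be alive
        # for the cluster to keep making decisions (e.g. electing a leader).
        return len(self.alive_nodes()) > len(self.last_seen) // 2


# Usage sketch: three workers, one stops sending heartbeats.
detector = FailureDetector(["worker-1", "worker-2", "worker-3"])
detector.record_heartbeat("worker-1")
detector.record_heartbeat("worker-2")
# worker-3 never reports in; once HEARTBEAT_TIMEOUT passes it drops out of
# alive_nodes(), but 2 out of 3 is still a quorum, so the cluster keeps going
# and can spin up a replacement for the missing worker.
print(detector.alive_nodes(), detector.has_quorum())
```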
Right, so this is one of the key parts I want to talk about, because I'm not sure it's well understood; I've seen it go wrong so many times over so many years. A lot of you might have heard that whenever you start design and development you should do microservices and avoid a monolith at all costs. "Monolith" has become almost a bad word; people really hate monoliths right now. But my take, and a lot of people talk about this as well, is that you should not see the monolith as something bad. You should be okay with a monolith, and the reason is that it's always better to start off as a monolith.

Let's assume you're starting out: you've just got a bunch of requirements, and you have to start designing and developing. At this point, if you start from the typical classic examples they give (products, orders, shipping, and so on), those examples look really simple; they're meant to give you an idea, but your requirements are not going to be that simple. If you start trying to copy what somebody did with products and orders, and you go ahead and split everything up, it all looks great, but I'm telling you, what finally comes out will be tightly coupled spaghetti. Instead of that, my suggestion is: start off as a monolith. There's no shame in a monolith, no shame at all. Start off as a monolith, sit back, relax, do your development, continue for a while, and you will start seeing some of the boundaries emerging in your application.

A good example of this, not related to software: take this hall and imagine you put a hundred people inside. Eventually they will start forming groups; people with like interests will start huddling together, and at that point I can say, okay, that's one group, that's another group. Now imagine that, versus me deciding up front: you five sit together, you five sit together, you ten sit together, because I think you have like interests, when those people probably don't. That's exactly what happens if we enforce microservices up front: you don't know the domain well enough, you don't know the requirements well enough, you're still working with assumptions. So start off as a monolith, sit back for a while, and while you're doing development you'll see the boundaries evolving over time; it's like an evolutionary architecture. The catch is that when that happens, you need to be a hundred percent sure you have a TDD approach in place. So I'm saying: don't shun the monolith at the beginning. Build the monolith, but make sure you have a TDD approach, because as the boundaries evolve you need to do continuous refactoring. You'll keep adjusting: you'll see that this piece needs to go within this boundary and that piece within that boundary.
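As a small illustration of why that TDD safety net matters while boundaries are still moving, here is a hedged sketch; the module, function and test names are hypothetical, not from the talk. The tests pin down the observable behaviour of an order-total calculation, so the code behind it can later be moved into an "orders" boundary (or extracted into its own service) without fear of silently changing behaviour.

```python
# orders.py (hypothetical): today this lives next to shipping and billing code
# inside the monolith; later it may become its own service boundary.
def order_total(line_items, discount=0.0):
    """Sum (quantity * unit_price) across line items, then apply a flat discount."""
    subtotal = sum(qty * price for qty, price in line_items)
    return round(subtotal * (1.0 - discount), 2)


# test_orders.py (hypothetical): the behaviour is pinned down by tests, so when
# the boundary around "orders" shifts and the code is refactored or extracted,
# a green run tells us the behaviour survived the move.
def test_order_total_without_discount():
    assert order_total([(2, 10.0), (1, 5.0)]) == 25.0


def test_order_total_with_discount():
    assert order_total([(2, 10.0)], discount=0.1) == 18.0


if __name__ == "__main__":
    test_order_total_without_discount()
    test_order_total_with_discount()
    print("behaviour unchanged - safe to keep refactoring")
```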
When you do this, how are you going to refactor if you don't have TDD? You'll have to do it anyway, you'll spend a lot of time, and you won't have the confidence that things still work. That's why I'd say: monolith with TDD. Start there, let it evolve, do continuous refactoring afterwards. Now, there will come a point, depending on the lifecycle of the project, how long it is and how complex it is, typically on the order of six months or so, where you'll get the call: okay, what's the deployment architecture? What's the production system going to look like? This is where you can start thinking about how you want to deploy it, which parts have to scale independently, which parts have to be fault tolerant. Once you have that clarity, and only at that point, break it into microservices. And even then: let's assume your monolith has, say, five boundaries at this point; when you go for deployment you may realize you only need two of them broken out. Break out only those two; you don't have to break out all five. Break things out only when absolutely required, and then test the coupling, whether through manual chaos engineering or automation: try taking down a few of the other services and see how the system holds. Now you'll be in a situation where you have loosely coupled microservices, and you repeat this process. This is a very important concept I wanted to convey, and it's called service boundaries. If you look into domain-driven design or CQRS you will see service boundaries a lot. Understanding service boundaries and breaking things up so that there is loose coupling is vital, because otherwise Kubernetes, or even God, can't save you.

The next part is deciding the languages and frameworks. This goes back to the point about the fear of choosing between different languages and frameworks. A lot of people decide by saying "I like this language because it looks cool", or "it looks clean", or "I have the resources", which is all fine, but it's secondary. The key question you need to ask yourself is: what kind of workload is it, a CPU-intensive workload or an I/O-intensive workload? The options you end up suggesting differ based on that. For instance, for event loops you have Node.js and similar; for multi-threading you have Java and the like. It's not that you can't do multi-threading with Node (you can, you'll just have to change a few things), and it's not that you can't do an event loop with Java (you can use frameworks like Vert.x). But when you look at languages and frameworks, the first question is: what is the workload like? Is it CPU-intensive, meaning a lot of transformation and crunching, or I/O-intensive, meaning a lot of network calls? Understand that, and based on it you can narrow down your language and framework options.
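To make the CPU-bound versus I/O-bound distinction concrete, here is a minimal sketch; the simulated calls, delays and worker counts are made up for illustration. An event loop shines when the work is mostly waiting on the network, while CPU-heavy crunching wants real parallelism, for example a process pool.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor


async def fetch(name, delay):
    # Stand-in for a network call: the event loop is free while we "wait on I/O".
    await asyncio.sleep(delay)
    return f"{name}: done"


async def io_bound_demo():
    # Three simulated 1-second calls finish in roughly 1 second total, not 3,
    # because the single-threaded event loop overlaps the waiting.
    return await asyncio.gather(*(fetch(f"call-{i}", 1.0) for i in range(3)))


def crunch(n):
    # Stand-in for CPU-intensive transformation/crunching.
    return sum(i * i for i in range(n))


if __name__ == "__main__":
    print(asyncio.run(io_bound_demo()))

    # CPU-bound work gains nothing from an event loop; spread it across processes instead.
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(crunch, [2_000_000] * 4)))
```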
The next part is more of an observation. Typically when we talk about design, especially architects, we tend to talk about the Gang of Four design patterns, SOLID principles and so on. One thing to remember is that those things are great, I'm not saying they aren't, but they come from an object-oriented background, and they come from a time when cloud native wasn't there and significant, massive monoliths were the norm. It's not that those things are bad, but there's also a need to embrace the new things that are happening, and that's primarily functional programming. The key takeaway: check out any functional language, Scala, Haskell, anything, and get a feel for functions, partial functions, currying and so on. The two things I'd highlight are statelessness and immutability. Stateless means that if you have a function like f(x) = x * 2, you pass in a number, it's multiplied by two and handed back to you. This is a very powerful concept, because it can be used directly for simple transformations, but most importantly you can do multi-threading amazingly well with functional programming. It's not that you can't do multi-threading with Java and the like, but chances are that if you don't design it well you will have shared state; if you have shared state you have to put in a lock; and if you put in a lock you have blocking. With stateless, immutable functions you don't need locks anymore, and that is where it really helps performance. And then, of course, look at the existing ecosystem and trends. There are people who say: a new language came out a month ago, it looks cool, I'm going to rewrite everything in it. That's not practical either. But on the other side it shouldn't be "we will only ever do this one thing", because there are opportunities to design your microservices with a mix, maybe Python, Java and Scala together, as long as the choice is evaluated against these criteria.
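A tiny sketch of the stateless/immutability point, in Python rather than Scala or Haskell, with an arbitrary worker count: because a pure function like f(x) = x * 2 touches no shared state, you can fan it out across threads without any locks.

```python
from concurrent.futures import ThreadPoolExecutor


def double(x):
    # Pure function: same input, same output, no shared state mutated.
    return x * 2


if __name__ == "__main__":
    numbers = list(range(10))

    # No locks needed: each call only reads its own argument and returns a new
    # value, so running the transformation in parallel cannot corrupt anything.
    with ThreadPoolExecutor(max_workers=4) as pool:
        doubled = list(pool.map(double, numbers))

    print(doubled)  # [0, 2, 4, ..., 18]
```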
The next point is about synchronous and asynchronous processes. On the left-hand side of the slide you have a synchronous process and on the right-hand side an asynchronous one. Synchronous means there's a request/response cycle: one side asks and waits. Asynchronous means you publish an event and somebody listens to it. Often we hear that events are cool, event-driven architecture is really cool, so let's go with events for everything. So you go with events for everything, and then one fine day you realize you need an RPC-style interaction: you can't just fire and forget, you need information back. At that point you're left with two options. One is to build RPC on top of the asynchronous process, a pseudo-synchronous, fake-synchronous kind of thing, which hurts performance. Or you could have been smarter from the start and embraced both. You don't have to be biased towards only event-driven architecture; you can still have RPC as well. Good examples: the currently trending framework is obviously gRPC from Google, with protobuf serialization, or you can even use a straightforward REST API, though it won't be as efficient as gRPC. So there is no particular necessity that microservice-to-microservice communication should be only via events. Yes, events introduce loose coupling and all that, but it's not mandatory; you can still have synchronous communication, and if you're worried about coupling you can use service discovery and locator patterns.

When we do events, it's really important to pick a schema library; protobuf is a good example. One of the things that will happen as your microservices evolve is this: microservice one is worked on by team one, microservice two by team two. They agree on everything; microservice one publishes something, microservice two subscribes to it, everything looks fine, until one day team one changes the schema definition of the event. Boom, gone: microservice two breaks completely. To prevent that you need to enforce the schema, and protobuf is a good way of doing it, or any IDL-based approach. Apart from that, it also helps you reduce your bandwidth significantly, because it's a compact binary encoding. So never use JSON directly for sending events across microservices; always go for protobuf or something of that sort, so that you can enforce a schema.
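Here is a hedged sketch of what schema-enforced events look like in practice, assuming a schema compiled with protoc; the message name, fields and module name are illustrative, not from the talk. Both the producing and consuming services depend on the same generated code, so an incompatible change to the event shape surfaces at build time rather than in production, and the payload on the wire is compact binary rather than JSON.

```python
# Hypothetical schema (order_events.proto), compiled with protoc into order_events_pb2.py:
#
#   syntax = "proto3";
#   message OrderCreated {
#     string order_id = 1;
#     int64  amount_cents = 2;
#   }
#
from order_events_pb2 import OrderCreated  # generated by protoc; name is illustrative


def publish(send_bytes):
    event = OrderCreated(order_id="o-123", amount_cents=4999)
    send_bytes(event.SerializeToString())  # compact binary payload, not JSON


def consume(raw_bytes):
    event = OrderCreated()
    event.ParseFromString(raw_bytes)  # raises an error on malformed payloads
    return event.order_id, event.amount_cents


if __name__ == "__main__":
    publish(lambda payload: print(consume(payload)))
```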
All right, the next section. We briefly spoke about statelessness with functional programming; now we take that a step forward and, instead of just functions, talk about stateless and stateful components in the system. This is the classic analogy from the DevOps space: pets versus cattle. You treat your pets and cattle differently. A pet has a particular name, you raise it, it's close to you. I'm not saying cattle can't be close to you, but there's a distinction between how you treat a pet and how you treat a herd. If a pet disappears, you have to bring up a new pet, give it a name, take it through the whole lifecycle, teach it how to live in your house, and so on. Whereas if one of the cattle is gone, or you need more cattle, you can add them instantly; that's why they have those random-looking identifiers, the name isn't specific. Why is this important? Typically what we build are pets: an application that talks to a specific database, with a specific connection and a specific schema, tightly coupled to it. If that application dies in production, you have to do so many clean-up steps, make sure the initial setup is recreated, and all sorts of dependencies come in. That hurts, because remember, cloud native means you can't predict where it's going to run; it's dynamic. You'll be in a whole lot of pain if those applications go down: you have to make sure they're cleaned up, the new instance comes up and starts, the logs are in place, and all those complications come in. Whereas if you have components which don't maintain any state, just like functions but at the level of larger components, they read something passed to them, they process it, they spit something out. That means you can scale them, and if they die you can replace them; you just have to keep a count, like "I need five of these around, I don't care about their identity." That's what the analogy is about.

So the takeaway is: when you're building your components, make them stateless as much as possible. It's not possible in all cases; you'll have databases, message queues and so on, which have inherent state. But as much as possible use stateless services, which means you can scale them and replace them, and it's all very easy to automate and manage. If you're left with no choice but to have state, isolate those components specifically. Say you have 50 microservices: if you can make 40 of them stateless, great, you're left with only 10 whose state you have to handle. And when you handle it, remember to use a shared-nothing model. Once you come to shared nothing, you'll have to pick a database; pick it based on the pros and cons of each of the databases out there. Don't just jump straight to, say, MySQL; investigate a little more, because there are search databases, key-value stores and columnar databases, and each has its pros and cons. Investigate, pick your database based on that, and try to keep it minimal.

Now, here's another approach, in case you don't want to keep a database, which can be a bit complicated to manage. The second issue is that when you make database transactions, typically no audit trail is maintained. If you have to go back in time and find out "my data got corrupted, when did it happen, five days ago, ten days ago?", you have no idea; that's difficult with plain databases because you're not keeping a record of the transactions. For such cases, and also to simplify things within a message-queue kind of structure, there's an architectural pattern called event sourcing. Event sourcing is where you treat any state change in the system as an event: something happens, it's an event; any state transition is an event; and all you do is keep a record of the events, in the order they happened. The beauty is that if the application crashes, you just play back the events in the same order and the application is back up. It's like a stateless system where you've kept the events in a separate store: you just replay them in order, you can pause, rewind, go back and forward, and you automatically get audit functionality as well. Does that mean you have to do event sourcing for everything? No, that's a call you have to make. Do you have audit requirements, for example? Then definitely consider event sourcing.
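A minimal sketch of event sourcing as described above; the event names and the account example are mine, not from the talk. Every state change is appended to an ordered log, and the current state is rebuilt by replaying that log from the start, which is also what gives you the audit trail and the "rewind" ability for free.

```python
class Account:
    """Current state is never stored directly; it is derived from the event log."""

    def __init__(self):
        self.balance = 0
        self.log = []  # append-only, in the order events happened

    def apply(self, event):
        kind, amount = event
        if kind == "deposited":
            self.balance += amount
        elif kind == "withdrawn":
            self.balance -= amount

    def record(self, event):
        self.log.append(event)  # 1. persist the fact that something happened
        self.apply(event)       # 2. update in-memory state from it

    @classmethod
    def replay(cls, log):
        # After a crash (or for audit / time travel), rebuild state by replaying
        # the same events in the same order.
        account = cls()
        for event in log:
            account.record(event)
        return account


if __name__ == "__main__":
    acct = Account()
    acct.record(("deposited", 100))
    acct.record(("withdrawn", 30))
    restored = Account.replay(acct.log)
    print(acct.balance, restored.balance)  # 70 70
```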
Now, this next diagram is one where you have a producer and a consumer: the producer produces events, an application transforms those events and sends them on, and the consumer consumes them. Straightforward, and it's exactly like event sourcing. But here's a key point: event sourcing is always discussed in the microservices world, by cloud native people, whereas stream processing is always spoken about in the big data world. What's happening now is that if you look at frameworks like Kafka Streams and similar, there's a new paradigm coming in which mixes big data and microservices together, and those are called streaming microservices. Streaming microservices are definitely something to watch out for; they are a mix of both worlds. Here you have an unbounded stream of data coming in, which could be Kafka or whatever; the application probably has an embedded Kafka Streams instance inside which processes it, maintains state in some sort of database, and passes the results on, and other consumers consume them. If you go back, that's exactly what event sourcing is. So there's a convergence where the microservices and big data worlds are colliding, and you have to watch it, because if you wanted to do stream processing today everybody would say "go for Spark or Flink". That's not the only option; start thinking about stream processing in the context of your applications and microservices as well. Read up about reactive streams, Kafka Streams and so on; those are the movements happening in that space.

The next part is about handling failures. The key thing here is to assume that whatever calls you make will fail, because again your environment is dynamic and things may not be up when you make a call, so handle those cases. Some of the options: timeouts and retries are straightforward; the circuit breaker is a pattern you can read up on; and dead letter queues are typically used to hold unprocessed messages so you can play them back later. Instrumentation and observability are also vital, again because it's a dynamic environment and you have no control over where things run: logging, monitoring and tracing, especially distributed tracing, because you need to trace across your microservices. That is really important to consider as well.
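The failure-handling options above can be combined. Here is a hedged sketch, where the thresholds, backoff and the remote call are placeholders: each call gets a couple of retries with a small backoff, and a simple circuit breaker stops hammering a dependency that keeps failing (the real network call should also carry its own timeout).

```python
import time


class CircuitOpenError(Exception):
    pass


class CircuitBreaker:
    """Naive circuit breaker: after max_failures consecutive failures,
    reject calls outright for reset_after seconds instead of piling on."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, retries=2, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise CircuitOpenError("dependency marked unhealthy, failing fast")
            self.opened_at, self.failures = None, 0  # half-open: let one call through

        for attempt in range(retries + 1):
            try:
                result = fn(*args, **kwargs)
                self.failures = 0
                return result
            except Exception:
                if attempt == retries:
                    self.failures += 1
                    if self.failures >= self.max_failures:
                        self.opened_at = time.monotonic()
                    raise
                time.sleep(0.1 * (attempt + 1))  # small backoff before retrying


# Usage sketch: fetch_profile stands in for any remote call.
breaker = CircuitBreaker()


def fetch_profile(user_id):
    raise TimeoutError("simulated slow downstream service")


for _ in range(5):
    try:
        breaker.call(fetch_profile, "user-42")
    except (TimeoutError, CircuitOpenError) as err:
        print(type(err).__name__)
```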
Now I'll briefly go through three of the different cloud native models. Microservices we already spoke about: you have your shared-nothing databases, your message queues, and typically an API gateway as the entry point to the microservices. API gateway to microservice communication is a synchronous thing, so you'll have to consider some RPC framework, and you manage and deploy all of this in a Kubernetes cluster or similar; that's straightforward. Serverless: I'm sure there will be a lot of talks today about serverless, so I'm not going to cover that model as much. One model I do want to talk about, which isn't talked about much: have you heard of the Reactive Manifesto? How many of you have heard of it? Okay. So there is a thing called the Reactive Manifesto; have a read of it and look at reactive programming. You'll see it coming up in a lot of frameworks, in Java and elsewhere, and even on the client side, because you've got observables in Angular; it's the same idea. Reactive programming is a style of programming where, just like what we saw before, you have an unbounded stream of data and you're doing transformations on top of it in real time, and there are a lot of frameworks for that. Another model I want to mention is the actor model. The actor model comes from Erlang, and the implementation of it on the JVM is Akka. Have a read about the actor model; it's not something fundamentally different from microservices, it's more an opinionated way of doing microservices, but it's really good for handling exceptions, clustering and scaling your applications. Bear in mind that it's a little steep from a learning perspective and it's not meant for every use case; it should be used where the scale or the requirement is really high. There's also an interesting paper from PayPal about how they used Akka to go from something like 10,000 VMs down to around 8 VMs. Reactive Streams is another model: it's just like the stream processing we discussed, but with a back-pressure concept, which is meant so that the publisher doesn't overwhelm the subscriber; there's a control mechanism between publisher and subscriber. So have a look at Reactive Streams as well. Again, this is going to collide with the microservices world: when you're looking at streaming microservices, Reactive Streams will come into it.
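Back pressure is easy to picture with a bounded queue. This is a rough sketch (the buffer size, item count and sleep are arbitrary): the producer blocks as soon as the buffer is full, so a fast publisher cannot overwhelm a slow subscriber.

```python
import queue
import threading
import time

buffer = queue.Queue(maxsize=5)  # bounded buffer: this is the back-pressure mechanism


def producer():
    for i in range(20):
        buffer.put(i)   # blocks when the queue is full, so the producer slows down
    buffer.put(None)    # sentinel: no more items


def consumer():
    while True:
        item = buffer.get()
        if item is None:
            break
        time.sleep(0.05)  # simulate a slow subscriber
        print("processed", item)


threading.Thread(target=producer).start()
consumer()
```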
Now, infrastructure design guidelines, from a compute, network and storage perspective. From a compute perspective, always target immutable infrastructure. What is immutable infrastructure? You don't make changes on the infrastructure directly; you always make changes through the application or by redeploying, and you don't touch anything in place. Maximize your stateless components, because once you have stateless components you can offload them to Spot instances, and once you go to Spot you get crazy savings; there are also RIs and Savings Plans. For stateful components and storage, EBS is straightforward; consider RAID and LVM, because they can increase your performance, and file systems are something to investigate. I'd also point out the last item: try to use S3 as a general file system as well, not just as object storage. You can use layers on top of it; a good technology is Alluxio, a layer on top where you can create FUSE-mounted file systems, so S3 is definitely something to look at. From a network perspective: serialization, chattiness, but the one thing I'd like to highlight is placement groups and rack awareness. When you have inter-region or cross-region communication you get charged for it, and because these are distributed systems they're all sending heartbeats to each other and so on, so as much as possible keep them close together; that's where placement groups and rack awareness come in. A lot of frameworks and databases support rack awareness; you just have to make sure it falls within the placement group, put a label and so on, and manage that lifecycle well. That's the infrastructure optimization part.

I'm not going to specifically talk about CI/CD, because everybody should be aware of it, but I would like to highlight DevSecOps, or SecDevOps; it doesn't make any difference what you call it. From a security perspective there's a concept you'll hear called shift left: bring security closer to development, rather than just running through some audit checklist after you deploy, going through testing and then doing pen testing at the end, where it becomes very complex to change things afterwards. Try to bring it into the dev cycle, where you can have things like static code analysis, package vulnerability checks and container scanning right in your CI/CD pipeline. That's DevSecOps. Observability is straightforward: you take what you're emitting, from CloudWatch or wherever you're sending it, look for particular conditions, send it out to different places, and maintain a set of all the logs in Elasticsearch (it doesn't specifically have to be Elasticsearch). The point is that if I want to go back in time and look at logs, I should have them, and if something unexpected happens, like spikes on CPU or network, I should be notified about it from a monitoring perspective. That's observability. Event remediation is basically the other side: once something is detected, how can I remediate it automatically? Again, you look for certain events that have happened and then, using something like Lambda or your own tooling, apply the remediation. This particular case is about security: based on suspicious network connections, you can apply blocking in your NACL or WAF to cut those connections off. So this is automated remediation, with no human being involved.
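A rough sketch of that auto-remediation idea, assuming a Lambda function wired to a security finding; the event shape, NACL ID and rule number are placeholders, not from the talk. It pulls the suspicious source IP out of the incoming event and adds a deny rule at the network ACL using boto3, with no human in the loop.

```python
import boto3

ec2 = boto3.client("ec2")

NACL_ID = "acl-0123456789abcdef0"  # placeholder: the network ACL guarding the subnet
RULE_NUMBER = 110                  # placeholder: must not clash with existing rules


def handler(event, context):
    """Toy auto-remediation: deny a suspicious source IP at the NACL."""
    source_ip = event["detail"]["source_ip"]  # event shape depends on your detector

    ec2.create_network_acl_entry(
        NetworkAclId=NACL_ID,
        RuleNumber=RULE_NUMBER,
        Protocol="-1",            # all protocols
        RuleAction="deny",
        Egress=False,             # block inbound traffic from this address
        CidrBlock=f"{source_ip}/32",
    )
    return {"blocked": source_ip}
```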
Finally, I want to talk a little about two cases, just to show the potential cloud native has if it's done properly. The first case is about processing more than ten billion events per day. We had a large system struggling with scalability, latency and so on, so we changed the entire thing; it was all about processing these massive volumes of events per day. We moved it to reactive streams, used stateless components, and deployed on Spot instances orchestrated with Kubernetes. The Lambda architecture is something from the big data world, and we wanted to leverage S3 for a lot of the data storage. These are the results we attained: a 40% reduction in cost, and we're talking about tens of thousands of dollars here, so 40% is a huge amount. While doing that we got a 10x reduction in processing latency and roughly 5x more throughput: we initially started with only about two or three billion events and now we're at ten billion, and the performance holds up, which matters because we have to analyze that data and it keeps growing. So if you do cloud native all the way from scratch, and not just at the end, there's a significant amount of cost saving and performance improvement, both coming together. The second use case is primarily about the migration of a large data center. We used SMS from AWS, optimized all the services to be cloud native, again orchestrated with Kubernetes, instrumented and optimized, and we got a 30% reduction in cost and 20x in deployment velocity, though that's partly because pipelines weren't there earlier. The part I want you to focus on is the cost: once you do it right, that's what you end up with, really great performance, some multiple of what you had, and significant cost savings as well, courtesy of Spot instances plus using S3 and so on. That's what you have to aim for.

So, to summarize: we spoke about the different models which are there; embrace distributed systems and really understand them to build your systems; always think stateless versus stateful and use stateless as much as possible; and we spoke about DevSecOps, observability and cost optimization. If there's anything you'd like to discuss with me or with us afterwards, there's a booth; just come over, catch me anytime, discuss anything. Yeah, that's it. Thank you.
Info
Channel: Cuelogic Technologies | An LTI Company
Views: 3,799
Keywords: Cloud native, Cloud native architecture, Cloud Native Applications, Cloud Native On AWS, AWS Cloud, Cloud Migration, Building cloud native apps, Microservices, Cuelogic, Cloud Cost Management, What is cloud native, Cloud Native for Enterprise
Id: eQPVjaXR4Pk
Length: 40min 19sec (2419 seconds)
Published: Thu Mar 19 2020