Introduction to Red Hat OpenShift Container Platform

Captions
Hi there, my name is Gautam Nagaraj, and welcome to this session on Red Hat OpenShift, the platform for digital transformation. So let's begin. It's well known that the business of IT organizations these days is to create value for their organization, and the best way to create that value is through the development of new applications and the ability to bring new features to existing applications. Whether those are cloud-native applications (i.e. you're going greenfield on the cloud), whether you're doing a lift and shift where you're moving your applications from on-premise and from monolithic towards microservices, whether that's in the field of AI and machine learning, or your standard Java and .NET applications, we know that nowadays IT organizations need to be able to deliver more applications, deliver more features quickly, and also be able to fail fast and fail often so that they can find the way that works. We know now that the fundamentals of this are to have cloud-native application development, to adopt containers, and to adopt a hybrid cloud strategy, and we believe at Red Hat that Red Hat OpenShift, our enterprise container platform, is the right solution for you. So let's dive in. IT needs to evolve for the digital age, so what is that evolution? First, it will be in the development process itself. We've very much heard about the waterfall process, where you get the requirements initially and document them, you spend three months documenting those, then you go ahead and go through the phases of designing, implementing, and testing, finally delivering the application six months down the line. And that's all good and well, it's built exactly as per the requirements that were given, but the reality remains that over those six months customer requirements have changed. What the customer thought he wanted is different from what he actually needs, and once he starts using the platform, that's when he recognizes that there is a need for change.
But what are you going to do at this point? It's a costly process where you have to restart the whole thing again. So that's where the development process moved from the waterfall method to the agile method. And what is the agile method? Instead of doing one run of 100 percent, you do five runs of 20 percent, for example, where in each of those 20 percent you deliver a certain amount of capability, you hand it over to the customer, the customer looks at it, uses it, gives his feedback, and then you can go ahead and deliver the next twenty percent. While delivering that twenty percent you're delivering extra functionality as well as making some changes to the previous run to better fit the customer's requirements, and at the end, after five iterations, you have something that is close to what you wanted and very much close to what you will continue to use, because you are satisfied with what you got. That's all great for the developers, but what about the operations teams? They are left out, because in the end this development process does not include them, and that's where DevOps comes in; I'm going to be talking more about that later. The next aspect is application architecture. We've always heard of having monoliths and n-tier or three-tier architectures. You had a huge mainframe that held the entire application stack, and then in the last decade or so people started breaking that down into three tiers: a web tier, an app tier, and a database tier. But the reality remains that if any one of these tiers was to go down you would have downtime, and the fact is that all the application logic remains in the app tier, so that's not real division as such. What's happened is people have moved from that logic to a microservices architecture, where you build your services not by tier, whether that's web, app, or database, but rather by the business functionality that you can deliver, and I'll be
talking more about that as well. The next aspect is hosting. People always had their data centers, these huge rooms filled with racks and servers, making a lot of noise, using a lot of electricity and cooling, run as an internal service. Then there was a transition towards hosted services, where you would have a collection of organizations all underneath one umbrella, hosting their services together, with a service provider providing that function. And now the next phase of it is the cloud, where you're simply accessing the service; how it's hosted, where it's hosted, the availability, the redundancy, all of that is taken care of by a cloud provider. People are looking to move towards that, but there are some challenges in moving to cloud, and how do we solve them? Data sovereignty rules are one very critical aspect, but there is also the question of portability: if you're running an application on your premises and you need the capacity to burst, or you want to move it from on-premise to, let's say, Azure, or from Azure to IBM Cloud, how do you actually do that? How do you get that portability? That's another challenge. And finally, what we see is that the methodology of how people build applications has moved: it has gone from physical servers, where you have one server and one application, towards virtualization, where on one physical server you have many virtual machines, each virtual machine hosting an application. And now we're looking at the next step, which I would say is the virtualization of the application layer, where you have a single application in a container with all its dependencies, and that means that on a single server you can host multiple
applications without having the OS layer in between. So let's see how that's done; let's go through the different aspects. To start off with, we're talking about DevOps. As I said, we had the waterfall method, we had the agile method, and then let's bring the operations teams into it. What's happened is that over the years we had a lot of innovation in processing speeds, in languages, in methodologies, but what we see is that there is always friction when you're moving from development to operations. Developers build the code, and the operations teams get handed that code as a file, along with a set of instructions that says go and put this on the staging or the production environment. Then you have this challenge of, okay, I'm going to do that, but it's not working here when I try to test it out. The developer says, well, it works on my laptop and it works in my development environment, so the problem must be on your side; the operations team says, we have not done anything, you guys have to fix it. This friction has always existed, and we believe that DevOps is the way to solve it. So what is the definition of DevOps? There are a lot of definitions out there, but the one I like the most is being able to do things simpler, faster, and in a repeatable manner. And the core tenets that make any process DevOps are the following. First of all, it's standardized, which means that every time you run a process you get the same output. Number two, it's automated, so there should not be any manual intervention; you should be able to kick off a job to create a development environment for a new developer, or make a copy of your production environment, without it requiring any further human intervention. And finally, it should have a process for continuous improvement: you should be able to understand
where the process is taking more time, where it can be further optimized, and where there can be more automation, so that you can achieve a truly DevOps process. Talking further about the basic building blocks, DevOps says that everything is code. It's not only infrastructure as code or application code: everything is code. The way you want your application or your environment to look is captured as code, and that means it is always releasable, at any point in time. It's not about waiting for Thursday evening; it could be done on a Thursday morning, it could be done on a Sunday morning. You have the rebuild-versus-repair philosophy. This is what causes the friction between developers and operations: we make changes directly in our environments. The developer gets a development environment and updates the Java version, or the operations team secures something in staging or production and does not reflect that change in development. That is the repair philosophy, where we go in and tweak the end product. What we should do instead is have a golden image; that is what we tweak, and at any point in time that is what gets released and verified to work. We should obviously have automated testing, and we should have some sort of pipeline methodology so that the different stakeholders have their place in it. That could mean being able to check your application file into a repository; doing some sort of testing, which could be unit testing for your application; having your security team verify that there is no malware and no vulnerabilities in your application stack, so that's another gate; and finally there could be a gate where management gives their approval, looking at the overall process and deciding whether they want to go forward or not. All of these together are what we call a pipeline
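The gated pipeline described here can be sketched in a few lines of Python. This is a minimal illustration of the idea, not any specific CI/CD product; the gate names and the shape of the release record are assumptions made for the example.

```python
# Minimal sketch of a gated release pipeline: each gate must pass
# before the release moves on; any failure stops the promotion.

def unit_tests(artifact):
    # Placeholder gate: run the application's unit test suite.
    return "binary" in artifact

def security_scan(artifact):
    # Placeholder gate: check the stack for malware and known vulnerabilities.
    return artifact.get("vulnerabilities", 0) == 0

def management_approval(artifact):
    # Placeholder gate: a manual or policy-driven approval step.
    return artifact.get("approved", False)

GATES = [unit_tests, security_scan, management_approval]

def run_pipeline(artifact):
    """Run every gate in order; return (released, failed_gate_name)."""
    for gate in GATES:
        if not gate(artifact):
            return False, gate.__name__
    return True, None

release = {"binary": "app.jar", "vulnerabilities": 0, "approved": True}
print(run_pipeline(release))  # (True, None)
```

The point of the sketch is the shape, not the gate bodies: every stakeholder is a gate function, a release only promotes when all gates pass, and a failed gate reports exactly where the process stopped, which is what enables the continuous-improvement tenet.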
That pipeline is a basic tenet of DevOps. We believe that with this, people understanding what the process is and you having the technology to support it, you can achieve the DevOps principle, which will make the friction between developers and operations disappear. What is the next aspect? The monolith versus microservices question I was talking about: the application architecture. It's well known that when you have a monolith you build everything together as one big application and release it, and any time a change needs to be made, it has to be made on that whole application. A microservice is a different method, a different mindset, so let's go through it. Let's say we have a sample application, a very simple one that probably everyone can relate to: an airline booking system. In an airline booking system you have three major areas (I mean, there are a lot of them, but let's talk about these three): first there is registration, where you register to get onto the portal and have an account; next is service inquiry, where you're looking for flights; and finally payment, so that you can actually book a flight. If you were to build this as a monolith, you would need to tightly couple all the different business functions into one unit. What are the disadvantages of this? First, you need to build using one technology stack: say your back end needs a database; because this is one unit, it all needs to connect to that one database, so you are restricting your technology stack. The other aspect is that in business you generally have a different owner for registration, a different owner for service inquiry, and a different owner for payment. When you have one single unit, any time a change needs
to be done, that change needs to be verified across the entire list of stakeholders. That increases the testing time and the number of stakeholders involved, which leads to a more delayed release than would normally be required. The final aspect is scaling. Let's say you're getting a lot of load; it's the start of the business and everyone is registering, for example. That means a lot of load on registration, but maybe not many people paying. So what happens? You still need to scale up your entire application to meet that load, either horizontally or vertically, which means there's a lot of unused capacity within service inquiry and payment. And the reality in airlines is actually the opposite: there are going to be a lot of people, 99 percent of them, doing service inquiries, looking for flights, looking for prices, looking for availability, and maybe five percent of those people doing payments. But if you go for the monolithic structure, you're going to have to size the whole thing up for those 99 percent of people who are doing service inquiries. Now let's talk about a microservices architecture. In a microservices architecture you split your business functions so that each business function has its own independent set of resources; that could mean it has its own web application and its own database. What that means is that, first of all, you can independently scale these: as I told you, there's going to be a lot of load on service inquiry but very little on payment, so you can use your resources in a better, more optimized manner and scale only the service inquiry. The next aspect is the back-end technologies. You're no longer limited to a single set of technologies by the fact that you have to combine everything together. You want to use
a relational database for your payment service? Go ahead and use Microsoft SQL Server or an Oracle database, sure. For your registration you need to register people's passports or their Emirates ID, for example? Sure, go ahead, you can have a NoSQL database, a MongoDB, sitting there. For your service inquiry you need a lot of caching? Go ahead and put a Redis server in between; that's not a problem. And the fact is that you don't need it for the entire stack; you have it only for the business service that needs it. Because each service owner is independent, they can make changes to their own application stack without impacting the other application stacks, as long as, in the sense of a function, they provide the service that is needed. As long as I can query your web service and you tell me what the payment amount is, I'm happy with that; I don't need anything else from you, and you can work however you want internally. So that's the advantage. Now let's talk about the other aspect, which is moving to the cloud. I have a funny slide here: when people say they want to move to the cloud, they have this overall vision, but the execution remains lacking, and there's a very serious issue behind this, which I'll explain. Basically, we're talking about the portability of your applications. You have an application that your developer builds on a laptop; how can you ensure that that same application moves to your development environment, or your staging or production environment? Normally you'll have different OS layers and different stacks for each environment. Suppose you have a requirement to move from your production environment to a private cloud; that's a different stack itself, and public cloud is a totally different stack again. So the portability, while people have tried it, doesn't really exist, and that is where the challenge has been. When you say
that you want to transition to the cloud, yes, people can transition to the cloud, or people can stay on-premise, but being able to move back and forth between the two, that's where the challenge lies. We at Red Hat believe that containers are the solution that will help you with these three different challenges. So how do containers help? Let's look through that. We said containers are the solution; that's great, but what exactly are containers? If you look at the slide, it shows you that containers are application building blocks made up of layers. You have the base layer, which could be your RHEL image, and on top of that you add layer one, layer two, layer three. What are these layers? Layer one we take from the infrastructure team: these are your OS requirements, the packages and configurations that should be there in any application from the infrastructure side. Then we go to the enterprise architect and get all the middleware runtime details that are needed: this is the Java version you should have, these are the integration bus connectivity details, this is the software and the dependencies you need to run a Java application. And finally we go to the application team and get the actual application we're going to run, the binary. So we take the binary, the middleware layer, and the infrastructure layer, bundle them all up together, and get something called a golden container image. I'm saying golden; it's a container image, but this is basically the source of truth. And it can be built in many ways: you could reuse the infrastructure and middleware layers and have a second developer give you a different set of application binaries, so even if you're running different binaries, you still control the underlying middleware and infrastructure layers. What
happens is that this becomes a self-contained application: you can go ahead and run it wherever you want, as long as you have a container runtime. So let's look at VMs and containers side by side; what's the difference? As the slide shows, on the left-hand side we have our physical server, on top of which we put a hypervisor layer (VMware virtualization, Hyper-V, Xen, whatever you want), and then we carve that physical server up into virtual machines, each having its own OS. So you need another operating system, and on top of that you have the application dependencies and the actual binary. For each application you're consolidating, from having them on different physical servers to having them on one physical server, but you still are not consolidating the operating system; you need a separate operating system for each one of them, and you need the hypervisor layer to do the actual carving of the physical server. On the right-hand side, let's look at how containers operate. You have the same physical server, on top of which you put RHEL; that is the single OS that's required. The container runtime is built into the OS, so there are no additional applications required. It used to be that you would go with Docker, that's what Docker used to do, but now we at Red Hat have our own runtime that we built into the OS layer, which is first of all more secure and does not require root access. With that container runtime we carve up, or sandbox, the RAM, the CPU, and so on, and allocate to each container its own space, in which it can have the application and the application dependencies that you want. Containers do not interact with the other containers running on that same host unless you deem so, and interacting means that they talk through
any network calls. What happens then is that you could have two different web servers running, with different dependencies, that still coexist peacefully on the same container host, on that same physical server. What are the advantages? Let's look at both side by side. For the virtual machine you get full VM isolation, that's true, but you have high resource usage: you need disk space for the operating system plus your application stack, the VMs are static, and you have the OS cost. On the container side you still have application isolation, but because these are self-contained applications that do not require that OS resource usage, you don't have, let's say, a separate C: drive as you would in Windows; you have only the space that you need for your application. The memory and CPU usage also drops, because you don't have that OS overhead, and finally there are the cost savings you get from having a single OS with your applications running on top of it. So that's one thing. How does this help if you are moving from a monolith to microservices? The reality remains that you need to split those microservices along business functions, and that does not mean you go from a single monolith to three microservices; it could be hundreds of microservices. Are you going to run a hundred VMs for a single application stack? No. But with containers you could have those hundreds of containers if required, and still not have the extra resource and licensing costs that come with additional operating systems. What else? Portability: because you are packaging this all into a single package, I can build once and deploy anywhere. As a developer I can build on my laptop; I
give that entire container image to the operations team and just say, please go ahead and run it. The operations team can run it on the staging environment and it will run exactly as it did in development or on my laptop, as long as, of course, it points towards the required resources; in development it goes to the development database, in staging it goes to the staging database, and those are all parameterized. There's no need to change anything in the application stack when you move from one environment to another; everything is parameterized and you are shipping the package as-is. If it runs on your laptop, it will run the same way on the server, and the same goes for production. That means you get real portability. The other advantage: suppose you want to port this from your on-premise environment to your cloud. You can do that too, because as long as you have that RHEL layer, and RHEL runs everywhere (physical, virtual, private cloud, public cloud), you can run your containers in any location, and that gives you true portability, true freedom from lock-in. So: build once, deploy anywhere, and that's what I'm showing you; the developer builds, and that same image is promoted from development to testing to staging to production. This gives peace of mind to both the developers and the operations teams, because they know exactly what they're shipping. So we've talked about containers and shown their advantages over VMs, but how do you get started? With containers it's exactly like the move from bare metal to virtual, where you needed a hypervisor: you need a container host. But do you have a single container host? No; no application is an island, and the same is true for containers. You would have sets of container hosts, so that you have the high availability features, the fault
tolerance features, the networking capabilities, the security capabilities, the access control capabilities, and the operational requirements for logging and metrics. Where are you going to get all of that done? That's where, for day-one and day-two operations, you need something called a container orchestration solution. If you looked at the market four years ago, container orchestration was segmented into small groups: Kubernetes from Google, Pivotal Cloud Foundry, Rancher, Docker Swarm, Mesosphere, and a few other smaller players. But if you look at it today, the market has consolidated and Kubernetes is the king. Kubernetes has the majority market share and has been crowned the de facto standard for container orchestration, and you'll even see that people who previously had competing solutions have moved on to Kubernetes and are following it. So everyone's following Kubernetes; we agree with that, but what we want to say is that Red Hat was there at the start, so we have the first-mover advantage. We've been with Kubernetes since it was open-sourced by Google, and we are a major contributor. What does that mean? It means we not only take from Kubernetes, we also give back: we are the second biggest contributor to Kubernetes and the second biggest influencer of the direction of Kubernetes, of what features and capabilities it has, and that is a real advantage for our customers. Whenever there is a new version of Kubernetes, we take it from the upstream project, which is available to everyone, and we secure it, harden it, test it, and certify it with software and hardware vendors, and we ensure a nine-year certification lifecycle. That means you can go from the upstream version, which is always changing and has no single point of
support, to Red Hat's certified Kubernetes, which is OpenShift, and that gives you the peace of mind of stability, the peace of mind of security, the peace of mind of support, and the peace of mind that whenever a new vulnerability is discovered, we at Red Hat, because we are such a heavy contributor and know the code inside and out, can give you the required fix for that bug rather than telling you, please upgrade to the latest version. As long as you're on a supported version from Red Hat, you have the peace of mind that Red Hat will give you a patch for any vulnerability. Our record has been that 97 percent of the time vulnerabilities are patched within 24 hours, and 99 percent of the time within the first week, and that shows you the difference between Red Hat and the competition. So what do you get with Red Hat OpenShift Container Platform? You get a platform that can run on-premise, in a virtual environment, in a private cloud, or in a public cloud like Azure, AWS, Google, or IBM. On that platform you can run your traditional applications: if you have applications running today and you want the container benefits, go ahead and move them into containers. You can run your ISV products: Red Hat middleware, IBM middleware, AppDynamics, F5, anyone who supports running on OpenShift, you can move their products onto OpenShift. And you can build your cloud-native greenfield applications on it, as well as machine learning and AI, and that basically means this is the platform for the future; that is what we are providing you with Red Hat OpenShift Container Platform. So what are the advantages for the operations team? They have automated operations, they can go for multi-tenancy, they get
the secure-by-default capabilities, they have the option for chargeback and showback, so metering, and they are able to control access in their network. They have OpenShift running on-premise, in a virtual machine environment, in a private cloud, or in a public cloud, with seamless movement back and forth. That gives the operations and IT infrastructure teams the operational capabilities they're looking for. And as you can see, it is secure by default: we secure the application layer, we secure the infrastructure layer, and we also make it pluggable, so you can bring in third-party vendors like Palo Alto, Twistlock, Trend Micro, or the HashiCorp Vault solutions; basically any SIEM, vaulting, security, or anti-malware solution can be extended into OpenShift. What do the developers get? They get a self-service provisioning portal, which means the operations team can define what is available to a developer, and the developer can go ahead and simply request those services, of course with quotas and controls as authorized by the operations team, but it removes the dependency of waiting on another human being. They have CI/CD pipeline capabilities out of the box, and they have the capability of building their applications to be containerized by default, so the developer can focus on the application. OpenShift gives them the ability to go directly from source to a container image: provide the source code of what you want to run and what sort of platform you want to run it on (Python, PHP, Java, .NET Core), and OpenShift will give you back a container that has your source code packaged as a binary, ready to run. Those are some of the capabilities. They also get logging and metrics; they can actually follow a request through to
see what happens, and the technology stacks that are supported are various, so those are the core advantages the developers get. What are the other cases? Where can you run this? Quite frankly, wherever you want. You can see here that OpenShift runs on Azure, either as a managed service or by using infrastructure as a service and installing OpenShift on top; the same can be said for AWS, Google Cloud, and IBM, and of course on-premise as the customer requires it. The supported platforms cover the majority of what's out there: VMware, Red Hat Virtualization, bare metal, OpenStack, and the list is extensive. When we talk about customer references, we have more than 2,100 customers around the world using OpenShift and the capabilities it gives them to onboard their applications and run them in a cloud-native, microservices, DevOps-based culture. Locally, we have some public references from this region. First of all is Emirates NBD, whose WhatsApp banking, which I would say is very innovative within this region, is running on top of OpenShift. It shows you the capabilities they were able to add on top of their core banking functionality: they started adding microservices for WhatsApp banking, internet banking, mobile banking, and so on, so they can keep extending their core functionality with OpenShift. Another example, from outside the region but one that shows you the extent of this, is Lockheed Martin: in the development they did for the F-22 and F-35 they included OpenShift clusters for their communication purposes, and that shows you the range of places where you can use OpenShift and the gains they got from utilizing OpenShift within their
environment so to recap why we would like to go for containers is because we want to enable microservices so monoliths to microservices we want to go and have a devops platform so first of all not only having a waterfall and going to agile but then involving the operations teams and that's to basically get a devops culture and being enabled to go to cloud whether that is public cloud private cloud no problems or a government cloud that's also capable and the the fourth advantage you get is to move away from the traditional uh bare metal and virtual machines towards a a virtualized application which is basically what container gives you and the portability that you get with that so to avoid the lock-in so red hat first of all you have the container basis the container orchestration platform quite frankly the industry standard is kubernetes and what we like to say is that with red hat openshift you are getting the enterprise kubernetes container platform and as you can see here red hat is the the second biggest contributor to kubernetes we influence it so we can in fact enable and control the direction in which it goes and we have done that for our customers and quite frankly the comprehensive aspect of how we do uh our container platform that was shown here is that all do you only get a platform no with our services team we make sure to enable you so that you have we have a container adoption program that basically ensures that you get containers into your enterprise in a proper manner we enable your teams with our open innovation labs we get them in and involved and get them into the culture of how to do their warps so it's not only about giving you the technology it's also about getting your people into the process so you get the full triad of people process and technology in your environment i'll be going through the overview of the openshift container platform and giving you an idea of how it can be used currently we're at the login screen and you can see that 
I'm going to go ahead and use my account. My account has been given administrator access — there is role-based access control, and we can use Active Directory, LDAP and other providers like SAML. When I log in, the first thing that happens, since I'm an administrator, is that I get the admin console; we also have the ability to switch to a developer view if required. Here on the initial page we have the ability to create projects. Projects are basically isolations in which we can enforce quotas and limits and give access to the appropriate teams. One example of creating projects would be dev, test, staging and production; or they could be created by group or function — hr is one example, and we can have more, such as finance.
So what are we trying to achieve with this console? Let me give you an insight into the logic of it. There are different domains within IT: the network teams providing network access control; the security teams in charge of vulnerabilities and compliance; the operations teams doing monitoring and day-2 maintenance; the developers trying to build and deploy their applications; the infrastructure teams providing the application environments, high availability, reliability and the ecosystem; and the storage admins providing the persistent storage used by the applications. Normally, for any request, you would go and talk to each of these teams individually, and each has its own systems through which it does the work or grants access. What we're trying to do here is have OpenShift be the singular console: the network admins can log in and control the network — ports and network controls, which applications can talk to which other applications, and the methods by which traffic flows externally and internally; the storage admins can provide the block or file storage that is available and can be utilized by the applications hosted on OpenShift; the security teams can define the compliance level and do the correct inspection of the traffic flowing between these applications; the developers have a portal through which they can deploy their applications; the infrastructure teams can detail exactly how they would like the infrastructure to look, so that the developers only provide the final layer — the applications — while the underlying tiers are provided by the infrastructure and application middleware teams; and the operations teams have the capability of monitoring the overall environment. So what we're doing is bringing all of these functions together within one platform, OpenShift, and we'll now go through this platform to see all the different capabilities and how everyone is enabled by it.
Now that we understand how the platform works in general for all the different domains, let's look at it from a developer perspective. I'm going to change my role to become a developer. I'm inside the finance project, and as a developer, what do I get from this platform? First of all, a self-service catalog — let's go and look at it. You can see from this catalog that we have quite a lot of options: instead of the developer telling the infrastructure team "I need a new development environment", "I need this database" or "I need this sort of runtime", the developer can choose from whatever has been enabled by the infrastructure team in the self-service catalog — whether the database is MongoDB, MySQL, PostgreSQL, MariaDB or Microsoft SQL Server, plus others that can be configured, or a Python front end, because I'm going to be building a machine learning application. So I can go ahead as a developer and say I want to build a Python application.
Now, since this is a container platform, it can only run containers — but as a developer, my main focus is writing an application. I have written an application in Python, available in my source code repository; it's a very simple application that takes a random color and creates a page with that color. What I'll do is take the location of that application code and say I want to deploy my application. You'll notice that as a developer all I am providing is the location of my application code and the language it is in — beyond that, I don't need to know anything about how to build or deploy containers. I'll keep the defaults just to show you, and create the application.
When we click create, what happens? Let's go and see — the build is running. Looking at the logs, the first step that OpenShift Container Platform performs is to go to the location of our source code and copy it. Once the copy is done, it does some analysis and builds the application binary. The binary then requires some additional capabilities — the dependencies — so OpenShift takes the application binary and the dependencies and layers them together to make a container. It stores that container image in a registry built into OpenShift, and from that registry it deploys the application. You can see it doing all of those tasks right now: it has built the binary, and in this case it is getting the required dependencies — which it has done. At this point you can see it got the required dependencies, collected all of them, created a container image and pushed it to the registry. Once it pushes that to the registry, you will notice that there is a container up and running. Now how
do I access this application? There is the capability to keep an application internal, so that it is not externally available, but we can also expose a route — a networking function where we are given a URL, and that URL takes us to the container application. It has picked purple in this instance (as I said, it's a random color), but this shows you how easy it is for a developer to go from application code to having it up and running on OpenShift Container Platform.
Now that we've looked at it from a developer perspective, let's look at it from an admin perspective. As admins we are interested in high availability, scaling, controlling the routing and so on, so let's move to the administrator view. If I look at the workloads, I can see that I have one container running and servicing the requests. Let's say we now want high availability — multiple containers instead of a single one. In the traditional sense, to do this you would have to create an additional VM, create a load balancer and install the application, and then you would be set up for high availability. How do we do it on a container platform like OpenShift? Just click to increase the number of pods. You can see that I am scaling it from one to two, and two to three — I've scaled my application up.
Does that mean I can now see the different containers servicing my application? No, and there's a reason for this: from a networking perspective, the default configuration is a sticky session — once you connect to the application, you will be serviced by the same container again and again. Let's make the change; this would be done by a networking admin. As the networking admin, I connect to the route — the external route that was available — and click edit, and this is how easy it is: I have to add two values. Let me show you those two values: one is the balance setting, which I set to round robin, and the other disables the cookies so that the setting takes effect. Once that's done, let's go back and check the application's external URL, and you will notice that I'm now connected to a different container each time — going through them, you can see two yellows and one purple. This shows you the load balancing in round robin.
Let us now investigate the high availability. We know we can reach all of the containers; now let's talk about what happens if a container fails, or if one of the servers hosting a container fails. Let's simulate a failure. The container fails immediately, true — but immediately afterward the orchestration platform creates a replacement container, which is what you see up here, created straight away to replace the one that failed. There is a controller always comparing how many pods I need — which we have set to three — with how many are currently running, and if it's fewer than three, it creates more. So the failed pod gets replaced, and you'll see it become available again. As for availability, you would notice no drop at all: the load balancer ensures traffic is served once the application is back up and running, and you can see here that it is available. So that is high availability and load balancing: there is an inbuilt load balancer, and we have the networking capability to change the routing specification to make it round robin or not.
Now, what happens when the developer releases a new version of the application? I'm going to log in and edit the source file: I'll change it from the options we had and make it red and green, and commit the changes. My source code has changed, but now I need to reflect that in my application. How do I do that? Let's go back and become a developer again; in the developer view it is a single button to say: dear OpenShift, I've made a change in my source code, please go and get the new version. Once I do that, it starts a new build — which it has started here — and if we view the logs, it is similar to what happened before: it goes and gets the new source code, clones it, and updates the container image to become version two. We have version one, which is currently running, and now we're going to get version two.
The interesting thing I would like to point out here is the way the transition happens. We have a capability called rolling updates. A rolling update means that if I have three pods running, then when the new version is available and completed, those three pods or containers will be replaced — but the transition is seamless: one new container comes up, then an old one is decommissioned, then the same again, so that I have two of the new version and one of the old, and then three of the new version and zero of the old. That way there is never a service loss for the end user. You can notice that my application is still being serviced at this point; now a change is happening, and if I show you the page, we are still being served by the older containers until, at a certain point, we are transitioned over to the newer ones. As I keep refreshing the page, it has now transitioned to the new containers. It seems
that all of them have come out red — the randomness has just made it so. Let us wait and see if we get one of the other colors... it does not seem to be the case. Just to make it more interesting, let's go into the admin view and delete one or two of the running pods so that we can get a pod with a different color. It's as if I'm showing you the high availability again: I still have one container running, and the newly created one comes into the load balancer and becomes available. So there is a seamless transition — you are able to make changes in your application code without affecting the end user.
Now, after our previous demos around operations and development, let's take it all together. The area of AI — artificial intelligence and machine learning — is really taking off, and with OpenShift we believe we are the perfect platform to cater to your AI and ML needs. With Red Hat OpenShift Container Platform you have the capability to gather and prepare your data, sometimes using Jupyter notebooks; different data scientists can develop their models, again on the platform; and once those models are available, we have the capability to deploy them — using business rules engines or any other methodology you can think of — and embed them into applications. Once they're embedded, we can of course have constant monitoring and feedback, making the model better and better over time. So whatever stage you are at, we can cater to those requirements. The demo I'll be showing you now takes all of these together, mixes and merges them, and gives you a very useful, very relevant use case. So let's start.
I've created a new machine learning project, and in the developer view the first thing I want to do is show you the capabilities that come with the object detector. Let's say we want to make an image classifier. I've already shown you deploying from the catalog; this time we have a container image accessible to us, and if I put in that container image name and location, we can deploy it. I'm deploying it right now, and since this is an image it doesn't require a build — I have already given it a pre-built image, so it's like a template that it will scale up and create the application from. I'm just waiting for it to complete, and once it completes we can open up the URL.
This application currently has two capabilities. One is API access, where we can provide certain files and it returns the output; but generally we would like to do it in an easier way, which is the web app — that is, for human use. Now that we have this capability available, let's upload an image to show you what it can do. I'll take the car-and-bike image, browse to it and upload it. What it's going to do is analyze this image and report what it believes the detected objects are. It has already detected a car and a motorcycle, and you can see the precision — the probability — with which our model detects each one. We could even decrease the probability threshold, and of course it makes no difference here because the model has identified the objects very accurately; but you can see this sort of filter working the other way: once I raise it past 92 percent, the model can no longer say for sure that this is a car, and so it drops out.
So it is all about understanding what probability threshold to keep, and improving on it. Let me show you another case — the multiple cats — using some images we get from the internet. Here we have a picture of four cats; at a threshold of 78 percent the model can only detect one of them to be a cat, but if we decrease the threshold, once we hit 55 percent it detects another one, and at 43 percent it detects another cat. This is where, if you let the probability threshold go lower, it starts detecting other things — right now it has detected a dog, and other things that are not relevant. That's why we need to try to enforce a higher probability, although of course raising the threshold then drops some of the objects that could have been labeled. This is just one example of how quickly we can get an object detector up and running using OpenShift — the model has been pre-trained, and we have it available to inject wherever we require.
Now let's take it a step forward — not only object detection, but some speech analysis as well. How would this work? Let's add an additional application: a speech-to-text converter. I have the URL — the location of the container image — so I'll put it in and bring it up. What will it do? It will take an audio file as input — currently the model is for English, though it could of course be for Arabic as well — and once it's up and running, we provide a file and it outputs the text, the speech, in that file. We have it available now, so let's go and access the API; this is of course what a developer would normally do, but it lets us try it out. Now I do have
some audio files here — let's just listen to one quickly: "your power is sufficient, I said". All right, let's try that out. This is the file that starts with 8455; we browse to it, select it, and execute. The application now analyzes the file that has been provided, and as output we should get the text that was said: "your power is sufficient as i said it's as i said". There is some gap, of course, but that's to be expected with a smaller trained model, and it will get more and more accurate as we use it further and give it more feedback.
So we have now taken some audio and converted it to text. Let's do some analysis on that text — let's analyze whether the text that came out carries a positive or a negative sentiment. There is something called a sentiment classifier, so let's bring that up. You'll understand by now that I'm basically bringing up different applications that form the different steps within a more complex application; I'm just spinning up the next one, which is going to be the sentiment classifier. The logic of the sentiment classifier is that it can take as input the text that comes from the speech-to-text converter — and if we're worried about how it will handle the throughput, we of course have the capability of putting in queues and API-management tooling on top, also available from OpenShift and Red Hat. Now let's try it out. In this case I'm just going to put in some text, say: "2020 has been a hard year, has been really affected by the lockdown" — something that is of course true. Let's see what happens when I execute it. The output we get is the sentiment classification, telling us whether this is positive or negative: the classifier has identified a 0.07 percent chance that this is positive and a 99.9 percent chance that this is negative — which is very valid.
So now we know that when someone provides some speech, we get the text and can analyze whether it's positive or negative. What else can we do? For the negative side of the speech — if anyone is being negative — we can do a further analysis, and that's where we put in a toxicity classifier. It takes only the negative comments and analyzes how toxic each one is: whether it is extremely bad and offensive, whether it contains some sort of obscene portrayal, whether there is a threat aimed at someone, whether there is an insult aimed at someone, or whether the text shows hatred, hostility or violence toward any race, religion, gender and so on. Once we have the toxic comment classifier up, we can put some text in — this is of course just for demo purposes, but it gives you an idea of what can be strung together when we combine everything. Let us say: "i do not like john" — a very simple text. If I execute it, the classifier's output says that while it might be a negative sentiment, it does not carry that sort of toxicity or obscenity, nor is it a threat, an insult or anything related to identity hate. If I change it to "i will punch john" — acceptable for demo purposes, of course — then we get a different result: this is a toxic comment, and there is a threat aimed at someone. You will see that the percentages have increased: a 51 percent chance of being a threat and a 96 percent chance of being a toxic comment.
This is just a demo showing you the capabilities that come with the platform — how quickly we can bring in certain applications, how we can combine these applications together, and how we get what is required. It is of course a very generic demo; for anything that has to be further enhanced, we at Red Hat can come in, show you a more customized demo, and demonstrate the capabilities of OpenShift that we have only covered at a high level in this presentation. From there, I would like to thank you all for the time you spent with us, and I hope it was educational. Please do let us know if we can help you further with any of your requirements, specifically around containers, microservices and DevOps. Thank you very much.
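The self-healing and rolling-update behavior demonstrated in the session can be sketched in a few lines of plain Python. This is a minimal illustration of the control-loop idea only — `Pod`, `reconcile`, and `rolling_update` are illustrative names of my own, not OpenShift or Kubernetes APIs:

```python
# Sketch (assumption: plain Python, not OpenShift source) of two control loops
# from the demo: a replica controller that reconciles the running pod count
# toward a desired number, and a rolling update that replaces old pods one at
# a time so serving capacity never drops below the desired count.
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)

@dataclass
class Pod:
    version: str
    name: str = field(default_factory=lambda: f"pod-{next(_ids)}")

def reconcile(pods: list[Pod], desired: int, version: str) -> list[Pod]:
    """One controller pass: create replacements until the desired count is met."""
    while len(pods) < desired:
        pods.append(Pod(version))          # failed/missing pods are recreated
    return pods

def rolling_update(pods: list[Pod], new_version: str) -> list[Pod]:
    """Replace pods one at a time: surge one new pod, then retire one old."""
    desired = len(pods)
    while any(p.version != new_version for p in pods):
        pods.append(Pod(new_version))      # bring up one new-version pod
        old = next(p for p in pods if p.version != new_version)
        pods.remove(old)                   # then decommission one old pod
        assert len(pods) >= desired        # capacity never drops below desired
    return pods

pods = reconcile([], desired=3, version="v1")
pods.pop()                                 # simulate a pod (or node) failure
pods = reconcile(pods, desired=3, version="v1")
print(len(pods))                           # back to 3
pods = rolling_update(pods, "v2")
print(sorted({p.version for p in pods}))   # all pods now run v2
```

In OpenShift itself these loops run server-side: the replication controller restores the failed pod in the demo, and the rolling deployment strategy performs the v1-to-v2 transition without a service interruption.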
Info
Channel: Red Hat with Gautham
Views: 30,064
Rating: 4.8764477 out of 5
Keywords: OpenShift, Kubernetes, Red Hat, DevOps, Microservices, containers
Id: dAWPuqZwlOA
Length: 62min 0sec (3720 seconds)
Published: Thu Oct 01 2020