Distributed Hyperconvergence: Pushing OpenStack and Ceph to the Edge

Captions
Okay, hello everyone, I think we're going to get started now; it's actually about time to start. Thanks for having us today, pretty good turnout. I'm Sebastien, I work as a Ceph engineer, I work for Red Hat; it's on one of the slides anyway. Okay, are we ready?

Hi everyone, my name is Sean Cohen, I also work for Red Hat. I'm a product guy, and I've been doing this with Sebastien for quite a while. Today we have the honor to share the stage with a new member of our team: an old member on the work side, but new on stage, Giulio.

Hi, I'm Giulio, and I work on the integration of Ceph in OpenStack using TripleO, our deployment tool for OpenStack and Ceph.

All right, so let's get started. I hope you'll like the pictures, but I think they basically tell the story. When we started OpenStack, and I'm proud to say that I'm one of the pioneers, at least in the storage aspect of OpenStack — believe it or not, this is my eleventh Summit — we designed OpenStack to be something like an open-source version of AWS. We were looking at the main data centers to serve our clouds, and the original design of OpenStack was not necessarily to go all the way to the edge. But here we are today, in 2018, trying to do exactly that, and the reason is a lot of change. So over the next 40 minutes we're going to try to show you not just what's happening in the current development release, Stein, where we're putting the effort and investment as a community, but also where we're going after this summit, and pretty much for the next several summits, where this is going to be a focus area.

We'll start with a quick reality check, talk about some of the use cases, talk about the landscape, talk about 5G and how it changed things, and then go deeper to understand the edge factors. When we talk about "edge", guess what: it's an overloaded word. It's heavily used and it doesn't mean the same thing all the time. When we say edge, there's a spectrum from a near edge side to a far edge side, all the way down to the IoT device, and I can tell you right now we're not looking at all of the edge cases; we're very practical. We're going to talk about the distributed requirements of edge, the edge factors — 5G, by the way, comes by the standard with a distributed cloud out of the box — and then where we are today. Sebastien is going to talk about what we've done over the last years: we showed you what we can do by co-locating Ceph and OpenStack together in a hyperconverged fashion. Today what we'd like to do is show you what it means for storage as we push OpenStack to the far edge: how we can get storage in a box closer to the edge and still deal with image synchronization, data availability and all that, but with lower latency and a lower footprint. Obviously we're going to mention containers, we'll talk about what we're doing in Stein specifically, and then what the roadmap is going forward.

I want to start with some observations. We started this journey, as I mentioned, with OpenStack working in a cloud, but something is happening outside of our walls — OpenStack walls, I should say: we're getting smarter, and we're getting smarter every year in terms of our capabilities. In fact, next week on this stage there's going to be a smart-countries conference.
All of you are using smartphones, and we're already talking about smart countries, and that's taking place here in Europe, in Berlin, next week. So it's not just the endpoint devices we care about, it's how they are connected to the bigger story. If you want to go one layer down: smart cities. Believe it or not, one of the use cases for 5G networks is smart cities. And think back to the way you got here today: cars are getting smarter, and at some point they won't need us anymore; you can just enter a self-driving car. We have AI, we have machine learning, it can drive alone, and a lot of companies are investing in this capability now. My own car is a hybrid that has some of these functions: it can brake and drive on its own. How many of you don't take a car to work and actually bike to work — raise your hand? Okay, the green people.

All right, let's do a quick reality check. We're talking about augmented-reality capabilities; this is one of the use cases coming our way. But we still want to take that journey, and this is the digital-transformation journey. You heard the keynotes this morning, some of the segments, like the digital bank that is completely online and doing the disruption. The way we're going to communicate is different, but one of the things I want you to start adopting in your way of thinking is this bike: regardless of the new capabilities we're going to gain with the new services, the motion is still there, and the way we move from one point to another is going to affect the service and where we get the service from. We are in the open cloud infrastructure business. This bike will take me to the next stop, which could be a national park. Guess what: there are limited antennas in that area, there is maybe one close point of presence that I'll be connecting to, and I'm going to get the service from that closest antenna in that national park when I ride my smart bike. That's what we need to solve, in a pragmatic way.

So I mentioned 5G, and I mentioned that part of the 5G standard is a distributed cloud by nature. As we ride our bikes or smart cars from one point to another, user-experience continuity is key: I cannot drop my service, I have to manage my floating IPs correctly no matter where I take my mobile device — and that mobile device can be a drone at some point. So we still need to care about the service. Obviously 5G also comes with endless numbers of endpoints: the same cloud tenants and endpoints we used to support on day one when we designed OpenStack, we now need to support by the thousands of units connected to our clouds. And I also need to maintain the reliability: everybody in the telco business knows they have five nines. Nothing has changed on compliance, nothing has changed on security; in fact, security gets worse, because at some point — there's a Chick-fil-A use case, I'm not sure if you've seen it — you can have an edge box in every retail location, and someone can come to your retail endpoint, take the box, and leave. What if they take it with them? How do we secure the box? What actually lives on the box? Those are the new questions you need to start asking yourself.

And obviously we're not going to start by tackling every one of the edge use cases. A lot of us are NFV players or enterprise players with edge use cases.
Some of the edge customers I'm serving today are actually public sector, and they need those new capabilities closer and closer to the customer premises. But the use cases are not just retail; as I mentioned, it's your home, smart homes, smart cars, and so on. We already talked about the virtual-reality aspect of new services. If I'm a consumer, I really want to get the ten-times-faster speed no matter where I am with my new device, and I'd also like to get new services, not just faster ones; it's not about doing things faster, it's about what actually gets unlocked with these new capabilities. IoT is around the corner with these new service capabilities.

And when we look at the edge, it comes with a lot of constraints. 5G specifically: latency, where the distance is less than a hundred kilometers — that's what we're trying to solve. Bandwidth is limited. Resilience: I have to make it autonomous. As I said, that box may be disconnected from the network for hours, days, weeks, and suddenly pop up again on the network. How do I push new application updates to it, over-the-air updates? How do I maintain the metadata, the new metadata of the images, and so on? I mentioned the regulations, which are not going anywhere and are actually enforced at the edge. And obviously I need a way to do day-2 operations; I need a way to upgrade the box. Nothing goes away from those requirements. One thing I have alluded to is the scale: we're talking about tens to hundreds of sites, each of them serving tens of nodes. Let me visualize this for you.

One of the things I heard in a previous session of the edge working group is that people have the assumption that we're providing services from the centralized site all the way down to the far edge, and I have to correct that misunderstanding; that's not what we're trying to do. What we are trying to do is provide you the service, if you're at a far edge site, from the closest point of presence to you, and not from a regional or central site. I need a way to deploy images from the centralized control plane to those edge boxes, but the service itself is actually much more limited; it's a smaller problem to solve. This is another great visualization of what I just said: I'm not trying to take the one-to-one ratio we had in the original data centers and apply it all the way down, because, as someone mentioned in the previous session about the I/O, I'm not going to push all that I/O down to the far edge; there's no point.

When we talk about edge, there are basically three factors you need to bring in. The first one is deployment: I mentioned the day-2 operations that don't go away; they actually get more complex. How do I do upgrades and updates of that edge? The second is form factors, which are not the same, as you've seen here: it's not the same edge form factor if I'm running on a provider premise — where we see edge cloud, central office, which can be a PoP, a point of presence, or a branch office, and so on — and it's not the endpoint either, where that same endpoint serving the end service can be counted by the thousands and at some point millions of devices. The third factor is workloads. We heard that we're still serving legacy and traditional workloads in our clouds; now we need to basically reconstruct the same workloads, or at least strip them down, to be able to run at the edge.
And I have a news flash for you: they're not the same. Some of them are cloud native; some of them will only run Kubernetes at the edge, connected to our OpenStack cloud; some of them will run bare metal only, or containers deployed over bare metal. They're not the same. So when we talk about deployment, it actually connects to the workloads; the workload is what will dictate it. If the workload is stateless, then we're probably just going to have a cache in that box — sometimes, some of my customers are talking about half a terabyte, that's it, so however many images you can have cached, that's it. But guess what: if the workload is stateless and the edge box dies, it doesn't matter; we have another one that provides the service. We're not trying to do DR between edge and edge; our distribution model is from the point of presence to the edge.

This is a great framework that the Akraino Edge Stack working group put together, and I'd really like to adopt it in the way we treat today's talk, also when we talk about storage. I would argue that OpenStack from day one was doing pretty much the Cruiser, the large pods, and the Tricycle, the medium pods; this is something all of you are doing today, nothing new, we can handle it today. When we talk about the first edge use cases we're going to solve in OpenStack, we're talking basically about the Unicycle pod and the Satellite. The last thing we're going to deal with is the Rover; that's already thinking about the customer premises, thousands or hundreds of thousands of devices.

When we talk about Ceph hyperconverged: where do we need storage co-located in the box? It's the stateful application that needs those capabilities that you want co-located. Now, I'm not sure if you know, but we've already containerized OpenStack — OpenStack can be deployed as microservices today — and we can already deploy Ceph containerized, so we've solved that problem. As we move closer to a distributed model, we need to do more of this: we need to co-locate as much as we can, in a containerized fashion, in our deployments.

This is just a quick view of how the different use cases map to the deployment models. As I said, the national core and regional core stuff we already do today; most of the public providers in North America, for example, that use OpenStack — and the majority are using Ceph — are already doing it today. What we're trying to address in the near future is what we call the distributed compute node; that's pretty much the first thing we're going to do, and it tackles two of the boxes I mentioned earlier. We'll finish our talk by going into the last one. So, as we start to talk about the storage, now that we have a better understanding of the form factors and the things we need to care about, I want to hand it over to Sebastien to talk about what running Ceph at the edge actually means.

Thanks Sean. So now we're going to dive into some architecture considerations. As Sean mentioned, as we move to the edge there are really fundamental changes that have to be applied to your platform, so we're going to go through some examples of what needs to be considered, and what needs to be done, in order to properly deploy to the edge. First and foremost — I guess it's a given now — we really have to start implementing hyperconverged infrastructure. We've been talking about this for a few years now, and it has been the real enabler for this kind of setup.
Once you go to the edge, the requirements are really different from traditional platforms: there is no such thing as high-performance, big-computation workloads or things like that, so the way we deploy storage and the way we configure it will be completely different, and we have to do HCI. Basically, HCI consists of collocating compute and storage resources on the same machine. In this particular case, since we don't really have any big performance constraints or requirements, this gives us better hardware utilization, which is a really nice thing to have. The types of applications you will find when you go to the edge don't really require big performance; the VNFs and things like caching will definitely happen at the edge, but these are mostly low-performance services.

This is a little bit on the side, but it's also really handy to deploy this kind of infrastructure if you want to do a PoC or a pilot, because it's fairly minimal: everything can be contained in a single box, or in three boxes, depending on how small you want to get. That's also really convenient for getting your hands on the environment and the service APIs, to interact with them, to configure your applications on them, and to explore the different interfaces that are available once you go with that kind of stack. As a reminder, in this particular example we are really focusing on Ceph, and Ceph is a unified storage system that provides different interfaces to access your data: that can be through object, block, or file system. So again, that's a really good way to get your hands on the new technologies.

If we dive a little bit into what a distributed compute node actually is, this is the typical representation of all the services that will be running on those distributed compute nodes. You will find that we have compute resources — all of your VMs — and the Ceph services, and all of them are containerized. One of the major things that changes from the traditional way we do OpenStack environments is that in this particular example we have the monitors also being deployed there, and the managers also running there. If you're not familiar with Ceph: the monitor is the brain of the cluster, responsible for gathering information and for managing and storing the cluster maps; the manager is responsible for gathering metrics; and the OSDs, the object storage daemons, are responsible for basically writing, reading, replicating, and healing data in the cluster. Typically when we deploy OpenStack, we put the mons and the managers on the control plane, because they are services that control things; but in this particular example, because the pod is at the edge and the control plane is at a different location, the whole Ceph cluster is configured on that particular machine. Obviously, once you have this kind of setup, you again get the better resource utilization; everything is a container, so everything is isolated through namespaces, and you basically get all the goodness of containers for performing upgrades, and even rollbacks if necessary.
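To make Ceph's unified interfaces a bit more concrete: the same small co-located cluster that those containerized mons, managers, and OSDs form can be driven programmatically. Below is a minimal sketch of the block interface using the python-rados and python-rbd bindings; the pool name 'volumes', the image name, and a readable /etc/ceph/ceph.conf with a valid client keyring are assumptions for illustration, not anything the talk prescribes.

```python
import rados
import rbd

# Connect to the cluster described by the local ceph.conf (assumed present).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('volumes')             # pool name is an assumption
    try:
        rbd.RBD().create(ioctx, 'demo', 1 * 1024**3)  # 1 GiB RBD block device
        with rbd.Image(ioctx, 'demo') as image:
            image.write(b'hello from the edge', 0)    # raw block I/O at offset 0
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

The object interface (librados) and the file interface (CephFS) sit over the same cluster, which is what lets one small co-located cluster serve several kinds of workload.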
So what does distributed HCI look like? This is a high-level diagram — Giulio will be diving into a more concrete one in a couple of slides — but typically you have a centralized site where the control plane is running. The control plane basically represents all the services and APIs; there is no storage involved in this component. And then you have the different sites, which represent the edges, and those sites run the HCI nodes we just discussed. This slide basically summarizes everything I just said.

One of the challenges we face when deploying this kind of infrastructure is that we have to find a way to distribute cloud images, because if we go back to this particular slide, the control plane is over here, but the storage and compute are out on this side. It's kind of a tough question: even though the control plane is detached from the compute and storage resources, we still want the ability to have cloud images replicated across all the edges, and not necessarily centralized on the control plane. Remember, when you boot a VM far out on the edge, you have to fetch that image, and there are things like latency involved in the process, so it might take a while. We really had to think about the design; what we have is an initial step, and Giulio will be diving a little more into this, but this is primarily one of the biggest challenges we have at the moment: being able to replicate images, or at least have them available.

For the moment, we will continue to have Glance images on the control plane, and these images will basically be fetched by the compute-and-storage nodes, which is not necessarily the best thing. There are so many ways you could implement this, but at the moment we don't really want to disrupt too much how things are implemented. The design we would like to get to is to have images stored closer to the edge; that becomes possible once Glance supports multiple backends. Once we have that, we can go further and, with Ceph, be able to add image locations pointing to a particular location that represents an edge, tell Glance this image is part of that edge, and then replicate the images through another channel — but I'm diverging here. This isn't in Glance yet; it's an initial step. The way I see it, I'd want it implemented for Ceph, but obviously that's not how it works: we have to do it in a way that is really generic, because not everyone is using Ceph, although I would love to see everyone using it. Since the API is really generic, we have to implement this in a way that any particular backend can consume the same way.
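For a feel of the "image locations" direction just described, today's Glance v2 API already accepts extra locations on an image when the service is configured with show_multiple_locations enabled. The sketch below uses python-glanceclient; the endpoint, token, image ID, fsid, and store metadata are all placeholders, and this illustrates the direction discussed rather than the finished multi-backend feature.

```python
from glanceclient import Client

GLANCE_ENDPOINT = 'http://central.example.com:9292'  # placeholder
TOKEN = 'gAAAA...'                                   # Keystone token, placeholder
IMAGE_ID = '4b1c...'                                 # existing image, placeholder
EDGE_FSID = '9f3a...'                                # fsid of the edge Ceph cluster

glance = Client('2', endpoint=GLANCE_ENDPOINT, token=TOKEN)

# Register an edge-local RBD copy as an additional location for the image.
# Glance's RBD store addresses data as rbd://<fsid>/<pool>/<image>/<snapshot>.
glance.images.add_location(
    IMAGE_ID,
    f'rbd://{EDGE_FSID}/images/{IMAGE_ID}/snap',
    {'store': 'edge-site-1'},  # free-form metadata until multi-store matures
)
```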
This is another diagram, which basically captures how the setup is going to look. Again, you have this centralized place where all the OpenStack controllers are running — they typically come by three — and then you have all the edges, the remote sites. This is basically a zoom-in of what I just showed, where you have the VMs, the OSDs, and the mons, and you can have as many pods — edges, basically — as you want. Now I would like to hand it over to Giulio, who will tell us how to get there: what new challenges we will be facing, what the state of the integration is, what working-group discussions are currently happening, how these concepts translate into TripleO, what the current status of things is, and a little bit about our ideas for the future.

Thanks. I would like to start from what has been discussed before, in another session, by the edge computing working group, because we are trying to join forces, obviously. The edge computing group has a few use cases, one of which — the one the group mentioned — is the mobile-service 5G use case, and they came up with a diagram of how that specific use case could be implemented with OpenStack. I would like to start from that diagram, compare against it, and see how TripleO is approaching the same issue. I will also try to use the same terminology as the edge working group diagram, even though I'm probably more familiar with the TripleO terminology.

This is the diagram that was proposed at the previous PTG, and it's split mainly into three layers, based mostly on the scaling factor: the expectation is that on the far edge side you might have something like a hundred different deployments; in what they define as the edge side, you might have something like ten; and all of them are federated, mainly for authentication and for images, with one main data center. Trying to match that with what is happening in TripleO: we should be implementing something that, for the edge sites, looks very much like the existing TripleO controller — that's where all the OpenStack services and APIs are deployed — and we should be implementing another set of roles defining what the far edge site is, and this looks a lot like a TripleO compute node plus, as discussed by Sebastien, persistent storage, which with Ceph means collocating compute and Ceph. So in the far edge site we would have the nova-compute daemons, Glance API caching, cinder-volume, and all the Ceph services — all of them, obviously, running in containers — across a relatively small set of nodes, let's say three; and in the edge side, which is where all the OpenStack services are, we would have the entire set of APIs, schedulers, the database, and orchestration like Heat.

I want to look a bit more at how we are approaching the storage issues, because this talk is focused on how we are using Ceph and on how HCI is beneficial in this scenario. There are mainly two issues: one is with persistent storage, Cinder, and the other is with images. For Cinder, we opted to go active-active in the far edge site, so you might have, say, three cinder-volume instances running at the same time. That relies a lot on a correct implementation of the backend driver, and we committed to making Ceph — RBD, I should say — one of the first drivers that is tested and behaves correctly in an active-active configuration. This was extremely useful to avoid pushing the need for Pacemaker into the far edge site, which we really didn't want because of the additional requirements. We will also probably work on a set of custom roles in TripleO to make sure that, if you have ten compute nodes in one of the far edge sites — which is eventually not that small — you don't need to scale cinder-volume to all ten of them, or the Ceph monitors to all ten of them; so there will probably be at least a couple of roles.

For the far edge site, I'd like to point out how we group resources together: Ceph will be an isolated Ceph cluster in every far edge site, and its locality with the Nova nodes and the Cinder nodes is mainly given by the use of availability zones, not regions. So in the control plane you will see different availability zones, one for each site. And because of how this is implemented in TripleO, you will be able to scale the central site independently from each and every far edge site: no change to a far edge site, whatever the number of nodes, requires changes in either the control plane or the other far edge sites.
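Since locality is expressed through availability zones, pinning a workload to a given far edge site comes down to naming that site's AZ at creation time. A small illustration with openstacksdk, assuming Nova and Cinder at the site register under the same AZ name as described; the cloud name, AZ name, and the image/flavor/network names are placeholders.

```python
import openstack

conn = openstack.connect(cloud='central')  # clouds.yaml entry, assumption

# The volume lands on 'far-edge-1's Ceph cluster because the cinder-volume
# service at that site registers under that availability zone.
volume = conn.create_volume(size=10, name='edge-data',
                            availability_zone='far-edge-1')

# The instance is scheduled to a compute node in the same site, so traffic
# between the guest and its volume never leaves the far edge.
server = conn.create_server(name='edge-vm',
                            image='cirros',      # placeholder image name
                            flavor='m1.small',   # placeholder flavor name
                            network='edge-net',  # placeholder network name
                            availability_zone='far-edge-1')

# Attach the site-local volume to the site-local instance.
conn.attach_volume(server, volume)
```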
For images, things are a bit more complicated; that's what Sebastien was approaching earlier. Ideally, for a backend like Ceph, you would like to use a mechanism that allows you to deduplicate — to not copy the same image over to every edge site — which you could approach with RBD mirroring; but let's say TripleO is not there yet, at least not for the Stein cycle. What we will see in the Stein cycle is more like Glance using caches locally, so that every image a far edge site needs will initially be pulled over from the central site, but then stays in a local cache, closer to the actual compute nodes. This plays well with two interesting challenges. On one hand, we want all the images in the central site to be available to the far edge sites, but we don't really want to replicate them all, because we don't have that much storage. On the other hand, the local cache is currently well supported, so we could get it done relatively quickly and actually working in Stein, while the multiple backends Sebastien was pointing at are too much for this time, let's put it that way. So what we are putting in place now with caching is a building block; we obviously want to make it better, it's just not happening in this release.

This diagram is similar to what Sebastien was showing, just with a little more detail about how the services are distributed. What you have at the top is the HA deployment of our control plane, together with the undercloud. This is not very different from what we deploy already today, except there are no compute nodes, and there can optionally be a Ceph cluster if you use it for other reasons. In each and every remote site, we will have a Ceph cluster, some compute nodes, Cinder working active-active, and the Glance caching API. If you have questions, I'm happy to discuss this later, after the session.

To zoom in on the issue we have with images: the previous diagram requires Glance to pull the image to each far edge site when the image is needed the first time. Some people asked why we're not pre-populating the Glance cache. This goes back to the same issue: we don't have enough storage, and we don't necessarily use all images at all far edge sites. So yes, pre-populating would help, because on initial deployment you would already have the image locally available, but it has some drawbacks.
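What's being described here is a pull-through cache: the first boot at a site pays the WAN fetch, and every later boot is served locally. Below is a toy sketch of that logic — not Glance's actual cache code — where the cache path and the unauthenticated download URL are assumptions (a real Glance request would carry a Keystone token).

```python
import os
import shutil
import urllib.request

CACHE_DIR = '/var/lib/glance/image-cache'  # path is an assumption

def fetch_image(image_id: str, central_endpoint: str) -> str:
    """Return a local path for the image, pulling it over the WAN only once."""
    cached = os.path.join(CACHE_DIR, image_id)
    if os.path.exists(cached):
        return cached                      # warm cache: no WAN round trip
    os.makedirs(CACHE_DIR, exist_ok=True)
    url = f'{central_endpoint}/v2/images/{image_id}/file'  # Glance v2 download
    with urllib.request.urlopen(url) as resp, open(cached, 'wb') as out:
        shutil.copyfileobj(resp, out)      # cold cache: one fetch, then reuse
    return cached
```

Pre-populating, as the audience suggested, would amount to running this for every image at deployment time; the drawback noted above is that a far edge site rarely has the storage for all of them.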
So this is how your far edge site would look, and I'd like to hand it over to Sean again to discuss a bit more of what is happening in the future.

Thanks Giulio. So, we saw a lot of movement. I promised you this is a complicated problem to solve, but we're getting there, step by step, and you can actually help us. I want to go over the near-term roadmap as well as the long-term considerations. Some of the work items are around temporarily disconnected edge and far edge sites; I mentioned that earlier: in some of the use cases, that box may be disconnected from the network for hours, days, weeks, and then suddenly pop up again, so how do we push updates to it? Giulio touched on the initial deployment: we need to get all of those cached images there to begin with, populated — that's the chicken-and-egg problem of how we bootstrap it, and the edge working group is focusing on that aspect as well. In some of the far edge cases, the workloads are completely stateless and we don't have a storage requirement at all, as I mentioned, so it's purely a compute node, maybe running a VM or a containerized workload at the edge, or bare metal, as I said. Another topic is running the Ceph monitors with container resource allocations: we have a whole resource-management problem that we solved for HCI in the main data center — because with HCI you're always fighting over memory and fighting over CPU — so we still need to deal with it here, but in a different way; that's something we will continue to look at. We also need the ability to deploy multiple Ceph clusters with TripleO, which, as you've seen, we're already working on in the Stein release. And finally the cache, and how we deploy it by default: today, if you deploy Glance, the cache is not even enabled by default, and that changes as we go and deploy edge roles, because we have a new role. By the way, in the previous session of the edge working group someone asked when we're going to have a stripped-down version of OpenStack running only the services we need. I have a news item for you: that's what we're doing. As you've seen, at the edge site only the specific services that need to be there are going to be there; we don't need the full-blown cloud at the edge. And finally, the Glance image synchronization and replication mechanism, which is pretty much this diagram: we need to improve that, because it's going to be key for our workload delivery. At the end of the day it's all about the workload — how we can refresh that new workload, its metadata and so on, at the edge sites.

As I showed earlier, you can see where we focus: the distributed-HCI initial focus is on the Unicycle pod and the Satellite. As we go deeper, we're also going to address the right side, which is the Rover — that is thinking about the remote site closest to the customer premises — but in order to get there, we need to go step by step. Going back to what I said about the last one: right now, with the distributed compute node model Sebastien showed, we're solving the Satellite and the Unicycle footprints, but we also want to start dealing with the distributed Rover, which can be deployed by the thousands. The objective is multiple standalone servers deployed from a single location, connecting to a central site on demand to resynchronize metadata. This can be a standalone server, maybe just one box you put on the table, with compute, sometimes with storage and sometimes without; not all of the workloads require the same things — some are stateless, some stateful.
Obviously the limitations at the edge are a consideration, but in some cases, given the way we're going to design edge services, I really don't care if that single box dies, because I have other ones to take over the service. The key point, as I said earlier when I showed you the goal of edge: as I move with my mobile device, I want to maintain the service; the experience is what I care about, and that's what we're trying to achieve with each of these footprints.

So, to summarize OpenStack at the edge: there is more than one deployment model, more than one edge that we care about. We already figured out how to deploy large clusters; we've been doing it successfully over the years. Now we're looking at the close edge, the distributed compute node, and finally we're going to get to the standalone use cases, which is basically one box. And I can tell you that our ecosystem hardware providers are actually building new boxes now, so don't think about the regular pizza-box 1U servers anymore; we're going to talk about stripped-down OpenStack, and stripped-down hardware as well, for those use cases. That overall transformation is happening now, and the good news is you can be part of it. That's the key takeaway: we're taking gradual steps, and the first steps are happening now. If you want to join us, the edge working group is where we are, with the IRC channel, the mailing list, and so on, and you can follow up on any of the specs we touched on today. This is not science fiction; we're actually working on this, and we already started at the Stein PTG — the gathering and the etherpads are amazing. The reason I put them up there is that you can hear all the voices in the room; they're not yet consistent, but with every PTG we'll move forward, get consolidation, and get prioritization of what we can do next, and that's what it's all about. I want to take the opportunity to thank you for coming, open the bar for questions, and invite my two distinguished colleagues, Giulio and Sebastien, back up. Please use the microphone if you have any questions.

Q: In your last slide you had two control planes — the main cloud, with a control plane in site A and in site B. Is there a plan for synchronization, or are those sites always going to be monolithic?

A: I have another slide in my backup, which I took out intentionally, that deals exactly with that: one of the use cases we have is the synchronization of two control planes. The reason I didn't put it in is that we're not there yet; we still have to solve the initial use cases before we go deeper, because it's a different set of problems. That doesn't mean we don't think about it. At the PTG we listed all the requirements, but we need to start somewhere; I would already call that an advanced use case, and it's not something we prioritized to start with.

Q: I agree with your roadmap — start small. Thank you.

Q: My question is about, for example, RabbitMQ connectivity: when you have latency issues on your network, how does it work? Because if it's affected by a disconnect, the cloud is not operable.

A: You cannot really go and create a new workload on a cloud which is disconnected anyway, so yes, in that sense it doesn't work; but the workload which is already there remains active. There is nothing impeding the local stack from delivering service, or the compute nodes from keeping the guests up, so the existing workload remains active.

Q: But the far edge site, in that case — should you probably run RabbitMQ per site?

A: There are many different options; we would have a similar problem with the database as well, and a similar problem with the scheduler. There are pros and cons to every solution, and one of the requirements we had was to not need to deploy Pacemaker in the far edge site. For the database that would be impossible; for Rabbit it's probably more reasonable, because it doesn't really need Pacemaker, but it still adds load on a node, or a relatively small set of nodes, whose entire purpose is just delivering a service. But yes, we could play with it, and my take is that TripleO is very good at that: you have a very flexible way of customizing your roles and distributing services differently, so this is actually relatively easy to experiment with. Thanks — last question.

Q: Hi, is there any intention to reuse the image cache, or part of the image-cache code from Nova, in the caching that you plan in Glance?

A: I think this is definitely what we want to improve, because currently we're not able anymore to take advantage of the copy-on-write cloning from Ceph: all the cached images are flat files on the filesystem, so we lose that. The initial implementation has just flat files, so if you boot VMs, they will all be backed by that particular file, which is not ideal. The goal later is, once we fetch the image, to put it directly into Ceph, so when you boot your VMs from it, we can take the benefits of Ceph where we can. But again, there are things we need to do first.
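The end goal in that last answer — land a fetched image in Ceph once, then boot each VM as a copy-on-write clone instead of duplicating a flat file per instance — looks roughly like this at the librbd level. A sketch with the python-rbd bindings; the pool names, image name, and snapshot name are assumptions.

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
images = cluster.open_ioctx('images')  # pool where the fetched image landed
vms = cluster.open_ioctx('vms')        # pool the instance disks live in

# Clones hang off a protected snapshot of the base image.
with rbd.Image(images, 'cirros') as base:
    base.create_snap('golden')
    base.protect_snap('golden')

# Copy-on-write clone: near-instant, and unmodified blocks are shared with
# the parent snapshot rather than copied per VM.
rbd.RBD().clone(images, 'cirros', 'golden', vms, 'instance-0001_disk')

vms.close()
images.close()
cluster.shutdown()
```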
So we'd like to thank you again for coming. We're available here, and you can follow us on Twitter; feel free to join our discussions in the working group. Have a great summit!
Info
Channel: Sniper Network
Views: 194
Rating: 5 out of 5
Id: _JAQLqJBMGM
Length: 42min 49sec (2569 seconds)
Published: Tue Aug 27 2019