Implications of 5G and Edge Computing on OpenStack

Video Statistics and Information

Captions
Good evening, everyone. This is Kandan Kathirvel from AT&T, and I'm Haseeb Akhtar from Ericsson. Today we want to talk about the implications of 5G and edge computing on OpenStack. There has been a lot of discussion about edge computing over the last few days, and there is a lot of siloed work happening across multiple companies and multiple open-source communities on edge computing. So we would like to bring the perspective of what the telco world is thinking about when it comes to edge computing: what the use cases are. I just came out of a working session where there was discussion about what the use cases are going to look like, so we would like to present what we think the use cases look like and what we think about the solution, and then we will take questions to address the broader questions about edge computing.

This is the next era in computing. We talked about the centralized cloud, then we talked about cloud for CDN, which is basically providing a content delivery network closer to the edge. This talk covers the use cases in that area: what needs to be done for edge computing from an OpenStack perspective, and how we satisfy the use cases defined for 5G and other areas. "The best way to predict the future is to really create it" — when I did a Google search I got this quote, and I was pretty impressed by it, because when we talk about something it is better to create it, so that people can envision what needs to be done from a technology perspective.

This slide shows the use cases related to the telco, and I'll start with the IoT part. There is a lot of innovation happening in IoT, and a lot of companies have already started supporting IoT. Where IoT stands today is that there are connected cars, and sensors have been installed everywhere: in homes, in stadiums, even in agricultural fields. These sensors log a lot of information, and that information needs to be processed quickly so it can be given back to the application, so the application can make decisions or use that data for further processing. That specific need is primarily what IoT is looking for.

The other use case is AR/VR. There is a lot of innovation happening in AR/VR; everybody knows there are glasses people are using for augmented reality, and virtual reality is also really picking up. AR/VR especially requires video processing to happen very quickly; otherwise people will see jitter when the video is sent to the device, and you see a lot of jitter when we try to process this in a centralized cloud. When we talk about a centralized cloud — if you take a public cloud, it is usually deployed in up to 25 locations, some clouds up to 30 locations, but it is not in the range of 2,000 or 3,000 locations. These AR/VR applications, especially image processing — for example, face recognition in an AR application — need quick processing at the edge, and the centralized cloud will not satisfy that specific need.

The other use case we see with the telco is the virtualized mobile network.
Today there are boxes installed at the cell tower that do whatever processing has to be done to support the 4G LTE network, but 5G is turning those into virtualized applications. This is innovation happening in the telco industry to move from a hardware platform to a virtualized platform. Over the last couple of years we have heard about NFV applications — basically taking networking applications like firewalls, load balancers, and routers and converting them into the virtualized world. Now the innovation is happening in 5G and in the RAN area to virtualize that content. This also requires being closer to the edge; in this case the edge is the cell tower itself. It has to be close to the edge so that the processing happens very quickly and that specific application works.

The other aspect is the equipment installed at customer locations: for example, Wi-Fi provided at a specific location such as a stadium or a customer's home. These wireline access devices are also getting virtualized and containerized, and they require quick processing that cannot be hosted back in the data center. The concept is really bringing the cloud very close to the customer, or very close to the end user; that is what edge computing is about. The term edge computing is really convoluted, because it is not one defined word that everybody uses: some people call it distributed cloud, some people call it fog computing. The bottom line is that it means taking computing power closer to the customer; that is what edge computing means in this context.

How many locations do we really have to install? This really depends on the application. If the application can withstand the network latency of processing in the centralized cloud, then it can be hosted in the centralized cloud, as we stated. But if the application really needs quick processing, both in terms of network latency and in the processing of the application itself, then it has to be installed very close to the edge. A lot of study has been done — I have seen professors and universities studying how close this edge has to be — but work still needs to be done, because there is no common definition yet of how close the edge needs to be. Some people say it has to be installed at a cell tower; some people say it has to be installed in a home. So where this computing gets installed has to be flexible. That is why I am showing 2,000-plus locations, but this is not the limit; depending on the use case it could be 10,000-plus locations.

This slide talks about what we have with a typical OpenStack cloud deployment and what we really need from the edge perspective. A typical OpenStack cloud supports x86 compute, it supports about 50-plus sites as we talked about, and it has a local control plane, meaning OpenStack is installed locally in the data center, along with a virtualization layer to virtualize the compute hosts.
That stack today is a very generic operating system and a very generic KVM — everything that has been put in place is generic — and that will not cut it when we actually go to the edge. The edge has to be right and light; that is the key for the edge. What does that really mean? It is not just going to be x86 processors; it will be any type of processor — it could be an FPGA, it could be a DSP, it could be any type of processor. Why? Because when we try to install something in a customer's home or at a cell tower, it will not be x86 in all cases. We talked about 2,000-plus locations, but from a general use case perspective it could actually need up to 10,000-plus locations.

Zero-touch provisioning is also one of the key aspects when it comes to the edge. Why do we need zero-touch provisioning? In the data center there is usually an operations team sitting there; mostly everything is automated, but the operations team can still do some manual work or troubleshoot something. At the edge, if it is installed in a stadium or in 10,000-plus cell towers, it is not easy to have a workforce go out every day to troubleshoot or to install the application itself. That is why it needs zero-touch provisioning. We know that from an OpenStack perspective there is a lot of complexity involved in installing OpenStack, and in this case we really need to think about zero-touch provisioning, because — especially if we take a telco — we cannot send a truck to the cell tower every time to troubleshoot something or install something.

What are the options? It needs to work both ways. One option is regional support, meaning OpenStack runs in a region — Haseeb is going to talk more about that — and controls compute hosts distributed across multiple locations. The other option is a local control plane, meaning a very thin OpenStack is installed locally and controls the compute hosts in that location. Why do we need these two options? Because there are locations that can only accommodate a couple of servers, with no way to put a heavy control plane there. For example, at a cell tower, if you put two compute nodes there is no way to put another two or three servers for a control plane, so it has to be very thin, and in that case a regional control plane is helpful. But if there is a location where you want really high availability and you can accommodate a couple more servers — for example, central offices — then you can have a local control plane there. The key is less compute overhead. We also need a thin operating system supporting the virtualization layer for both containers and VMs; that is also very key when it comes to the edge. A rough sketch of how the regional option might look from an API consumer's side is shown below.
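To make the regional-control-plane option a bit more concrete, here is a minimal sketch using the openstacksdk Python library. It assumes a clouds.yaml entry named regional-control-plane and a convention where each edge location is exposed as a Nova availability zone; the cloud name, availability zone, workload name, and placeholder UUIDs are all hypothetical, and this is an illustrative flow rather than the presenters' implementation.

```python
# Minimal sketch (not the presenters' implementation): one regional OpenStack
# control plane schedules a workload onto a remote edge site by targeting an
# availability zone that maps to that site. All names and IDs are placeholders.
import openstack

# Credentials come from a clouds.yaml entry; "regional-control-plane" is assumed.
conn = openstack.connect(cloud="regional-control-plane")

# Assumed convention: each edge location is exposed as a Nova availability zone.
EDGE_SITE_AZ = "edge-celltower-0042"

server = conn.compute.create_server(
    name="vran-du-0042",                    # hypothetical vRAN workload name
    image_id="<image-uuid>",                # placeholder image UUID
    flavor_id="<flavor-uuid>",              # placeholder flavor UUID
    networks=[{"uuid": "<network-uuid>"}],  # placeholder network UUID
    availability_zone=EDGE_SITE_AZ,         # pin the workload to one edge site
)
# Block until Nova reports the server ACTIVE (or raise on error/timeout).
server = conn.compute.wait_for_server(server)
print(server.name)
```

The thin local-control-plane alternative would look much the same from the consumer's side; the difference is that the cloud entry would point at a site-local endpoint instead of a shared regional one.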
This slide is a summary of the use cases we are seeing — I just explained them in words, and this is a picture of them. The first bucket is the virtualized mobile network: virtual RAN elements and virtual 5G components. The second bucket is wireline access: virtualized wireline access and virtualized network apps. Then we talked about AR/VR and drones; here high-bandwidth media content is key, because they do image processing, they do face recognition, they process the image, so they really need that high-bandwidth media content. There is also content delivery at the edge. This is one of the key things: the CDNs being set up right now sit in maybe 60 locations across the globe, but that is not close enough, especially as 5G and new applications like AR/VR come in; you need content delivery processing and caching much closer to the edge. The other part is IoT: IoT and fog gateways need the edge, and so does security management at the edge. That is another interesting use case, because now application processing is happening at the edge, so there will be a lot of emerging security technology — you need to make sure your application is not getting compromised — and that is also one of the key use cases.

This slide talks about where the edge gets deployed. As I stated, there is not one solution that fits all — it could be one OpenStack that fits all, but where you deploy it really varies depending on the application. In the current use cases we see from the telco world, the edge can also be deployed in a data center, because if that data center is close to the user, then the edge can be deployed there. We should not think that data centers are completely eliminated from the edge — that is the wrong assumption. The real reason for the edge is to be closer to the customer, so if the data center is close to the customer, the edge can be implemented in the data center. The second location is central offices. People who know the telco world know that cell towers are all connected to central offices, and that is where the 5G RAN virtual network functions will be managed and installed. The reason I call these current use cases is that this is something we need now, not something needed in another five or ten years — this is happening now. Customer premises is also a very important use case: having a customer provider edge at the customer's site, such as a universal CPE, also requires the edge at the customer premises. The near-future use case — near future because it is also happening, maybe not this year but very soon — is having the edge implemented at the cell tower itself.

Okay, thanks, Kandan. Let's look at some of the business drivers for the edge cloud. Basically, with the proliferation of 5G technology and the IoT devices that Kandan mentioned, the user experience is demanding more and higher-bandwidth content at the edge, and with some of the AR/VR applications the demand for very low-latency edge processing is also coming up. At the same time, when you have the control plane and some of the applications running together at the edge, there is a need for security in terms of isolation.
We have to make sure that if content and control are sharing the same virtualized resources, the security aspect is looked after in the proper manner. If those are the drivers from a user perspective, then operators will look at delivering the edge cloud based on key incentives from a revenue-generation perspective. There will be new service opportunities coming up with IoT — that is already there, it is already generating new business — but if more capabilities become available at the edge, that should drive further services and businesses. The same goes for AR and VR: once the latency requirement is delivered at the edge, AR and VR technologies are expected to grow in a much bigger way than what we have seen with Pokémon Go and the like. On the other side of the equation, we must reduce cost, or at least invest in ways that are more cost-effective. The virtualization of the 5G RAN and the core, for example, is an incentive for cost reduction that can be put at the edge: a lot of these components today sit at the cell site, at a RAN aggregation location, or in the core, and many of them can be virtualized and put on a cost-optimized hardware platform, which builds the case for cost reduction. And of course the thin cloud at the edge — as Kandan mentioned, it has to be right and light, as he coined the term — is not necessarily a heavy data-center type of cloud; the form factor has to be smaller than it is today, which again makes the case for cost reduction. Coupled together, cost reduction and higher revenue will drive the edge use cases.

This slide is an example of how a distributed workload could help in telco-type applications. The green line shows the normal scenario: if you run an application today through the 4G network — or even in a 5G network with virtualized RAN components sitting in a distributed data center towards the edge — you connect to the EPC, the Evolved Packet Core, in the centralized data center, and from there to the internet; this is a video application, for example. There is a significant amount of latency created by going through the edge and then traversing all the way to the centralized data center, and that does not really fulfill the low-latency requirement we have for many of the applications driving the edge use cases, namely IoT and so forth. In the red-line data path, we have local applications running on top of a virtualized EPC and virtualized RAN at the edge — assuming we have the small form factor and a lighter version of the operating system and so forth — and we would basically be reducing the delay to two, three, or four milliseconds, a range that would be acceptable for those applications to be meaningful.
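As a rough back-of-the-envelope illustration of the green-path versus red-path comparison, the sketch below estimates round-trip delay for a centralized EPC versus a local edge breakout. The distances, fiber propagation speed, and per-hop processing allowance are assumptions chosen only to show the shape of the argument, not measured values from the talk.

```python
# Back-of-the-envelope latency comparison (illustrative assumptions only).
# Light in fiber travels at roughly 2/3 of c, i.e. about 200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float, hops: int, per_hop_ms: float = 0.5) -> float:
    """Round-trip propagation plus a crude per-hop processing allowance."""
    propagation = 2 * distance_km / FIBER_KM_PER_MS
    processing = 2 * hops * per_hop_ms
    return propagation + processing

# Green path: cell site -> aggregation -> centralized EPC / data center (assumed 1500 km, 8 hops).
central_path_ms = round_trip_ms(distance_km=1500, hops=8)

# Red path: traffic breaks out at a virtualized EPC co-located with the edge site (assumed 20 km, 2 hops).
edge_path_ms = round_trip_ms(distance_km=20, hops=2)

print(f"centralized EPC round trip : ~{central_path_ms:.1f} ms")
print(f"edge breakout round trip   : ~{edge_path_ms:.1f} ms")
```

With these assumed numbers the centralized path lands around 20 ms while the local breakout lands in the low single-digit milliseconds, which is the kind of gap the green-line versus red-line slide is pointing at.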
Now, looking at the technical aspects: how do you really architect the edge components? Do we have central control versus decentralized control? How much do you put at the edge, how much into regions, and how much into the center? We have to go through that right-and-light balancing act, and as Kandan mentioned, the industry really has not figured out what the right way of doing it is. Basically, we have to keep an open mind and keep a flexible option to deploy either at the edge or at a regional site. Following that, here is a somewhat high-level architectural view of what a proposed solution could be. One option is that we slim down OpenStack — today's OpenStack, the way we have it, is somewhat heavy, it has a lot of components — into a version that only does the things necessary for the edge, in terms of the Nova, Neutron, and even the Keystone aspects that are truly necessary, and that slimmed-down version of OpenStack is deployed at the edge and then controlled by, or connected with, a centralized OpenStack. That is one potential option. The next option is that we control the edge, or support controlling the edge, using a regional OpenStack. That would be similar to the OpenStack we have today, but again a limited, scoped-down version that is only applicable to supporting the edge use cases we have been discussing — namely IoT, AR/VR, virtualized RAN, and wireless access. These are a couple of variations of the architectural view that we think the industry, or all of us as a community, should look at to support the edge cloud use cases.

It really comes down to this: we have OpenStack in every location, so how do we orchestrate across 10,000-plus locations? When you orchestrate across 10,000-plus locations you need some federation of authentication, and you need to figure out how to distribute images to ten thousand locations and how to upgrade them. The use cases split into two layers: the infrastructure layer, which needs to support VMs and containers, and the upper layer, which needs to support design, orchestration, control, policy, and analytics. Today in the AT&T Integrated Cloud we use ECOMP, and we open-sourced ECOMP — that is what ONAP is in this picture, on the left-hand side. It is open source, with a lot of contributions coming from the community, and along with OpenStack it is really powerful when these two things work together. ONAP sits one layer above the locations, above the OpenStack layer at the bottom, which in this case could be distributed to 10,000-plus locations. We still need to address the use cases we talked about in OpenStack — make it right and light — and then ONAP on top can provide the orchestration across multiple locations.

What can be done with this filters down into five use cases. First, today ONAP can talk to OpenStack through a Heat template. Why a Heat template instead of calling a lot of APIs? You send one Heat template that has the information about the compute, the information about the network, and the information about the storage — you package everything into one template — and you send it to OpenStack, and it creates the VNF or the VM you need. A minimal sketch of what such a template can look like is shown below.
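To illustrate the "one template carrying compute, network, and storage" idea, here is a minimal sketch of a Heat Orchestration Template built as a Python dictionary. The resource names, image, flavor, and CIDR are hypothetical placeholders, and the commented-out submission step is only one possible way an orchestrator such as ONAP could hand the template to OpenStack, not the presenters' actual integration.

```python
# Minimal sketch of a Heat (HOT) template bundling compute, network and storage
# in one document. Names, image and flavor are hypothetical placeholders.
import yaml  # assumes PyYAML is installed

template = {
    "heat_template_version": "2016-10-14",
    "description": "One template describing an edge VNF: network + volume + server",
    "resources": {
        "edge_net": {"type": "OS::Neutron::Net"},
        "edge_subnet": {
            "type": "OS::Neutron::Subnet",
            "properties": {"network": {"get_resource": "edge_net"},
                           "cidr": "10.42.0.0/24"},
        },
        "vnf_volume": {"type": "OS::Cinder::Volume", "properties": {"size": 10}},
        "vnf_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "name": "edge-vnf-0",
                "image": "<image-name>",    # placeholder
                "flavor": "<flavor-name>",  # placeholder
                "networks": [{"network": {"get_resource": "edge_net"}}],
            },
        },
        "vnf_volume_attach": {
            "type": "OS::Cinder::VolumeAttachment",
            "properties": {"instance_uuid": {"get_resource": "vnf_server"},
                           "volume_id": {"get_resource": "vnf_volume"}},
        },
    },
}

print(yaml.safe_dump(template))
# An orchestrator would then submit this document to Heat's stack-create API,
# for example (sketch only, assumed SDK call):
#   conn = openstack.connect(cloud="edge-region")
#   conn.orchestration.create_stack(name="edge-vnf-stack", template=template)
```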
Second, ONAP can support multiple OpenStack regions. Third, it supports nested VNF setups, which is also very critical. What do we mean by a nested VNF setup? In the previous picture, as Haseeb talked about, some applications stay at the edge and some applications stay in the data center. So even as the edge evolves and gets deployed in multiple locations, the way it is going to work is that when a tenant needs a workload — say the tenant is a cell-phone user or an AR/VR user — and that workload is spun up, some workloads are going to be at the edge, some workloads are going to be at a central office, and some workloads are going to be in the data center. Why do we need to spread them out? Because there is no way to accommodate an enormous number of workloads at the edge: if we deploy at a cell tower or at a customer site, there is no way to deploy a thousand compute nodes in a customer's home or at a cell tower; only a certain level of packaging can be done with the compute. So there needs to be a distribution across multiple locations.

That brings the challenge of scheduling: how do you do scheduling, and where do you place the workload — do you place it on the edge cloud, in a central office, or in the data center? That specific analysis, making a real-time decision about what needs to be done, is something ONAP can do, but there are also open standard APIs that need to be developed for this. What do I mean by open standard APIs? When a user comes in, they should not have to worry about whether they are placing the workload with telco provider one, telco provider two, or a public cloud provider. There needs to be an open API — that is where ONAP and OpenStack can help — one set of APIs that can be called regardless of which provider it is, as long as the user wants to create a resource and there is a financial arrangement between the user and the provider; the APIs are exactly the same whether you go to this provider or another. That is something the community has to develop, because it would give the flexibility of placing the workload anywhere, across any provider. A toy sketch of what such a placement decision might look like follows.
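The scheduling question — edge site, central office, or regional data center — can be pictured as a simple policy function. The sketch below is a toy model with made-up tiers, latency budgets, capacities, and costs, intended only to show the kind of real-time decision an orchestrator like ONAP would have to make; it is not an actual ONAP or OpenStack scheduler.

```python
# Toy placement policy (illustrative only): pick the cheapest tier that satisfies
# a workload's latency budget and still has capacity. All numbers are made up.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Site:
    name: str
    tier: str             # "edge", "central_office" or "regional_dc"
    rtt_ms: float         # expected round-trip latency to the user
    free_vcpus: int
    cost_per_vcpu: float  # relative cost; edge capacity is scarcer and pricier

@dataclass
class Workload:
    name: str
    vcpus: int
    max_rtt_ms: float     # latency budget taken from the tenant's policy

def place(workload: Workload, sites: List[Site]) -> Optional[Site]:
    """Return the cheapest site that meets the latency budget and has room, else None."""
    candidates = [s for s in sites
                  if s.rtt_ms <= workload.max_rtt_ms and s.free_vcpus >= workload.vcpus]
    if not candidates:
        return None
    return min(candidates, key=lambda s: s.cost_per_vcpu)

sites = [
    Site("celltower-0042", "edge", rtt_ms=2, free_vcpus=8, cost_per_vcpu=5.0),
    Site("central-office-7", "central_office", rtt_ms=8, free_vcpus=200, cost_per_vcpu=2.0),
    Site("regional-dc-east", "regional_dc", rtt_ms=35, free_vcpus=5000, cost_per_vcpu=1.0),
]

# A latency-critical AR workload lands at the edge; a bulk workload falls back to the region.
print(place(Workload("ar-face-match", vcpus=4, max_rtt_ms=5), sites).name)
print(place(Workload("video-archive", vcpus=16, max_rtt_ms=100), sites).name)
```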
The other aspect is supporting complex networking. How complex is this networking going to be for the edge? That is the multi-million-dollar question right now: do I support SR-IOV, do I support overlays, do I support all sorts of SDN configurations? To me the answer is really yes, because at the end of the day the networking application — or any sort of application — needs performance, security, and all the characteristics we are planning to support, or already support, in the data center; it is still needed in the edge cloud. So it does need that complex networking. How do we solve it? We still need it thin and right, yet we are also talking about all these use cases, so this is where the balancing act comes in. As Haseeb was saying, we need to see what the use cases really need, how thin it can be while still supporting those use cases at the edge, and also stretch them across other data centers so the applications have a way of communicating with each other. Another good example is AR/VR: say a cop is wearing AR glasses, looks at a person, and the application immediately does face recognition on that person. Now it has that person's identity, but it has to be compared with millions of records, which cannot all be brought to the edge, so some selective algorithms also need to be worked out. It is really mutual work: what the application has to do at the edge as well as what the infrastructure has to do. Here we have focused mostly on the infrastructure part — what ONAP has to do and what OpenStack has to do — but there is also application-level work needed to support the edge. The other capability is policy-driven, metadata-driven placement; this is the open API I talked about. The expectation of this API is that it is really policy-driven — which provider do I place the workload with — and there needs to be flexibility for the tenant to say, "I want this and I am ready to pay this much," so that they get placed into that particular cloud. That open API could provide policy-driven, metadata-driven placement of the workload.

This slide summarizes, for the OpenStack community, where we are today with OpenStack and where we need to take OpenStack for the edge computing world. As I stated on a previous slide, currently we only support x86 servers, but here we have to support x86 servers and peripherals — peripherals have different types of processors, not x86 in all cases, and virtual RAN is one good example. Then there is control-plane slicing: we talked about how big the slicing has to be. Thin means we have to get rid of some of the things we usually support in the data center — the heavy PaaS layer, the heavy anything-as-a-service layers — thin it down a bit and go to an opinionated set of specific use cases supported at the edge, because that is really needed to keep the control plane small. We also talked about whether we place OpenStack regionally or in the data center itself; it really depends on the application's needs, but there needs to be flexibility when we develop OpenStack to support the edge. The other thing is the complex install and upgrade process in current OpenStack. This is one of the key points — everybody is using OpenStack, and you may be aware that when we try to install OpenStack and do upgrades there are pain points. A lot of people, including us, have solved some of those things with automation, but there are still pain points, and when we go to the edge with OpenStack we really have to care about zero-touch provisioning: there is no way this edge cloud and OpenStack can be deployed across ten thousand locations without proper automation and zero-touch provisioning.
The other thing is containers. We have both the VM world and the container world playing together now, and in the morning session I talked about how we support containers using OpenStack. In this world there will be a play of both. People could ask: we are at the edge, why do we need a VM? We get asked this every time, and there is a specific need: security plays a key role, and containers are still catching up on security. Some applications — for example, certain government-related applications — cannot run in containers; there are specific government guidelines today and delays in adopting containers, so industry-wide some work still needs to be done. That is why both containers and VMs need to be supported at the edge. Software availability is another key point. A typical OpenStack deployment, even with all sorts of redundancy, gives about three nines of availability for a single OpenStack install. When we deploy at the edge, as I stated, there is no easy way to send a person to fix something, so we really need higher availability, and when we talk about higher availability there need to be fewer bugs and a more compact application scenario to support that high availability. Then we talked about deployments at scale: we are not talking about 100 data centers anymore, we are not talking about 200 or 300 data centers anymore, we are really talking about 10,000-plus locations, so that also needs to be considered.

Okay, we can take a few questions, given the timing; people who want to ask questions, please go to the microphone.

Audience: What is the relationship — is this edge cloud a platform as a service?

Kandan: Not that many use cases will require platform-as-a-service support at the edge, but as a user, when I come in I need a specific VM or a specific container — let's say I need a database in order to do my video processing, or some other specific service. As the customer, I do not want to have to install everything after getting the VM; then I would lose the efficiency of having the edge close to me. So when we create the VM it has to have the packages installed, and that is where PaaS comes in. It is really a mixed use case, but we also have to keep in mind that it has to be thin so it can be created and processed quickly at the edge.

Audience: My question is about CORD. There is a CORD project which has a different approach to the same problem, and it contradicts the approach you just explained for edge computing. Do you see CORD and edge computing being integrated together, or is it a different strategy?

Kandan: Are you talking about the CORD project, C-O-R-D? So, as I stated, everybody is trying to attack the edge, and it is also a siloed situation right now; each open-source entity is trying to focus on a specific area, so it needs some connection. As far as I understand, CORD also addresses some of these aspects; they are also focusing on some of the central office applications running as virtual machines, or the 5G applications running as virtual machines, but in terms of the infrastructure they are still relying on containers and OpenStack to support it.
So whatever we talked about — slimming down OpenStack, running it under a regional control plane, or running it locally in a small footprint — that has not been addressed by CORD as far as I know.

Haseeb: Can I add to it? CORD has the wireline part today, for example the virtual OLT; there is a demo that has been going on for some time that the industry has seen. But I think the area where the industry needs to come together is what Kandan is mentioning about zero-touch provisioning — CORD is using XOS for that today, so we need to see. And in one of the other slides that Kandan shared, it has ONAP; I think the industry is converging towards creating automated provisioning that would help the use cases for the edge.

Audience: Thank you. On that one, there is M-CORD for mobile as well that addresses similar things. Now my question is: do you foresee multi-tenancy in your blueprint being used in this topology — for example MVNOs being hosted as tenants — or is this purely just AT&T?

Kandan: No, this is not about just AT&T. This is not just an AT&T use case; I am presenting telco use cases, and they could apply to other use cases as well. There was a gentleman in a previous session who talked about deploying in retail stores, for example — there are multiple stores in different locations and you can install the same thing there — so the use cases are not limited to the telco itself. If OpenStack resolves these use cases, we would like to see this as a solution from everyone's perspective. So it is definitely not limited to AT&T or the telco world; we see this as a generic set of use cases. The reason we are sharing them is to show the use cases from a telco world, but definitely other use cases are also involved.

Audience: My point was about using a hypervisor or not — just bare-metal workloads for one tenant, or for multi-tenant purposes. I was in that session; there are different use cases, but in this case, for what you are proposing — not a slim box, but reasonable power at the edge to deliver these services — it is still quite expensive equipment to invest in across multiple thousands of locations. So is this going to support multiple tenants via hypervisors, or just bare metal with a single tenant's workload? That was the purpose of my question.

Kandan: Sure, thank you — and thanks for the clarification on that question. It has to be multi-tenant. Even in the data center today, when applications get deployed they get deployed under different tenancies: for example, with a firewall, the CSO department does its firewall thing and another department does something else, and they are given separate tenancies. So from a user perspective, and also from a telco application perspective, you do need the multi-tenant concept. There is no way to dedicate a specific server to a specific application anymore — that is the notion of the cloud itself — so multi-tenancy needs to be supported by default.

Audience: You talked a lot about the quantity of data centers — a lot more data centers at the edge — but how do you see the quantity of servers? I suspect at the edge sites it will be very small and in the data centers very big, so from a global point of view, will the overall spread of servers be more in the data centers or more at the edge? Thank you.
Kandan: I think it really depends on the provider and on space, power, cooling, and other aspects. At a cell tower, or when deploying something at a customer location, you are usually limited to two or three compute nodes — that is the maximum scale. But when we talk about the next hop, a central office or a wire center, those can usually accommodate on the order of a hundred servers. And if you really need to deploy a thousand servers for some processing, there is no way to deploy that at a cell tower unless it spans multiple cell towers or multiple locations. So the edge by default has a smaller number of compute nodes, but the exact number really depends on the provider and how much they can accommodate and package into that specific location.

Haseeb: I would like to add that, as I mentioned, the revenue and cost model is really what is going to drive how many servers you put in which location, and how many locations there are. It will depend on which use cases see more uptake in the market and on what services the operators are rolling out, and those things are still unknown. But I guess what we know today is that there will have to be a smaller form factor in many of the cases, especially for the vRAN and the virtualized core in the 5G cases, which will have to have a much-reduced footprint.

Kandan: Right, right. Thank you, everyone — we really appreciate you all joining. Thank you.
Info
Channel: Open Infrastructure Foundation
Views: 10,175
Keywords: OpenStack Summit Boston, Architect, Containers, Public Clouds, Telecom, container management, infra, operator, Container orchestration, Haseeb Akhtar, Gnanavelkandan Kathirvel
Id: 9d5JtONGQSA
Length: 39min 42sec (2382 seconds)
Published: Tue May 09 2017