NetApp Cloud Volumes and Cloud Data Services with Eiki Hrafnsson

My name is Eiki Hrafnsson. I'm Icelandic, but I've lived here for a few years. I want to talk today about NetApp Cloud Volumes, and I also want to take you through what we're doing in the newly formed Cloud Data Services business unit here within NetApp. What I hope to bring to the table is an updated understanding of NetApp and cloud, for you and hopefully for those watching at home as well.

When we talk about cloud, I should probably show you my credentials. I came over to NetApp only about eight months ago. I'm co-founder of a company called GreenQloud; I may have met some of you over the last seven or eight years I've been traveling the U.S. At GreenQloud we built a public cloud in multiple data centers, and we helped develop some of the core technologies used today in private and hybrid clouds. I've been at this a long time. This is actually my first day job; I've never done anything but my own startups. I turned 40 a few weeks ago, and I really love building software, specifically automation software, and what we're doing here at NetApp is really exciting to me. When I say that I'm a data fabric believer, what it means to me is that data management, and data in general, is the key ingredient for getting to that elusive hybrid cloud, multi-cloud application strategy that companies and development teams are trying to reach. Some of them are getting there quicker than others, but we think we can really help everybody get to the data fabric they want.

GreenQloud was acquired last summer; it was a seven-year-old company. Before that we participated in the core development of CloudStack from the beginning, and we've contributed to OpenStack and Kubernetes. We ran a public cloud for four years, which included compute, object storage, and users from over 80 different
countries. About three years ago we pivoted the company and started selling the software we had built to run a public cloud, which we call Qstack. Since then we've also been working a lot on application orchestration; that's where the Kubernetes part comes in.

For NetApp, because of their cloud strategy (a strategy I have a feeling we may have influenced a little through years of talking with NetApp), they saw an opportunity in getting the tools and products we have, getting the experienced people we have, and building a foundation for the next generation of NetApp services. That's what they were gaining out of it. As for what we were getting out of it, beyond the acquisition itself there had to be something keeping us here. Personally, having co-founded the company, I was not in a hurry to get into a day job. I'm a startup guy; I need some degree of uncertainty and chaos, because that's where the fun stuff happens, and settling down in some boring, enterprise-y company was not really what I was looking forward to. But to my surprise and great happiness, when we started talking to this new business unit of NetApp, I saw that we actually do have cloud-first leadership at NetApp, all the way to the top. The things we are starting to show the world now are really exciting from a cloud perspective, because the possibilities they open up are truly new. The whole industry has been trying to get to this point where there's an easy button to get to the cloud, an easy button to get to performance, and all that. So we had a vision (I'm talking in past tense) at GreenQloud of what DevOps should be; NetApp has a vision of what data management should be; and we figured
out that by combining the two, we would have these new cloud data services that I'm going to show you a little of today. Essentially, what became of GreenQloud is that we are now NetApp Iceland. I actually work out of Seattle, and we're fairly distributed, but the teams have been growing, and the components of our earlier product, Qstack, have become what we call the service delivery engine for future NetApp cloud services.

This is a new business unit, and I can give you my perspective on it: we have a lot of ingredients here to make a lot of cool new stuff. The way I think about it is that when Amazon Web Services was starting out, they started with S3, then they came out with EC2, and with S3 and EC2 you can build everything else; basically everything Amazon has since come out with can be built on EC2 and S3. We are trying to get to that point at NetApp, where we create those basic services and APIs that fit into NetApp's data fabric view, which will enable us to roll out new services, incorporate third-party services, and integrate, to really build a large solution and ecosystem around the services we're building. It's all in the service of helping companies transform, and I'm not talking about that in a fluffy, politically-correct way. I'm talking about really doing it: presenting solutions, figuring out what customers need to do and how we can help, and trying to automate that and make it available to the masses. So this business unit is really leading the Cloud Data Services strategy and sales, and we're working directly and physically with the hyperscalers, the big public cloud providers. We've got our teams inside Azure and inside AWS, working very closely with these providers, both the ones we have announced and the ones we haven't announced yet. But we're also relying on the excellence of operations from all of
the NetApp teams, all of the NetApp business units, and the solutions that are already there. We in this business unit are here to actually support, build, and maintain these services. We're not just going to pump out products that somebody tries to install; we're actually going to run them and support them. And lastly, everything we're doing goes toward evolving the concept of the data fabric. When I came into NetApp I wondered, what is this data fabric? It's about the second phrase you hear when you join NetApp, and to me it looked like a bunch of components that were not necessarily fully orchestrated. Right now, with the GreenQloud team, NetApp Iceland as it's called now, and this larger business unit, I think we've become something like the pirates of NetApp. This is the flag that Steve Jobs put up at Apple headquarters in 1983; I think we might become the Macintosh team of NetApp. That's what I hope for: to be the change that NetApp needs.

So, the data fabric. We talked a little bit about that. NetApp's data fabric isn't today a product or a service. The concept of the data fabric is that it's really centered on your data and the services that help you migrate it and work with it, and that effectively make your company more productive and informed: knowing that your data is safe, knowing that it's following compliance rules, and effectively making your company more profitable. That's all fine and dandy, but the data fabric really has to be the guiding light for all the services that we build, and that's where we are coming from. I think the picture might be forming in your head now: with our new multi-cloud data services we are really visualizing the data fabric and making it tangible for you, and we're trying to set the stage
for the future services and the future of the company in the cloud.

When people talk about the data fabric, what they really mean is all these products here. Having been at NetApp for eight months, I'm still catching up on all the products; there are so many of them, and so many talented people working on them, but we've got to start simplifying a little. When you look at our products, and then at the services we'll roll out, you'll start seeing that we'll pick and choose features from them and make new products, but overall we're just trying to reach that goal of making things easier for the companies that trust us with their money. The way to think about it now is that NetApp is not a storage company; it's a data management company. Without blowing the lid off all the cool stuff we're working on, let me tell you a little about where we will play in the market, and specifically what we will do in the cloud. These are the categories of data management where NetApp will have solutions for customers, either directly or through partners: data volumes; data protection; data integration and orchestration; data and cloud optimization; and data security and compliance. I want to talk about two of these today, but just to give you an idea of where our current products fit: for data volumes there's obviously on-prem, but I'm specifically going to talk about the cloud, and data volumes in the cloud mean Cloud Volumes, the services, and ONTAP Cloud, the appliance in public clouds. On the data protection side we've got AltaVault, and if you don't know what AltaVault is, it's basically getting your snapshots backed up into public clouds, into S3 among other things. And SaaS Backup is a relatively new product from NetApp
that used to be called Cloud Control for Office 365. There we are essentially making a time machine for your SaaS applications. Think about backing up Office 365 because an employee who is leaving the company accidentally deleted the most important files. Since this is a SaaS service, it might have a feature to find the version you were working on before and get back to it, but if the file itself has been deleted, all its versions may have been deleted with it, and you can't get that data back. And that's really just the beginning, because SaaS Backup can reach into any type of application you consume as a service. Think about Salesforce; think about the various other services you would want to get data from, not just for backup but for processing, for doing something with that data. So this becomes a key piece in that particular part of the fabric.

The third thing I want to tell you a little about is data integration and orchestration, and this is something that really came about with the acquisition of GreenQloud. Here we are going to talk about applications, specifically Kubernetes-based applications, APIs for orchestration, and then a product from NetApp that hasn't been around long either, similar to SaaS Backup, called Cloud Sync, which is a file-based sync service that lets you get data to and from whatever data source you have: if it's file-based we can get to it, and if it's S3, it handles that as well.

All right, let's jump into the two things I'm going to talk about: data volumes, and application orchestration and integration. You're probably all familiar with ONTAP Cloud; does anybody here not know ONTAP Cloud? Okay, great. ONTAP Cloud became the software version of ONTAP a while ago. (Oh, I know how you feel; I've only been here for eight months.) All right, so at the core of NetApp you've got, I
mean, obviously, NetApp started as a shared file appliance; it solved the problem of file sharing, multiple reads and multiple writes, a long time ago. The company is 25 years old, if I'm not mistaken. Having built a public cloud from scratch, with millions of dollars' worth of equipment in multiple data centers, and run it for thousands of users, I can tell you storage is hard. It's very hard. You can try cobbling together multiple different solutions, and believe me, we tried pretty much all of them. Most of the technology we used at GreenQloud when we built our public cloud was really cutting edge, and those companies all ended up being bought, and none of their products worked. None of them worked. It was only when we started to use NetApp that we really saw real stability and real performance at the same time. It's kind of strange that you would get that from an older storage company, but they have really followed the changes and the new types of workloads better than people think. We're talking about cutting-edge data scientists working here, creating storage solutions that are super high performance and that do deduplication at the same time, so you don't actually need as much storage as you would think, because the system can figure out: oh, you already stored that, I don't have to store it twice. And they do encryption at the same time, so you're already complying with some of the standards, like GDPR or FIPS and things like that. ONTAP is the software that runs on these boxes, but there is also a software-only version of it, so you can run ONTAP as a controller in public clouds. A lot of people, including me, would ask: why would I do that? Why don't I just use EBS? Why do we need a storage controller? The first thing I thought was, well, maybe it's just for existing NetApp customers, because they need to somehow get to the cloud, and
there's probably a good route there: there's the SnapMirror thing, where you can actually replicate data from your site to the cloud if you're running ONTAP Cloud. But when I looked into it, I saw that that wasn't the only reason. It's actually cheaper, higher performing, and richer in features to use an ONTAP cluster in AWS than to use EBS. Once you get to a certain scale, the ONTAP software is effectively free, and it's way more performant and has a whole lot more features than you can ever get from EBS. That's interesting, and it got me excited, and it was probably the basis for the belief at NetApp that we could do something like Cloud Volumes: instead of a box or an appliance, a service that's on demand and API-driven. It doesn't look like NetApp at all; nobody has to know it's NetApp. It just works, it's super fast, and you get things like instant snapshots and instant restore. These things just don't exist in the public cloud today; you can't do them in a public cloud, believe me, I have tried, and most of you have probably tried some of this as well. So we've gone from the physical appliances, to the virtual appliance in the public cloud, and now to a fully managed service, and that's what NetApp Cloud Volumes are. What we're talking about here is file storage, shared file storage, meaning you get multiple servers reading and writing from the same volume, so you don't have to clone data all around; right away, that saves a lot of resources. We're offering NFS version 3, NFS version 4 (which basically adds authentication and such on top), and CIFS, or SMB. I'll go through the use cases after I demo it, so you understand better what this applies to, but a big part of it is performance: we're talking about super high performance in the public cloud with shared
storage, multi-read, multi-write. This hasn't been done before.

Audience: Can I ask a quick question? There's already an NFS service for Azure which is backed by NetApp, and this seems to be another NFS, and sorry, SMB-type service as well. So what's the difference between the two in terms of their integration point with the cloud providers themselves?

You're talking about the same service. NetApp Cloud Volumes on Azure is a native service offered by Microsoft, but in the background it's us and Microsoft that run the service, and we write the code.

Audience: So why the name NetApp Cloud Volumes, and then a different name for the service in Azure? Is that their branding, so it's decoupled from NetApp, and you go buy it from Microsoft if you don't want NetApp storage?

Yes. Let me explain: there's a name for Microsoft's enterprise NFS service that's provided by NetApp, and there's also Cloud Volumes for AWS, provided by NetApp. Cloud Volumes is the overall name for all of these, and the reason we have a common name is that we are going to be multi-cloud with this. Whether the service is running on Azure, on AWS, or somewhere else, I'll show you how we will manage pretty much all of them; it's not asymmetric between the different providers and cloud environments.

So, prominent features. Obviously an open API: we've got a RESTful API, but we've also got fully integrated native cloud APIs. What that means is that, for example, the Microsoft service is ARM-based; we basically sit behind Microsoft's API gateway, or API proxy, which means you get all the security features, all the rate-limiting features, all the billing features. These services are billed by the public clouds, not by NetApp, and that's a huge difference as well, because this fits right into your cloud spend, and you don't have to think about,
"oh, I have to go and make a contract with NetApp," or something like that. You can just view it as super high-performance EBS or EFS.

Audience: That's the question I was about to ask. Does that mean EFS is being treated as not being a success, or is it being seen as a very low-level product, just for a small amount of file sharing?

I don't have an opinion on that. I'll talk a little about EFS later, but mostly on the usability side of it.

Audience: Okay, but just on the API piece: you mentioned that this sits behind Microsoft's Azure API. When you're doing this on AWS, then it's your own API?

Yes, and actually it's a level above that as well, because when we're doing the AWS one, and subsequently when we roll out more regions and providers, we are talking about having an API that can share tokens, so access and authorization work across all NetApp services, not just Cloud Volumes.

So let me just go ahead and demo this for you. Cloud Volumes is available right now in both AWS and Azure. I'm going to demo the AWS version and show you a couple of screenshots from the Azure version. Azure is in private preview right now and is going into public preview in May, which means we're going to accept more customers. Right now we've basically been having companies small and large kick the tires: try it, run difficult workloads on it, run small workloads on it, see what happens. We've been learning from that and figuring out where we go with the service in the near future. On AWS, because that's really our service, controlled completely by us, we are doing more of a continuous release, so that service gets updated quite often, and not necessarily rolled out in private preview, public preview, GA timeframes. First of all, I log into AWS, and as you probably saw, I pasted a
Marketplace link there. This is a hidden, private Marketplace page, since we're in private preview, but I wanted to show it to you anyway, because similar to any other service in AWS, you sign up to it through the Marketplace. You sign up by clicking Subscribe; it doesn't cost anything, and you can start using the service right away. So I'm going to go ahead and do that. Now that I've subscribed, that's it: NetApp has been notified that you subscribed, and when I click this link here, I'm taken to what we call Cloud Central. Cloud Central is where we unify all our APIs, but it's also where we unify all of our cloud services. Right now there's only a handful of services in there, but this is where we're going to be rolling out the next version of Cloud Volumes, the multi-cloud stuff, the orchestration, and other things. I'm going to log in here, and this uses single sign-on, so different services might have their own site and their own console, but you get the idea: single sign-on. It actually works with the NetApp single sign-on, so if you're already a customer, you don't have to register; we just let you in. But we definitely want you to register if you're a new customer and give us a little bit of information about yourself. If you want a technical detail, we're using Auth0, so in fact we could federate with pretty much anything, but right now it's basically the NetApp SSO and a registration. When you create an account here, that becomes your master account, and what you see here is the UI for Cloud Volumes only; if I wanted, for example, Cloud Sync, I have an option here on the side to jump between the different services.

All right, so what I'm going to do now is create something like a 10-
terabyte volume. There's a hundred-terabyte limit on the size of a volume right now, but we're working toward multiple petabytes per volume.

Audience: Is the limit something with the providers, or the software?

No, it's a virtual limit that we decided to start with. It's really about how we architected the backend: rather than roll out hundreds of petabytes right away, we want to work through public preview and up to GA with these limitations. It's a gated limit.

Okay, so I subscribed and I logged in, and now I'm going to go ahead and create a 10-terabyte volume. I'll give it a name; I'll just call this "machine learning." It generates a unique volume path for me, which I can change to pretty much anything. You might recognize these auto-generated names from the Docker world: famous scientists and authors combined with fun adjectives ("hungry," "condescending," "trusting," and so on; they don't sound very Nordic). The reason we do that is that you want each volume to have a unique path, but you also want it to be humanly understandable, because you don't want a volume whose name is 32 letters and dashes; how are you going to communicate that? It's just better UX. In my account I only have access to AWS, but here you can see I can select AWS and the region; this is a single region right now. I can also select the time zone. The reason you might want to do that is that when you set up snapshots and backups, you might be thinking: I want to make backups in the time zone of the customer, not of where the data is stored. You'll see that I can create a volume directly from a snapshot; this could be from any volume I've created before, not necessarily this one, and if I wanted to revert, I can do that as well. I can select a service level; in the private preview we are running at a premium service level (the levels are called different things between the cloud providers), which equates to roughly 3,000 IOPS per terabyte. I can set the quota here, which is really just the size; I'm not going to do exactly 10 terabytes, I'm just going to wing it, something like that. The next thing I can configure, though I don't need to change it, is which protocol the volume is exposed over. This is actually a multi-protocol selection, so I can have a volume that is NFS and CIFS at the same time, which is a huge benefit if you're running both Windows and Linux workloads. For the AWS service we've only opened up NFS so far. This also acts as a kind of ingress rule: here I can add an additional security measure saying I only want this instance, or this IP, or this hostname, to be able to talk to the volume, in addition to whatever the cloud provider can give you. And lastly, because we're all about automation, I can tell it to create a snapshot policy for this volume right away, so I don't have to do that manually. This happens asynchronously, meaning it runs all the time, and the service will let me know if something goes wrong, or right. I can do things like keep two snapshots; you can see the explanation here updating dynamically, saying keep a snapshot every hour, on minute 53 (let's put 53, because it's 52 now), and keep two. So let's see if I can create a volume that automatically makes snapshots of itself.
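Since the service is API-driven, the volume I just clicked together in the UI could also be created with a single request. Here is a minimal sketch; the endpoint, header, and JSON field names are my own illustrative assumptions, not the documented Cloud Volumes API:

```shell
# Build the JSON body for a hypothetical "create volume" call.
# NOTE: endpoint, header, and field names below are illustrative
# assumptions standing in for the real Cloud Volumes API.
API="https://cv.example.com/v1/FileSystems"
PAYLOAD='{
  "name": "machine-learning",
  "region": "us-west-1",
  "serviceLevel": "premium",
  "quotaInBytes": 10995116277760,
  "protocolTypes": ["NFSv3"],
  "snapshotPolicy": {"hourly": {"minute": 53, "snapshotsToKeep": 2}}
}'
# Print the request we would send; no real call is made here.
echo "curl -s -X POST -H 'Api-Key: <key>' -d '${PAYLOAD}' ${API}"
```

The `quotaInBytes` value is 10 TiB; the snapshot policy mirrors the "every hour on minute 53, keep two" setting from the demo.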
about okay no it's still creating yeah so it's setting the network then it's creating the network path so this runs in the B PC and now it's available yes all right so one of the challenges that that I commonly see is I mean this is great in a in a vacuum but the reality is with cloud you have ingress and egress charges right the data transfer charge knows not here it you don't have well this wouldn't happen if it's all contained within cloud you wouldn't have it but what if you're I'm kind of going back to that bi-directional connection between something you might have on Prem versus leveraging cloud for storage right or so so the cloud right I mean the can option to the provider might might impose some egress charges there are no there are no ingress charges so I can stream the angel data now I can stream data to the cloud into cloud volumes but the egress charges make up for the difference yeah well it a lot of that depends if you're going out of the cloud or not right I mean not everything will go back and forth right but that aside differ for any other it it doesn't I'm just wondering if there's if there's maybe something that starts to to limit the amount get smarter about the data that gets transferred yeah such that you can limit those charts and to be clear I mean right there you know the amounts of data come with it you you wouldn't you wouldn't just you know try and push those bits over the pipe like we definitely are I'm no idea if we announce it or not but we're definitely thinking about like a snowball type situation where you where you can take a box and basically ship it to call volumes then oh and we were talking with NetApp I mean I've this was a couple years ago at least or we started talking about data fabric at the time that the idea was that there be an on tap for cloud instance running in a non Amazon or non Azure data center but they had a high bandwidth connection to an Amazon or an android data center and that would allow customers to 
avoid egress fees no that they wouldn't have to worry about what they're doing their data as Esther is still available this is one place in it or is this a new service that's gonna be offered alongside that I'm not familiar with that storage all but yes are we talking about an NPS NPS is talk so net a private storage so that's that still a service that that's available this is not replacing that no okay we don't meant it that yeah to the but to the point of all this that that whole network egress charge thing really breaks the multi-cloud thing which for all of the cloud providers is like hell yes I'm gonna dig a moat because I don't want Amazon absolutely doesn't want you to leave and it doesn't want you to leave either I don't think it does because if you're using snapmirror to replicate the data yeah whatever you update when you're in the cloud is going to be the Delta changes like you then will pull out because it's not Mirabal yeah keep the sync between the two so bright it might not be as bad as you think all the first set of data that you replicate you've got to be ripe you got to put but that's funny going in isn't it so that's fine from wanted to each of the to take two providers send the initial load to each of the two providers and then that way the only thing that ever goes because you know the other things like you've changed so you don't care yeah you've only got a very small problem with that is if you start creating data in one event mitre is different scenario yes if it's all um dates are being created then you you're gonna have to push that across that and that's the whole you kind of already had this issue already like if I created other in a data center here and I've got another one on the other side of the planet moving that over a network takes a long time but it is possible so and if I've already like if I'm leasing like if I buy the line I don't pay 3-bit it runs over the line I pay for ten gigabit network connection and I just pay for ten 
gigabit I make a periphery there and like I think the key there is you know how do we drive down that cost because you know the club provider is going to set set a lot of the rules on there like everybody's gonna have to play by their equals right yeah so how can we minimize that and you made a great point with you know using something like the product or snapmirror is going to help you with like dedupe you're not going to be transferring all that data but just the changes there are other options like cloud backup which we'll have you know d duped and encrypted archives in something like s3 from any any any one of these data points whether it's on prime cloud volumes or somewhere else and those also take up much less and you know cheaper than doing that the pure a file based copy they do it's just the like the prospect of the data fabric where I have my data wherever I want it to be yep gets killed by this egress thing because I can't I have to be careful about where my data is because of things because of eagles so I can't just have all of my data everywhere unless I want to pay a lot of money for it I've debated this for a long time do I want my data everywhere do I want my metadata of my data everywhere and then on access pay the egress to have the data transport with some delay the is the real use case for most of these multi cloud solutions to have the bits and pieces about what data do I have an enterprise or is it literally that I have to have my data everywhere and it's it's it's it's a tough it's a tough challenge no I don't think I actually don't I think it's more poignant than that I don't think you can answer that question in a vacuum period true I think you have to understand what's what is the application how is the user interacting with the application where is the where does the data need to be to be able to support that application then that starts to govern your options in terms of how you manage the data when I think of data fabric from just real 
When I think of data fabric, just real quick, the last piece is this: I do think that as you start to move into a cloud-based world, you can't bring those traditional ways of managing data forward.

So, thinking about this from a large enterprise, and being a service provider to internal customers: one customer will have one set of requirements, another will have another, depending on the application. What I can say at a high level is that I would love to have my data everywhere, so that I can serve each individual client as needed. I could turn knobs and say, OK, in this region of the world I have two petabytes of storage and I replicate data as needed; in this other part of the world I have one terabyte of cache, or whatever the solution is. But the key, and I think this is one of the gaps I see in NetApp's data fabric vision, is that I don't see how you're facilitating that metadata conversation: how do I solve the problem of knowing where my data is in the world, and what data it is? That, I think, is a key part of becoming a data fabric company.

I can tell you with high confidence that that is exactly where we're going. It's more than just managing the bits. Managing the bits is one thing; getting the data to the right application, to the right compute, to the right services that offer you new value from your existing data, that's the important part we need to solve first.

Regarding the egress side, it depends on where the data is. If the data is centrally located at the customer's site, and there are no ingress charges, effectively they can send it to all of the cloud volume services they can find and pay nothing for that. It only hurts when you're talking about egress between cloud providers, because within a cloud provider, egress depends on region and zone; there are a ton of things to think about there. All right, I'll get back on track, sorry for that, but I like
the conversation and I hope we can continue it later. So I made a volume; that took probably about ten seconds. Doing this with any type of service that's out there today would take me probably hours, if not days, to get set up. Now I want to actually use it, so I'm going to mount this volume on an EC2 instance. Let's go back real quick so you see me go all the way through: log in here to EC2, instances, demo, and here's my IP address. Let me put this side by side so you see how easy it is. I'll SSH and log into this Ubuntu server running in us-west-1. I'm going to use NFS as the protocol, and since that's something you need sudo for, I'm logged in with sudo, so I'm the root user. We can see there's nothing there yet, so if I were doing it from scratch, what I'd do now is copy that mount command and paste it. Based on the level of performance I selected, we show you different mount commands, optimized for the workload you're trying to run; this might, for example, be a high-performance database I'm going to use this for. Copy that, hit enter, and my volume is ready for consumption.

For those who know something about NetApp, and have seen NetApp volumes and NFS shares before, you'll notice the tell-tale .snapshot folder there. The cool thing about snapshots within ONTAP is that they're useful for everything from development, to making your applications way faster, to replicating big datasets and bringing them to everybody at once, to making a development environment from a production system. There are so many things you can do with this because it's instant. It's not a backup; many of you might think snapshots are backups, but they're not. There's another service for that. A snapshot is something like Time Machine on your Mac: you can go back in time.
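The mount flow from the demo looks roughly like the sketch below. The server IP, export path, and NFS options are hypothetical placeholders, not the actual values the Cloud Volumes UI would hand you; the commands are printed rather than executed, since a real mount needs a live NFS server.

```shell
# Sketch of mounting a cloud volume over NFSv3 from an EC2 instance.
# All values below are invented for illustration.
SERVER_IP="172.16.0.4"          # private IP shown in the Cloud Volumes UI
EXPORT_PATH="/demo-volume"      # export path of the volume
MOUNT_POINT="/mnt/demo"

# Options roughly in line with a high-throughput NFSv3 workload profile.
NFS_OPTS="rw,hard,nfsvers=3,rsize=65536,wsize=65536,tcp"

# Printed rather than run, since mounting requires root and a live server.
echo "sudo mkdir -p $MOUNT_POINT"
echo "sudo mount -t nfs -o $NFS_OPTS $SERVER_IP:$EXPORT_PATH $MOUNT_POINT"
```

The UI tailors `rsize`/`wsize` and other options to the performance tier you picked, which is why the copy-paste command differs per workload.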
Or you can create something new from the particular day or hour when you made the snapshot. So let's see if there's something in there. You remember I set the automation to make a snapshot at 53 minutes past the hour. Now, instead of restoring by taking a snapshot and creating a whole new volume from it, maybe I just need one file. If I want one file from that snapshot, I can simply copy it out from within the snapshot, because the snapshot itself is really just a file structure. That makes it even more capable of helping out operations and development: getting back to the state where everything was fine, or maybe I just want to try that test one more time with the script I had, or maybe my Hadoop process failed on a node and I want to restart the job on that node. I can go back to that. I'll talk a little later about how to get data onto the volume, but let's continue.

Can I push back a little bit? Call me cynical, but how is this not just putting a storage array in an AWS data center and putting a nice UI around it? Where do we really start to see that this is innovative?

Good question. It is, and is not, a storage array in a public cloud. What's really new here is that you have not been able to consume file-based storage at this performance level, API-driven, in public clouds at all. That means some applications, not just enterprise applications, simply couldn't be there before. That's the big thing. What you could have done before was stand up ONTAP Cloud, launch that appliance, but you'd need a NetApp guy, somebody who knows how to set it up and configure it, just to get to the point where you have shared multi-read, multi-write.
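The single-file restore described above can be simulated locally, since a snapshot presents as an ordinary read-only directory tree. The directory name (`hourly.0`) and file below are made up for illustration; on a real volume the `.snapshot` tree is populated by ONTAP, not by you.

```shell
# Local simulation of restoring one file from a .snapshot directory.
rm -rf /tmp/vol
mkdir -p /tmp/vol/.snapshot/hourly.0
echo "port=5432" > /tmp/vol/.snapshot/hourly.0/app.conf

# The "restore" is just an ordinary copy out of the snapshot tree.
cp /tmp/vol/.snapshot/hourly.0/app.conf /tmp/vol/app.conf
cat /tmp/vol/app.conf
```

That is the whole trick: no restore job, no new volume, just a `cp` from a point-in-time view of the filesystem.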
And that would possibly be subpar in performance, whereas this is native. Yes.

I think there's one other bit you haven't touched on yet, which you might be getting to: if you look at the way you provision an instance, you get to select where the storage is going to sit, and you connect it through the actual APIs and the GUI directly. With this, the storage is a first-level feature you can add to an instance. It's not separate, like a VM plus separate storage, which means any functions you use on that VM, like snapshotting, can be driven by the standard platform API and will talk directly to the storage as well. It doesn't have to be done as a separate task, so the two become much more closely integrated. I'm not sure whether that's what you intended to show as part of this.

As far as integration goes: what you saw on AWS is completely our service. It's running next to Amazon, in each region right now, with direct connection. Versus the Microsoft version of this, which is part of Azure: it's a service that's part of Azure, so you can find it in the marketplace. I'm showing you the UI, obviously, because it's hard to show nice visual things to a large crowd with a command line and APIs, but the same thing I just did with AWS I can do on Azure, behind all of the subscription rules, my Active Directory access, all of the things that make Azure Azure. So here we've got multiple volumes in different resource groups; they might be in different regions; some are large, some are small. The same thing applies: it's as easy to use in Azure as it is in AWS, and it will be in other providers as well.

Would you see that then turn up in your
Cloud Volumes portal that you showed before? Yes, eventually; we haven't opened that up yet. At the moment, if you choose to, you need to essentially allow us that access so we can combine the two. That's where the Auth0 piece comes in: you essentially federate the API calls for these APIs. With AWS you have to do it that way.

An SMB3 share will be available from within the NetApp Cloud Volumes context. Is there any real difference? Well, NFSv3 versus CIFS, there's obviously a big difference. You want CIFS for speed, performance, and compliance with Windows; that's pretty much your go-to protocol for Windows workloads, though SMB can obviously be used on Linux as well. On the NFS side, NFSv3 is really unauthenticated, so it sits behind the network security you already have. When you start a cloud volume, you're using it over a private IP, and only the VPC, to use AWS terms, has access to it: only your VPC, and that's protected all the way to the volume, even though this is a multi-tenant service. The reason we can do that is the software-based networking built into ONTAP and NetApp boxes, and the way AWS and Azure structure their network security.

Having said that, you want to be able to use these things equally wherever they are. I'm not going to focus more on that here, but essentially what we're doing now is bringing the services to people where they are: if they're in Azure, they'll use this in Azure; if they're in AWS, they'll use it in AWS. I've compiled a few use cases for this, and I'll let you look at them for a little while.
Line of business: think about specific applications that matter to the company. We've got some buyers there, with their key concerns, and you can see that in all those categories, or most of them at least, they're thinking about performance, and about real integration with the cloud provider. That's important. As for developers, we haven't been playing in that field before. I'd say application developers are really going to like this, and I'll show you in a few minutes why I think so. But it's important to realize that NetApp is used all over the world today to speed up things like continuous integration and continuous delivery, build servers and so on. That's where we've been playing; we just haven't been doing it in the public cloud. With NetApp's plugin for Jenkins, builds get way faster. We've been experimenting internally with Hadoop, and Hadoop on these volumes is way faster and uses fewer resources. There are very interesting use cases here that we'll talk about more once we can discuss the customers that are using Cloud Volumes now. We see a broad spectrum of buyers for this service, and I can tell you that interest in trying out Cloud Volumes on the different hyperscalers has been really high, so we're trying our hardest to put these services out publicly as soon as we can. But you can imagine there are only so many hours in the day, so we're getting there as fast as we can.

Can you go over some more developer-oriented use cases? Yes. There are a number of ways you'd consume Cloud Volumes as a developer. One of them is a Docker and Kubernetes plugin NetApp has created called Trident. What Trident does on-premises is automatically create the internals you need to be able to create volumes. In NetApp-speak, there's
something called an SVM, a storage container for the volumes you create on top of it. Trident acts like a controller, an engine for things like Kubernetes, that automatically creates those volumes for you when you describe your application. Trident will do the same for Cloud Volumes in the public cloud as it does on-prem: we'll be able to deploy applications and automatically create persistent volumes based on the profile the user wants for the application. If you're building a high-performance, database-backed application, you might choose the extreme level of Cloud Volumes for the database, but use EBS, or something slower, for the application layer or the presentation layer. Does that answer your question? Yeah.

All right, I'm just trying to wrap my head around this, and I'm not as sophisticated as other people in the room, but what I keep thinking about is: is this really a story where the core storage offering from Amazon or Microsoft, the Amazon cloud versus the Azure cloud, is almost too basic a building block, and what's missing is almost a layer between it and where the application, or the developer, really needs to tie in? Are we missing that refinement of the AWS and Azure storage building blocks, and is that the problem this is solving: providing a much more refined way, with some management on top?

That's in there too. It's that, and actually a number of other factors. I like how you put it, but it's a number of factors. It's the ability to do shared files, multi-read, multi-write, which is just not available today. There's EFS, but I'll show you why you won't want to use that versus Cloud Volumes. So it's a capability that hasn't been available before, and people have been working around it like crazy.
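From the Kubernetes side, the Trident-style flow described above reduces to "declare a claim, get a volume." The sketch below writes a hypothetical PersistentVolumeClaim; the storage class name is invented for illustration, not Trident's actual naming, and the `kubectl` step is shown as a command only since it needs a live cluster.

```shell
# Hypothetical PVC for a Trident-backed, NFS-based shared volume.
cat > pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteMany"]          # shared multi-read/multi-write
  storageClassName: cloud-volumes-extreme # assumed Trident-backed class
  resources:
    requests:
      storage: 100Gi
EOF

# On a real cluster you would apply it and Trident would provision the
# backing volume automatically.
echo "kubectl apply -f pvc.yaml"
```

The point is that `ReadWriteMany` access, which block-backed classes generally can't offer, comes for free when the backend is a shared NFS volume.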
You create all these schedulers and machinery around storage to get around the problem that you can't just point things at the same storage. It creates new DevOps problems to solve, when we can just bring this to the table and you don't have to think about it. It also brings new features, like instant snapshots and clones, which make a huge difference in many applications. I'll give you an example: in development, you have a build server doing a full build of your application. With NetApp and some plugins for something like Jenkins, or GitLab, or whatever you want to use, you can create a point in time with a snapshot that you can build on top of, so you don't have to start from scratch. And when you've built an artifact and it's finished, you can snapshot it, instantly grab that, and create a new volume from it just for that build process if you want, and make the whole thing much quicker. Add replication between sites, if you've got distributed development teams, and there's a whole lot you can do with this that you just could not do before.

Where I was going with this was more that we never had this problem with the on-prem infrastructure, but it exists now, because on-prem you're never writing directly to a drive: you're writing to an NFS volume, or a CIFS volume. That's where I see the intermediate layer that's missing in the traditional cloud.

It's because it's super hard to do; I believe that's the reason it isn't there. And underneath a lot of these services, at least in some of the smaller public clouds, they actually are using this: they're using NetApp, they're using NFS underneath their thinly provisioned volumes and all that. They're just not exposing it to the end users,
because it's hard. It's very hard to build, and hard to maintain. As a company, and I say "we" because I've obviously been sipping the Kool-Aid here for eight months, we figured that out a long time ago, and the interesting part is that now we can actually bring it to the table. I think we're going to see a lot of new applications and different ways of architecting big applications.

I'm trying to figure out whether it creates a new role in cloud DevOps that didn't exist, or not so much a new role as a new opportunity: to provide centrally, whether in a PaaS or a development platform, a new layer of data services from a DevOps perspective, even building more capabilities into your build process. That was a great example, around shared storage, something simple that we've had in the enterprise for what, 35 years or more. Bringing it to the public cloud with a cloud skin is really interesting.

Yeah. I'd like to show you a little of where we can help. This is not a scientific test; I just wanted to show it because we were talking about EFS before. This is a file test which exercises random read and random write. It took Cloud Volumes 52 seconds to finish; it took EFS 16 and a half minutes. I thought this might be a fluke, so I tried it again. Like I say, I'm not really a storage guy, I'm a developer; I like to optimize things for performance, I want things to run smoothly. So I tried it against a local SSD, an EBS SSD, on an extra-large instance. Using SSD on an m3.2xlarge, Cloud Volumes was still ten times faster to complete the random read/random write run. And by the way, if you put one terabyte on Cloud Volumes versus one terabyte of provisioned-IOPS SSD on AWS, Cloud Volumes is still cheaper. Is that using an EBS-optimized instance? Yes, it's a high-network instance type.
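For scale, the arithmetic behind the comparison quoted above (52 seconds for Cloud Volumes versus 16.5 minutes for EFS on the same random read/write file test) works out like this:

```shell
# Rough arithmetic on the quoted benchmark numbers.
CV_SECONDS=52
EFS_SECONDS=$((165 * 60 / 10))   # 16.5 minutes = 990 seconds
awk -v a="$EFS_SECONDS" -v b="$CV_SECONDS" \
    'BEGIN { printf "EFS took %.1fx longer\n", a / b }'
# → EFS took 19.0x longer
```

That is roughly a 19x gap on this one informal test; as the speaker stresses, it is not a scientific benchmark, just a single run of one workload.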
I can't remember the exact type off the top of my head; it's not the most compute, but it's high network. And the EBS volume? That's just general-purpose SSD. On the cost side, when I did that cost comparison, I took 3,000 IOPS per terabyte and compared the cost of provisioned-IOPS SSD, which is their most expensive and fastest version, and of general-purpose SSD, versus Cloud Volumes at, I forget the service level, the premium service level.

I'm just curious about the architecture that sits behind Cloud Volumes. Is it using AWS primitives to build the volume service? It's a giant black box; I can't tell you. OK, let me ask a working question then: why is Cloud Volumes cheaper? I look at it and assume the standard service probably doesn't use hardware as performant as what you're using, and Amazon's keeping all the margin there. I assume with Cloud Volumes there's something going to Amazon and something going to NetApp. Why is it cheaper?

How do we compete? We're new to this game; we need to compete, and we need a service that's highly available and highly performant. The trick, and I don't think it's going to be a secret, is that NetApp has a number of storage efficiencies it can draw on, like dedupe and compression, to make the service cheaper.

So you're taking advantage of your own data services. When I see a gig, is it before or after reduction? What you see is what you'd get from ls -l in your location, so it's effective capacity. So I'm paying 30 cents a gig on that. OK. Yes: we make the service cheaper by keeping the efficiency that we gain from the multi-tenancy. But you're also getting the instant snapshots and clones out of that efficiency. There is a risk for NetApp there, though.
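Returning to the cost comparison: an illustrative back-of-the-envelope version of it is sketched below. The EBS io1 figures are the rough public list prices of that era ($0.125 per GB-month plus $0.065 per provisioned IOPS-month); the $0.30/GB figure is the effective-capacity price quoted in the talk. These are assumptions for illustration, not an official price sheet.

```shell
# Back-of-the-envelope monthly cost: 1 TB with 3,000 provisioned IOPS.
TB_GB=1000
IOPS=3000
awk -v gb="$TB_GB" -v iops="$IOPS" 'BEGIN {
  ebs = gb * 0.125 + iops * 0.065     # provisioned-IOPS SSD (io1)
  cv  = gb * 0.30                     # Cloud Volumes effective GB price
  printf "EBS io1: $%.2f/mo  Cloud Volumes: $%.2f/mo\n", ebs, cv
}'
# → EBS io1: $320.00/mo  Cloud Volumes: $300.00/mo
```

Under these assumed numbers the Cloud Volumes figure comes in under io1, consistent with the claim in the talk; the gap widens further once dedupe and compression shrink the billed effective capacity.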
Maybe not as much with Amazon, but if you're storing stuff that has sucky, that's my word of the week, reduction capability, stuff that's already compressed or otherwise in bad shape for reduction, you're going to end up not making much, if anything. It has to balance out over the customer base. Right, you're saying NetApp isn't going to make anything on individual customers whose data is, say, all unique images that can't be compressed or deduped. So I'll make a bold statement and say I think you kind of have to do that, because at the end of the day someone else is going to figure out a way around it. Why not cut them off at the pass and say, look, here's the reality, we're going to get there, so let's just get there. I just want to understand the business model.

I'll tell you: it's Jevons paradox. You lower the cost to the point that more people start to use it, and then more people use it even more. Or you could follow the Amazon model, which is to just make sure you keep growing fast enough that, as long as everyone keeps clapping, the fairy stays alive.

Another thing I can say about the pricing, and I'm glad you think it's too cheap, that's great, is that maybe I wasn't clear enough at the beginning: Amazon started with S3 and EC2, and you could build everything else on top of those. I think of Cloud Volumes the same way. We're building the foundation for a ton of other services that add value and also create a lot more revenue for NetApp.

Just a couple of questions to wrap up. Can I mount, or is there an option in the future to mount, a volume that I'm presenting into AWS also into Azure, or once
it's there, is it separate per cloud? From our perspective, you can do whatever you want. What you would need to do, and I'm not condoning it, is open up a network path directly from your service. Because it's a standard protocol, it's the same as with EFS: you can mount EFS on your desktop at home. It's going to be terrible performance, but you can do it. I'm just wondering about the egress and ingress things we were talking about, if I'm mounting a volume that lives in NetApp's service separately from each cloud. The Cloud Volumes service lives in each hyperscaler, so you wouldn't necessarily do that natively. It's technically possible; I just wouldn't.

Just out of curiosity: I noticed the mount path was a private IP address. Are you using VPC peering to create that connection, or do you have to reserve a block within your VPC? It's different between cloud providers. Some opt for peering, some for peered subnets, and not all of them support the same things. The reason I don't spell out exactly how it works in each case is not that it's a huge secret, but that it might change. Right now we're in preview; some of it is running over Direct Connect on your own VLAN, and NetApp bears the cost of that for testing purposes. Some of it might actually live in the data center, right next to the other services, so I can't speak to exactly which is which.

Is it highly available? Say in AWS: if I lose an AZ, is that IP address still going to work in the other availability zones? The fault domain is the region; it's regional-based.
So the actual implementation spans availability zones: you can lose an AZ and still have connectivity, but if you lose the region, you lose the volume. And you're making the IP address highly available as well? Yes. So underneath you're doing something similar to S3: you're replicating it. Technically, we're using features from ONTAP that provide software-based IP failover. Sure.

All right, I'm running out of time, but I've got a really cool thing to show you. Two things, well, three things, on orchestrating and integrating. Obviously we had to go with open APIs, so we've got an API for everything: not just this, but also for Cloud Sync, and even for Cloud Central, so you can manage user-level things, SaaS Backup, and eventually AltaVault as a service, which we're calling cloud backup. That takes care of the developer-centric and automation-and-operations engineers: they want to be able to do this, tie it into their own tools, write their own dashboards, and they'll be able to do that.

As for how you get your data to the volume, you can achieve that in many ways. We've already introduced one, which is Cloud Sync. With Cloud Sync you create a data sync relationship, and just for the fun of it, and also to show you how we've integrated things, let me quickly show you what that looks like. I got logged out, the service timed out on me, but I'm back in. So now I have a volume, but I actually want to get data onto it. One way is Cloud Sync, a file-based sync service. Because it has that token-based API in Cloud Central, we've integrated it directly here in this UI, but I want to show it to you in its standalone form, and then I'll come back here and show the relationship I've created. So let's
go over to the Cloud Sync portal. When I get here, the only thing I have to do is decide where I'm going to get my data from. I have a number of data sources: NFS, EFS, CIFS, or an S3 bucket. I'm going to go with an S3 bucket. This will use the credentials I gave Cloud Sync when I signed up for it; that sign-up was basically what I did previously in the Cloud Volumes UI. Essentially, you launch what we call a broker: a VM that acts as the intermediary in the sync process. So I select an S3 bucket, and I'm going to send that data over to my NFS server. This is my broker, a VM running in the same VPC, and this is completely automated; I don't have to tune it or anything, it just comes up.

What's the role of that broker? Does it just act as a gateway? Yes. This is really the simplest form, because I'm moving AWS resources to AWS resources, but in cases where you're taking data from on-prem to the public cloud, you would launch a broker within your own data center, so you don't have to poke holes in firewalls. Basically it orchestrates getting the files, deduping them, doing some processing, then checking with the remote side to see if anything's changed, and only sending the changes. It's kind of like rsync, but a little more specific to these purposes, like S3 and so on. You need that component to get past those firewalls without risking security, and I need it here because I wanted my VPC completely blocked to everything else. So I've got a bucket here, a key bucket; select that. Let me go back into Cloud Volumes, because I want both open. I'm going to copy that IP address, and I'm
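The "only send the changes" behavior described above can be sketched as a toy change-detection loop: compare files on both sides and transfer only the ones that differ. The paths and file names are invented for the demo, and a real broker does far more (dedupe, retries, parallelism), so this is only the core idea.

```shell
# Toy sketch of rsync-style delta sync between a source and destination.
rm -rf /tmp/src /tmp/dst
mkdir -p /tmp/src /tmp/dst
echo "v1" > /tmp/src/a.txt; echo "v1" > /tmp/dst/a.txt   # unchanged file
echo "v2" > /tmp/src/b.txt; echo "v1" > /tmp/dst/b.txt   # changed file

for f in /tmp/src/*; do
  name=$(basename "$f")
  # Copy only when the destination is missing or differs from the source.
  if ! cmp -s "$f" "/tmp/dst/$name" 2>/dev/null; then
    cp "$f" "/tmp/dst/$name"
    echo "synced $name"
  fi
done
# → synced b.txt
```

Only `b.txt` is transferred; `a.txt` is skipped because both sides already match, which is what keeps repeat syncs cheap.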
going to stream that bucket into my cloud volume. This could be coming from on-prem, from a completely different hardware vendor, essentially from anywhere. Now it's discovered my exports, it's discovered my Cloud Volumes; I select the one I want, click continue, and create the relationship. And you can drive this through an API as well? Yes. I wanted to show you this here because in the Cloud Volumes UI for AWS we also have it embedded, and we can do that because when I log into Cloud Volumes I can get an API token for this service as well. I don't have to re-register for Cloud Sync or anything; I can just use it, and that makes orchestrating the full NetApp cloud suite as easy as something like AWS access and secret keys for everything. Now that it's started, I'm replicating a little less than a terabyte of data, so let's let that finish. The last one percent always takes the longest, doesn't it, like the Windows progress bar. Meanwhile I can go and check whether we've received some data.

For the other vendors in the future, what's on the list, with Cloud Sync or in general? In general, what we want to do is collaborate with multiple solution providers to optimize their solutions for Cloud Volumes, and also create our own versions of applications that might be popular or useful for our customers and make them easy to consume. On the Cloud Sync side, we're already pulling data out of all different types of vendor storage systems and public cloud services like S3 or EFS.

Can we relate AWS and Azure with SnapVault or SnapMirror, if I've understood correctly that Cloud Volumes are based on ONTAP? If we want to build a multi-cloud setup, with real-time queries across clouds, do we have to create it through SnapVault and SnapMirror? I understood that Cloud Sync
was more for backup. Right, so the backup piece I was talking about is really a single product within NetApp called AltaVault, which lets you back up massive amounts of data, turning it into a much smaller amount of data in S3 as an updatable archive. We haven't rolled out the SnapMirror capabilities for Cloud Volumes yet. We will; it's definitely on our roadmap, one of the key things for us, but right now the only data-mover service working directly with Cloud Volumes is Cloud Sync. SnapMirror is coming; it probably won't be called SnapMirror, it'll just be called replication in this case.

So you'd manage it through an API gateway between clouds? The UI I'm showing here is multi-cloud capable. For example, through the API on Cloud Central you can create an ONTAP instance in Azure or in AWS, and the same will go for Cloud Volumes and all these services.

I see it's basically done copying here; there's that one percent again, I don't know what that is, but let me see. So I've got my machine learning copy here of hot dog/not hot dog images, and I can see the data is all there. Now let's say I want to actually use this in an application, maybe a machine learning application, or maybe just a database application. How can I do that? Nick brought up a really cool topic before on the developer side: we can use Trident, we can use a number of different ways to consume this, and NetApp thinks one of the ways people will want to consume this is through containerized applications. Let me show you what that looks like. This is something we want to release as a service; right now it's an internal feature, and
it's actually kind of how the services themselves are built already. Let me switch to another environment. Just to be clear, we're talking about potentially having two different apps, running on two different servers, two different containers, both accessing that volume at the same time? Yep. That's really cool. Or different containers within the same app, because as you'll see, there are a number of ways of looking at an application.

What we can do today, and what we want to roll out as a service, is to manage and fully orchestrate applications based on Kubernetes. Where we're going, and this is kind of a preview of that, is that if I, as an application architect, a developer, or even a consumer of an application, want to get quickly to an app that's fairly well known, but I don't have the time, or I want to customize it quickly, we've built an application orchestration layer for that purpose. Think of what I'm about to show you as the super quick start for getting ready in the cloud with containerized applications, something that automatically uses Cloud Volumes without any configuration on your side, and that's possible because we have those API capabilities.

This here is our application orchestration layer. Right now I'm starting a GitLab cluster. If you don't know GitLab, it's made up of multiple components; it's what we call a three-tier application, meaning it's got a database, a front end, and components that can be, and need to be, scaled separately. For example, when you're running something like GitLab or Jenkins, you've got build processes, and build processes need compute to compile code. What if you could automatically scale the cluster based on the number of builds you're running? Well, you can do that, but how are we going to provision all the
volumes for it on demand? Are you going to call the storage admin and have him figure out how to do it — and now he has to figure out how to do it in AWS as well — and then you configure the app and all that? No: you use an application template that knows how to scale. We've been lucky enough to have participated, in a small way I would say — even though we've been working with Kubernetes for a long time — in discovering this fantastic framework for building not just microservices, which is what a lot of people are using it for, but really stateful applications, all the way up to big data and machine learning, down to high-performance databases. I could, for example, just run MySQL — the Linux version of MySQL — on this, with cloud volumes underneath. The way that we do that is by using a format called Helm, and what Helm gives me is pre-made applications that have a defined set of sub-components that can scale on their own. Just to give an example, I'll start a component here called Dask. Dask is a platform for distributed machine learning — I had a machine learning data set there, "hot dog, not hot dog," if anybody watches Silicon Valley — and if I have Dask up and running and I point it at that volume, I can start running analytics and machine learning algorithms on top of it. To run Dask, the only thing I need to do here is select it. I can tune it if I want to — there are a number of things I can tune here — it's got a web UI and all that. Oops, I clicked there — clicked the hide button, sorry about that. But in reality, all of the applications that come up here are supposed to be choose-and-run, so I don't have to configure anything at all. When I click Launch, this will essentially be running on your account. So let's say I chose an application that I wanted to run in AWS; the application might be made specifically for AWS, so it
might not run in all the different cloud providers — but you might have applications that are completely agnostic to where they run. What happens when I run it is that it creates the storage — it creates those cloud volumes for me — it creates servers if I don't already have a cluster, it creates load balancers, and it completely manages the scaling of the application based on whatever custom scaling properties I want to give it. Right now it's in pending mode, but one of the things you can see here is that it's got a number of services. If any of you are familiar with Kubernetes, services are kind of like web addresses for a particular application; they might be internal or external. Some of the things that are challenging with something like Kubernetes on the public cloud, if you're running your own clusters and you need specialized things, are actually configuring load balancing and configuring storage. Here you don't have to do any of that — simply launch an application. Once it launches, it pulls all the Docker containers, and I can actually peer into each Docker container specifically if I want to. I can even log into a container with a console, so I can debug this on the fly. I can manipulate the replicas and the scaling of the application, and — because we're tracking all the health information here — once it's running I can simply click Services, click on that external load balancer IP, and now I have my application running and ready to use on top of Cloud Volumes. How does this fit into that data fabric strategy you talked about? Because to me this is a massive departure from what you were talking about previously — you're kind of NetApp storage and enabling — that's right — storage and data in the cloud, and now you're talking about orchestration. What this really is, is pure Kubernetes — we're not manipulating Kubernetes in any way. You
could choose to use a Kubernetes cluster like AKS, or the native Kubernetes services of the cloud provider. What we've done here, though, is handle the data management piece for you, and one of the biggest reasons we want to do this is so that you as a company can do things like snapshot the whole application stack at once, or create a test version of your full infrastructure with one click. That's why we're doing it: we're making it super easy for companies to get on board with containerized applications, with existing templates for applications, and to be able to use this new technology without having to dive into the fine details right away. So marketing could go in here, click on WordPress, and get WordPress up — but that's not a huge value for the data side of the company. In my use case before, though, when I brought a data set to the public cloud into a cloud volume, now I can suddenly use that either through this, or through a data connector like we have for HDInsight or Databricks, and plug the data into those value-adding services. So this is kind of where we're heading, and I think probably everybody in here is thinking, "wait, they're a storage company — why are they doing applications?" We're not a storage company; we're a data management company. That's the message I really want to get across, and we're going to do some pretty interesting things. Is that still an interesting thing to get across and translate? So you're a data management company, yes — but that doesn't necessarily mean you need to do application orchestration, right? I understand you're adding certain bits to it, but data management doesn't necessarily lead to "with data management, therefore we need to orchestrate Kubernetes." True — you're absolutely right. But think about it this way: if our customers can bring data to the public cloud in cloud volumes that's performant, and we or third
parties — partners — can bring a template for an application they need, and it's automatically connected to the cloud volumes that you have, that's a big value proposition for them. Nobody's going to do it for us. We have the connectors, like Trident, for different systems — you can use it in Mesos, you can use it in plain Docker — but we have a vision that we will be able to create much more value, faster, if we push this thing out first. So we're not trying to go out and compete with different application orchestration layers; we just want to make sure that this value proposition is a reality for customers of Cloud Volumes, and an additional value to Cloud Volumes once we push it out as a service. So it is in fact part of that data management service vertical called data integration and orchestration. I sort of see that — you gave an example using your service to spin up the Helm chart and use cloud volumes, but the Cloud Volumes API is open for anyone to use? Yes — it doesn't have to be the web portal doing all the spin-up; you could be doing it with native cloud services. Yeah — you could have an example which is "here's my CloudFormation template, or whatever, that builds up Kubernetes or spins up some EC2 instances and connects to that cloud volume." But NetApp has taken not just one step but several steps further, to build your own orchestration piece. And I'm not saying that's a bad thing — it's just a big leap from "this is NFS, and a cool way to spin up NFS, and these advantages in performance" to "now we can orchestrate and build these things," and that might just be lost in that bullet point, because I don't think anyone reading "data integration and orchestration" reads that as "we will spin up Kubernetes and Helm and all of those pieces." Yeah — I mean, I think,
from my personal perspective, you build things that you need — things that are not available, or that don't answer the need that you have. The orchestration piece here sits on the data side; what you're seeing now is the application side, and I can't go too deep into what's next after that. But think about it like this: these are all data management services, and we're getting to that elusive data fabric vision that we want people to have as a tangible thing — if I explained it correctly. We're trying, with this preview, to answer a need that people have on the application and development side particularly, but also to offer a solution to some of the optimizations that we know are possible, but whose knowledge isn't available to everybody today. So instead of trying to teach it to everybody, let's create those templates so that you get that benefit right away. That's kind of what I'm trying to get to. We were talking about SnapMirror and egress and ingress and how you manage those kinds of pieces — people are trying to get their heads around how they might use this, and you've kind of put this in place going "well, this is how you could use it." Yeah — one way to use it is that we'll spin all that up for you, and you can just start using it, and we will do the optimization and things underneath, so you don't really have to worry about some of the core parts. All right, I think I've exhausted my time and probably everybody's patience. It was good, yeah. Thank you very much for listening — I'm a startup guy, so I might ramble and go in different places, but I hope you could feel my excitement for what's coming at NetApp, and think that we can actually do cool cloud stuff. That's a preview — when is what you've just shown us about
to become something real? So — it will definitely coalesce with some of the other products that we're pushing out. We've got a number of interesting things — not only the SnapMirror things, but also things that will help you understand why you should move stuff to the cloud, or move stuff back; we're thinking about all of those use cases. So we won't roll it out until we have a critical mass of the other services that will follow, which we use internally to build this. So: soon, but not right now. All the stuff you just demoed, though — Cloud Volumes is available right now, Cloud Sync is available right now, SaaS Backup is available right now. The application side and the compute orchestration — are those orchestration pieces available now? No. The compute orchestration — the VM orchestration, bare metal, and all that — that part is basically Qstack, which is what NetApp bought with Greenqloud, and we've not released that yet. Can I ask — you started this presentation with a reference to being the pirates of NetApp. There are two sides to this: what's the response of the wider company to this kind of innovation? There's a generation of people who have been in this company for a while who are used to NetApp behaving in one particular way, and this is quite a big departure from what was definitely the past. What kind of cultural changes are happening internally, and what resistance is there to this sort of thing? That's kind of the surprising thing: everybody is on board with this, because it's not a singular step away — we're not stepping away from on-prem or flash or anything like that; it's all part of that data fabric. And I'd really like to show you all the stuff that we've got planned, but it will start to make more sense when we start to show you — you know,
open all the packages and all the presents that we're trying to make. Is that the roadmap slides? Yeah — well, maybe later, I don't know. To answer your question: NetApp has recently gone through a complete change internally. The company has been carved up into business units, and the Cloud Data Services unit is focused on cloud first — that's our only mission: cloud first, playing with the big hyperscalers. People who are new to NetApp have no idea that the ingredients for the cloud business unit are tried-and-tested software: OCI — OnCommand Insight — which is a monitoring and analytics solution that's widely used; things like AltaVault, which is backup to S3; there's SaaS Backup and Cloud Control; there's the whole Qstack team, which I think has almost doubled since the acquisition; and there are a number of different teams that were in other places in NetApp but have now come together underneath the umbrella that is Cloud Central, all working towards a single controllable API and orchestrating all these things together. To be able to do that, we actually rely on things like a version of ONTAP that's specifically built for us, with way shorter build and release cycles than the public release of ONTAP has. So we are working at a different pace, I would say, than the rest of the traditional products in the company — but at the same time we couldn't do any of this if we didn't have ONTAP and the excellent software developers that we have in all the departments. That's the politically correct answer, but the actual reality is that we're all trying to get to the same place, because people know that
we as a company, unlike most of our competitors, have realized that we have to be a cloud company. I must admit I was a bit skeptical at the beginning, but I've gradually warmed to it. You only have to look at the financial reports to see that the cloud business is going up for NetApp. Since Anthony came on board — he's my VP — and Brendan and all of the leadership within the cloud business unit started working on these services, George Kurian and the directors of NetApp had to change the structure of the company so we're aligned to be able to do this. So you've got your next-generation data center, the HCI stuff, you've got ONTAP, and we've got SolidFire and all these — but we're also trying to get to that common platform. It's going to take a while, but I think our release track record since joining has been pretty good. And the sign-up that you've got on AWS — one could sign up for that currently? Yes. When we announced the private previews, we did the sign-ups for that; the private preview for Azure is functionally over, and we're now gearing up towards public preview, so the sign-ups that we collect now would go into the public preview pool. I'm not the right person to ask how many companies or individuals we're going to bring into that, but anybody can sign up. On the billing side there's a slight difference now — we offer two models, really. We offer bundles, what we call controlled availability, for customers that want a capacity pool that they might use for as many volumes as they want, and they can pre-buy that at a discount during the preview time
that we're working with them. And then we have the on-demand, metered billing, which is 30 cents per gig — and it's 30 cents per gig because that's the premium tier. We'll be rolling out a lower-performance tier, maybe two lower-performance tiers, because I know some of you might be thinking, "would this be usable for backup or DR?" Absolutely — we will have a version, a service level, for those applications too. This might sound like a silly question, but I know VMware has announced a partnership with AWS. What's the chance that an ESX host in AWS could mount an NFS volume from NetApp? It could do that, yeah — we can share into the same VPC and do it. And that opens up interesting things — do you want virtual desktops? There are a number of things you can do once you have this as a possibility; it opens you up to many, many new things. You could do iSCSI as well if you really wanted to — one of the NetApp things does iSCSI: not Cloud Volumes, but ONTAP Cloud does iSCSI, so you could totally redo everything you do on-site in the cloud, for some insane reason. There's an advantage there — you can laugh and scoff at it, but it's part of what creates the friction to legacy footprints adopting cloud. I mean, maybe I'm just looking at this from a pretty different perspective than most people, but I think this is really interesting — as an enterprise guy who's looking at ways to change things up, but seeing the hurdles and what creates that friction, I'm seeing ways that this just kind of smooths that along. You've got some work to do — I'm not saying this is a slam dunk — but it sure seems like you're heading in the right direction. That's great to hear. May I ask a question? It's outside the presentation —
but if I understood, you're working on the Trident project? It's supposed to be a storage orchestrator — so what's the difference from Kubernetes, since Kubernetes is going to be the orchestrator for containers? What about a storage-only orchestrator? I don't really understand the goal of the Trident project. OK, so essentially the downside of containerized applications is that storage was kind of an after-the-fact bolt-on, and the reason why is that containers in general were first thought of for stateless microservices — something that does one thing, while something else takes care of the persistent data it needs — because we want to get things up fast, run them in multiple places, have load balancing and all that. Then we come to the need, and the want, for stateful applications — databases, processing that leaves a result behind that you want to use later — and then you had to figure out how to get volumes onto Docker, Docker being the first container engine. Well, not the first, obviously; there are different container engines, but they were the first to package it in a usable format. So a number of container orchestration engines have been built: there's Docker Swarm, there's Mesos, there's Rancher with Cattle — there's not a ton, but there are a few orchestrators out there. Now, the problem with deploying a containerized application is that you have lots of containers, because each container is supposed to do just one thing really well. So how do you run all of these, and contain them, and give them data? That's where the schedulers and orchestrators came in, and to figure out a way to bring them data, projects like Kubernetes started figuring out things like: as an application
developer, you are different from the storage admin, and you're different from what you'd call the application admin. The application admin would create persistent storage for you and then say, "here's an NFS address — put that into your application and we'll be fine." What Trident does, instead of that manual process — which can be faulty and isn't automated — is let you, once you have a volume connected through something like Trident, snapshot it automatically and build policies around it. So what Trident does specifically for Kubernetes is this: when I define my application, I say, OK, I want my MySQL database and the volume that it needs. I don't necessarily care what format the data is in; I define that it's read-write-once, read-many, or read-write-many depending on how I'm going to access it, and I just say I want a 10-gig volume. So instead of having the storage admin figure out how to give me 10 gigs, I create what's called a persistent volume claim, which is just part of the application definition. Once that's in the system, Kubernetes, with the help of Trident, will go into the Cloud Volumes API and create a big enough volume for you — and not just that, it will also mount it for you and attach it to that container, taking away multiple steps to make it super easy to consume. Does that answer your question? Yeah. So it's similar to what I did in the terminal before, but completely automated. I was reading very quickly about this project — could I consider it a tool to manage decentralized storage? No, it's not a storage driver; it's simply an orchestrator. It doesn't create anything itself — it can only use APIs — so it doesn't create any sort of distributed object storage or anything like that.
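The persistent volume claim described above can be sketched as a small piece of YAML in the application definition. This is a minimal illustration, not taken from the talk: the claim name `mysql-data` and the storage class name `netapp-cloud-volumes` are assumed examples — the real class name depends on how the Trident backend is configured.

```yaml
# Hypothetical PersistentVolumeClaim: "I just say I want a 10-gig volume."
# Trident watches for claims like this and dynamically provisions,
# then mounts, a backing volume via the storage API.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data            # assumed example name
spec:
  accessModes:
    - ReadWriteOnce           # or ReadWriteMany for a volume shared across pods
  resources:
    requests:
      storage: 10Gi
  storageClassName: netapp-cloud-volumes   # assumed Trident-backed class
```

A pod would then reference the claim by name in its `volumes` section (`persistentVolumeClaim: claimName: mysql-data`), and the container never needs to know the NFS address or how the volume was created.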
Info
Channel: Tech Field Day
Views: 6,621
Rating: 4.909091 out of 5
Keywords: Tech Field Day, TFD, Cloud Field Day, CFD, Cloud Field Day 3, CFD3, NetApp, Eiki Hrafnsson, Cloud Data Services, Data Fabric
Id: VZKk7sI0lKc
Length: 114min 2sec (6842 seconds)
Published: Fri Apr 06 2018