NetApp Cloud Volumes on Amazon AWS and Azure

Captions
Cloud Volumes is available right now in both AWS and Azure. I'm going to demo the AWS version and show you a couple of screenshots from the Azure version. Both are in private preview right now, and we're going into public preview in May, which means we're going to accept more customers. Right now we've basically been having companies small and large kick the tires: try it, run difficult workloads on it, run small workloads on it, see what happens. We've been learning from that and figuring out where we go with the service in the near future. On AWS, because that's really our service and controlled completely by us, we're doing more continuous release, so that service gets updated quite often rather than being rolled out in private preview / public preview / GA timeframes.

So first of all, I log into AWS. You probably saw I pasted a marketplace link there — this is a hidden, private marketplace page, since we're in private preview, but I wanted to show it to you anyway, because similar to any other service in AWS, you sign up to it through Marketplace. You sign up by clicking Subscribe; it doesn't cost anything, and you can start using the service right away. So I'm going to go ahead and do that. Now that I've subscribed, that's it — NetApp has been notified that you subscribed. Then when I click this link here — if this were the first time I subscribed, it would take me there directly — I'm taken to what we call Cloud Central. Cloud Central is where we unify all of our cloud services. Right now there are only a handful of services in there, but this is where we're going to be rolling out the next version of Cloud Volumes, the multi-cloud stuff, the app orchestration, and other things.

I'm going to log in here, and this uses single sign-on — different services might have their own site and their own console, but you get the idea: single sign-on. It actually works with the NetApp single sign-on, so if you're already a customer, we know you and you don't have to register, but we definitely want you to register if you're a new customer and give us a little bit of information about yourself. If you want the technical detail, we're using Auth0, so effectively we could federate with pretty much anything, but right now it's basically the NetApp SSO and a registration. When you create an account here, that becomes your master account, and what you see here is the UI for Cloud Volumes only — if I wanted to use, for example, Cloud Sync, I have an option here on the side to jump between these different services.

So what I'm going to do now is create something like a 10-terabyte volume. There's a 100-terabyte limit on the size of a volume now, but we're working towards multiple petabytes per volume. What imposes the limit — is it something with the providers, or the software? No, it's just a virtual limit that we decided to start with, and it has to do with how we architect it on the backend. Rather than roll out hundreds of petabytes right away, we want to roll through public preview and up to GA on these limitations — it's a gated limit.

OK. So now I've subscribed and logged in, and I'm going to go ahead and create that 10-terabyte volume. I'll call this "machine learning". I give it a name, and it generates a unique volume path for me, which I can change to pretty much anything. You might recognize the style of these generated names from the Docker world: famous scientists and authors combined with some fun adjectives. The reason we do that is that you want each volume to have a unique path, but you also want it to be humanly understandable — you don't want a volume that's called 32 letters and dashes; how are you going to communicate that? So: better UX.

Anyway, in my account I only have access to AWS, but here you can see I can select AWS and the region — this is just a single region right now. I can also select the time zone. The reason you might want to do that is that when you set snapshots and backups and that sort of thing, you might be thinking, "I want to make backups in the time zone of the customer, not where the data is stored." You'll see that I can create a volume directly from a snapshot — this could be from any volume that I've created before, not necessarily this one, or from this one if I actually wanted to revert. And I can select a service level: as you saw before, in the private preview we're running at a premium service level — they're called different things between the cloud providers, but it equates to roughly 3,000 IOPS per terabyte. I can set the quota here, which is really just the size; I'm not going to do exactly 10 terabytes, I'm just going to wing it — that should do.

The second thing I can set here — I don't actually need to change it — is which protocol I want the volume to expose, and this is actually a multi-protocol select, so I can have a volume that's NFS and SMB at the same time, which is a huge benefit if you're running both Windows workloads and Linux workloads. For the AWS service we've only opened up NFS so far. This also acts kind of like an ingress rule: here I can add an additional security measure saying I only want this instance, or this IP, or this hostname to be able to talk to the volume — in addition to what the cloud provider can give you.

And lastly, because we're all about automation, I can tell it to create a snapshot policy for this volume right away, so that I don't have to do that manually. This is something that happens asynchronously, meaning it runs all the time, and the service will let me know if something goes wrong — or right. I can do things like keep two snapshots; you can see the explanation here updating dynamically, so it says: take a snapshot every hour on minute 53, and keep two. Let's put 53, because it's 52 now, so I can see whether I can create a volume that automatically makes a snapshot of itself. While it's doing that — OK, it's still creating; it's setting up the networking, then creating the network path, since this runs in the VPC — and now the volume is available.
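Everything in that creation flow is also exposed programmatically. As a rough sketch of what the API-driven path could look like — the endpoint URL, JSON field names, and auth header below are hypothetical placeholders for illustration, not NetApp's published API:

```shell
# Hypothetical sketch: creating the same 10 TB NFS volume via a REST call.
# The URL, field names, and token handling are illustrative placeholders.
curl -X POST "https://cloudvolumes.example.com/v1/volumes" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "machine-learning",
        "region": "us-west-1",
        "quotaInBytes": 10995116277760,
        "serviceLevel": "premium",
        "protocolTypes": ["NFSv3"],
        "snapshotPolicy": { "hourly": { "minute": 53, "snapshotsToKeep": 2 } }
      }'
```

The point is simply that the whole UI flow — name, region, quota, service level, protocol, snapshot schedule — maps to a single declarative request.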
All right, so one of the challenges that I commonly see: this is great in a vacuum, but the reality is that with cloud you have ingress and egress charges — data transfer charges. Well, that wouldn't happen if it's all contained within the cloud, but what about that bidirectional connection between something you might have on-prem versus leveraging cloud for storage? So, the connection to the provider might impose some egress charges, but there are no ingress charges, so I can stream data into the cloud, into Cloud Volumes, for free. But the egress charges make up for the difference. Well, a lot of that depends on whether you're going out of the cloud or not — not everything will go back and forth. That aside, I'm just wondering if there's maybe something that starts to get smarter about the data that gets transferred, so that you can limit those charges. To be clear, with the amounts of data we're talking about, you wouldn't just try to push those bits over the pipe. I have no idea whether we've announced it or not, but we're definitely thinking about a Snowball-type situation, where you can take a box and basically ship it to Cloud Volumes.

We were talking with NetApp — this was a couple of years ago at least, when we started talking about data fabric — and the idea at the time was that there would be an ONTAP-for-cloud instance running in a non-Amazon, non-Azure data center, but with a high-bandwidth connection to an Amazon or Azure data center, and that would let customers avoid egress fees, since they wouldn't have to worry about what they're doing with their data. Is that still available, or is this a new service that's going to be offered alongside it? Are we talking about NPS — NetApp Private Storage? That's still a service that's available; this is not replacing that.

But to the point of all this: that whole network egress charge thing really breaks the multi-cloud story, which for all of the cloud providers is like, "hell yes, I'm going to dig a moat" — Amazon absolutely doesn't want you to leave, and Azure doesn't want you to leave either. I don't think it's that bad, because if you're using SnapMirror to replicate the data, whatever you update while you're in the cloud, only the delta changes are what you then pull out — SnapMirror keeps the sync between the two — so it might not be as bad as you think. The first set of data that you replicate, you've got to push in, but that's traffic going in, isn't it? So that's fine: send the initial load to each of the two providers, and then the only thing that ever crosses is the changes, which you don't care about. The only problem with that is if you start creating data in one environment — that's a different scenario; if new data is being created, you're going to have to push that across. But you kind of had this issue already: if I created data in a data center here and I've got another one on the other side of the planet, moving that over a network takes a long time, but it is possible. And if I'm leasing the line, I don't pay per bit that runs over it — I pay for a 10-gigabit network connection, and that's it; I prioritize on it.

I think the key there is how we drive down that cost, because the cloud provider is going to set a lot of the rules — everybody's going to have to play by their rules. So how can we minimize it? And you made a great point: using something like SnapMirror is going to help with dedupe — you're not transferring all the data, just the changes. There are other options, like cloud backup, which gives you deduped and encrypted archives in something like S3 from any one of these data points, whether it's on-prem, Cloud Volumes, or somewhere else, and those take up much less space and are cheaper than a pure file-based copy.

It's just that the prospect of the data fabric — where I have my data wherever I want it to be — gets killed by this egress thing, because I have to be careful about where my data is because of egress. I can't just have all of my data everywhere unless I want to pay a lot of money for it. I've debated this for a long time: do I want my data everywhere, or do I want the metadata of my data everywhere, and then on access pay the egress to have the data transported with some delay? Is the real use case for most of these multi-cloud solutions to have the bits and pieces of knowledge about what data I have in the enterprise, or is it literally that I have to have my data everywhere? It's a tough challenge. I actually think it's more pointed than that: I don't think you can answer that question in a vacuum, period. You have to understand what the application is, how the user is interacting with it, and where the data needs to be to support that application; then that starts to govern your options in terms of how you manage the data. And real quick, the last piece to that: I do think that as you start to move into a cloud-based world, you can't bring those traditional ways of managing data forward. I agree. Thinking about this from a large enterprise, being a service provider to internal customers: one customer will have one set of requirements, another will have different ones, depending on the application. What I can say at a level above that is that I would love to have my data everywhere, so that I can serve each individual client as needed — I could turn knobs to say, in this region of the world I have two petabytes of storage and replicate data as needed; in this other part of the world I have one terabyte of cache, or whatever the solution is. But the key — and I think this is one of the gaps I see in NetApp's data fabric vision — is that I don't see how you're facilitating that metadata conversation: how do I solve the problem of knowing where my data is in the world and what data it is? That, I think, is a key part of the story. And I can tell you with high confidence that that is exactly where we're going. It's more than just managing the bits — managing the bits is one thing, but getting the data to the right applications, the right compute, the right services that offer you new value from your existing data: that's the important part we need to solve first. Regarding the egress side, it depends on where the data is: if the data is centrally located at the customer's site, and there are no ingress charges, they can effectively send it to all of the Cloud Volumes services they can find and pay nothing for that. It only hurts when you're talking about egress between cloud providers — and within cloud providers the egress depends on region and zone; there are a ton of things to think about there.

All right, I'll get back on track — sorry for that, but I liked that conversation and I hope we can continue it later. So I made a volume; it took probably about 10 seconds. Doing this with any type of service that's out there today would take me probably hours, if not days, to get set up. Now I want to actually use it, so I'm going to mount this volume in an EC2 instance. Let's go back real quick, just so you see me go all the way through: I log here into EC2, into Instances — here's my demo instance and here's my IP address. Let me put this side by side so you can see how easy it is. SSH in, and I'm
going to log into this Ubuntu server running in us-west-1. Since the volume is NFS, I'm going to use NFS as the protocol. Mounting is something you need sudo for; I'm logged in with sudo here, so I'm the root user. You can see there's nothing there yet, so doing it from scratch, what I need to do now is copy that mount command and paste it. Based on the level of performance I selected, we show you different mount commands, optimized for the workload you're trying to do — this might, for example, be a high-performance database that I'm going to use this for. So: copy that, press enter, and my volume is ready for consumption. For those of you who know something about NetApp and have seen ONTAP volumes and NFS shares before, you'll notice the tell-tale .snapshot folder there. The cool thing is that snapshots within ONTAP are such a useful thing for everything from development, to making your applications way faster, to replicating big datasets and bringing them to everybody at once, to making a development environment from a production system. There are so many useful things you can do with this because it's instant. And it's not a backup — many of you might think a snapshot is a backup; it's not, there's another service for that. A snapshot is something like Time Machine on your Mac: you can go back in time, or you can create something new from that particular day or hour when you made the snapshot.

So let's see if there's something in there. You remember I set the automation to make a snapshot 53 minutes past the hour. Now that I've done that, instead of restoring by making a snapshot and then creating a whole new volume from it, maybe I just need one file — and if I want one file from that snapshot, I can simply copy it from within the snapshot, because the snapshot itself is really just a file structure. That makes it even more capable of helping out operations and development: getting back to that state where everything was fine, or maybe I just want to run that test one more time with the script I had, or maybe my Hadoop process failed on a node and I want to restart the job on that node — I can go back to that. I'm going to talk a little later about how to get data into it, but let's continue.

Call me cynical, but how is this not just putting a storage array in an AWS data center and putting a nice UI around it? Help me see where this is innovative and doing something that's deep. Good question. It is and is not a storage array in a public cloud. What's really new here is that you have not been able to consume file-based storage at this performance level, and API-driven, in public clouds at all — and that means some applications, not just enterprise applications, couldn't be there before. That's the big thing. What you could have done before was stand up ONTAP Cloud — launch that appliance — but then you need a NetApp guy, somebody who knows how to set it up and configure it and all that, just to get to the point where you have shared, multi-read, multi-write storage, and that would possibly be subpar for performance. This is native.

I think there's one other bit as well, which you haven't touched on yet but might be getting to: if you look at the way you provision an instance, you get to select where the storage is going to sit, and you connect it through the actual APIs and the GUI directly. With this, the storage is a first-class feature you can add to an instance — it's not a bolt-on.
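The mount-and-restore flow from the demo can be sketched as a shell session. The server address, export path, file name, and snapshot directory name below are placeholders — in practice you'd use whatever the Cloud Volumes UI shows for your volume:

```shell
# Mount the volume (server address and export path are placeholders for
# the mount instructions shown in the Cloud Volumes UI).
sudo mkdir -p /mnt/machine-learning
sudo mount -t nfs -o rw,hard,vers=3,tcp,rsize=65536,wsize=65536 \
    10.0.0.5:/machine-learning-vol /mnt/machine-learning

# Snapshots appear as a read-only .snapshot directory inside the share.
ls /mnt/machine-learning/.snapshot

# Restoring a single file is just a copy out of the snapshot tree --
# no full-volume revert needed (the snapshot name here is hypothetical).
cp /mnt/machine-learning/.snapshot/hourly.2018-05-08_1353/results.csv \
   /mnt/machine-learning/results.csv
```

Because the snapshot is exposed as an ordinary directory tree, any file-level tool — cp, rsync, tar — can pull individual files back out of it.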
It's not separate from the VM, which means any functions you use on that VM, like snapshotting, can be driven by the standard platform API, and it will talk directly to the storage as well — it doesn't have to be done as a separate task, so you become much more closely integrated. I'm not sure whether that's what you intended to show as part of this. As far as integration goes: what you saw on AWS is completely our service — it's running next to Amazon, in the East region right now, with a direct connection. Versus the Microsoft version, which is part of Azure: it's a service that's part of Azure, so you find it in the marketplace and create it there. I'm just showing you the UI, obviously, because it's hard to show nice visual things to a large crowd with a command line and APIs, but the same things I just did in AWS, I can do on Azure — behind all of those subscription rules, my Active Directory access, all of the things that make Azure native. So here we've got multiple volumes in different resource groups, maybe in different regions; some of them are large, some are small. The same thing applies: it's as easy to use in Azure as it is in AWS, and it will be in other providers as well.

Would that then turn up in the NetApp cloud portal you showed before? Eventually, yes — we haven't opened that up yet. It turns up at the moment if you choose to, because you need to essentially allow us that access so we can combine the two. That's where the Auth0 thing comes in, where you essentially federate the API calls for these UIs. And in AWS you have to do it there because — yes, exactly.

Can you tell us what the difference between an NFS share and an SMB 3 share will be within the NetApp Cloud Volumes context — is there any, really? Really, no. Well, NFSv3 versus SMB 3: there's obviously a big difference. You want SMB for speed, performance, and compatibility with Windows — that's pretty much what you'd use it for. SMB can obviously be used on Linux and all that, but it's your go-to protocol for Windows workloads. On the NFS side, NFSv3 is really unauthenticated, so it sits behind the network security that you have. When you start a cloud volume, you're using it over a private IP, and — to talk in AWS terms — only the VPC has access to it, only your VPC, and that's protected all the way to the volume, even though this is a multi-tenant service. The reason we can do that is the software-based networking that's built into ONTAP and NetApp boxes, and the way that AWS and Azure structure their networking security.

Now, having said that, you want to be able to use these things equally wherever they are. I'm not going to focus more on that here, but essentially what we're doing now is bringing the services to people where they are: if they're in Azure, they'll want to use this in Azure; if they're in AWS, they'll use it in AWS. I compiled a few use cases for this, and I'll just let you look at it for a little while. LOB is line of business — think about specific applications that matter to the company. We've got some buyers there, the key concerns, and you can see that in all those categories, or most of them at least, people are thinking about performance, and about real integration with the cloud provider — that's important. As for developers — we haven't been playing in that field before, and I would say application developers are really going
to like this, and I'll show you in a few minutes why I think they'll like it. It's important to realize that NetApp is used all over the world today to speed up things like continuous integration and continuous delivery — build servers, that sort of thing. That's where we've been playing; we just haven't been doing it in the public cloud. With NetApp's plugin for Jenkins, it makes builds way faster. We've been playing internally with Hadoop — Hadoop with these volumes is way faster and uses fewer resources. There are very, very interesting use cases here that we'll talk about more when we can talk about the customers that are using Cloud Volumes now. We see a broad spectrum of buyers for this service, and I can tell you that the interest in trying out Cloud Volumes on the different hyperscalers has been really, really high, so we're trying our hardest to put these services out in a public way as soon as we can — but you can imagine there are only a certain number of hours in the day.

Can you go over some more developer-oriented use cases? Yes. There are a number of ways you would consume Cloud Volumes as a developer. One of them: NetApp has created a Docker plugin called Trident, and what Trident does on premises is automatically create the internals you need to be able to create volumes. In NetApp-speak there's something called an SVM, which is like a storage container for the volumes that you create on top of it. Trident is like a controller, an engine for things like Kubernetes, that automatically creates those volumes for you when you describe your application. Trident will do the same for Cloud Volumes in the public cloud as it does on-prem, so you'll be able to deploy applications and automatically create persistent volumes based on the profile the user wants for the application. If someone is building a high-performance, database-backed application, they might choose the extreme service level of Cloud Volumes just for the database, but use EBS, or something slower, for the application layer or the presentation layer. Does that answer your question? Yes.

All right, so — I'm just trying to wrap my head around this, and I'm not as sophisticated as other people in the room, but what I keep thinking about is: is this really a story where the core storage offering from an Amazon cloud or an Azure cloud is almost too basic a building block, and what's missing is a layer between it and where the application, or the developer, really needs to tie into it? I guess what I'm really getting at is: are we missing that refinement of the AWS and Azure storage building blocks, and is that really the problem this is solving — providing a much more refined way, with some management? That's in there too — it's that, and it's actually a number of factors. I like how you put that, but it's a number of factors. It's the ability to do shared files, multi-read, multi-write, which is just not available today — EFS, I'll show you why you don't want to use that versus Cloud Volumes. So it's a capability that hasn't been available before, and people have been working around it like crazy: you create all these schedulers and stuff around storage to get around the problem that you can't just point things at the same storage. It creates new DevOps problems to solve, when we can just bring this to the table and then you don't have to think about it. And it also brings new features, like instant snapshot and clone, which make a huge difference in multiple applications.
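From the Kubernetes side, the Trident provisioning described above boils down to declaring a claim and letting the provisioner do the work. A minimal sketch — the storage class name here is a hypothetical placeholder, not necessarily what the cloud service will expose:

```shell
# Declare a PVC against a Trident-backed storage class (class name is a
# placeholder); Trident provisions the NFS volume behind the scenes.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteMany          # shared multi-writer access, the NFS benefit
  resources:
    requests:
      storage: 100Gi
  storageClassName: cloud-volumes-extreme   # hypothetical class name
EOF

kubectl get pvc db-data     # the claim binds once the volume is created
```

The developer never touches the storage backend directly: the claim expresses the profile (size, access mode, class) and the provisioner maps it onto a volume.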
To give you an example: in development you have a build server and you're building an artifact — basically doing a full build of your application. With NetApp and some plugins for something like Jenkins, or GitLab, or whatever you want to use, you can create a point in time with a snapshot that you can build on top of, so you don't have to start from scratch. And when you build an artifact and it's finished, you can snapshot it, then instantly grab that and create a new volume from it just for that build process if you want to, and make the whole thing way quicker. And using replication between sites, if you've got distributed development teams, there's a whole lot you can do with this that you just could not do before.

So I guess where I was going with this: we never had this problem with the on-prem infrastructure, but this problem now exists — because on-prem you're never writing directly to a drive; you're writing to an NFS volume or a CIFS volume. It seems like that's kind of the layer that's missing in the traditional cloud. Yes — because it's super hard to do; I believe that's the reason it isn't there. And underneath a lot of these services, at least in maybe the smaller public clouds, they actually are using this: they're using NetApp, they're using NFS underneath, thinly provisioned volumes and all that. They're just not exposing it to the end users, because it's hard — very hard — to build, and hard to maintain. As a company — and I say "we" because I've obviously been sipping the Kool-Aid here for eight months — we figured that out a long time ago, and that's the interesting part: now we can actually bring it to the table. I think there are a lot of new applications and different ways of architecting big applications that we're going to see.

I'm trying to figure it out — it creates a new role in cloud DevOps that kind of didn't exist. A new role — or, I said new role, but a new opportunity — to provide, within a PaaS or a development platform, a new layer of data services from a DevOps perspective, even building more capabilities like your build-process example, around something simple that we've had in the enterprise for what, 35 years. Bringing that to public cloud at cloud scale is really interesting.

I'd like to show you a little bit about where we can help. This is not a scientific test; I just wanted to show you, because we were talking about EFS before. This is just a file test which tests random read, random write. It took Cloud Volumes 52 seconds to finish; it took EFS 16 and a half minutes. I thought this might be a fluke, so let's try it again. Like I said, I'm not really a storage guy — I'm a developer; I like to optimize things, performance, I want things to run smoothly — so I thought, let's try it against a local SSD, an EBS SSD, on an extra-large instance. Here I'm using SSD on an m3.2xlarge, and Cloud Volumes was 10 times faster to complete the random read, random write run. And by the way, if you put one terabyte on Cloud Volumes versus one terabyte of provisioned-IOPS SSD on AWS, Cloud Volumes is still cheaper. Is that using an EBS-optimized instance? Yes, it's optimized — I can't remember whether it's storage-optimized or compute-optimized, but it's high network. And that's just the general-purpose SSD. On the cost side, when I did that cost comparison, I just took 3,000 IOPS per terabyte and compared the cost of provisioned-IOPS SSD, which is their most expensive and fastest version, versus the general-purpose SSD, versus Cloud Volumes — I forget the service level — the premium service level.
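A comparable random read/write test can be run with fio against any of the mount points being compared. This is a generic synthetic benchmark with illustrative knobs, not the exact test used in the demo; the mount path is a placeholder:

```shell
# Synthetic 4k random read/write test against a mounted filesystem.
# Point --directory at the Cloud Volumes mount, an EFS mount, or an
# EBS-backed filesystem to compare them; sizes/job counts are examples.
fio --name=randrw \
    --directory=/mnt/machine-learning \
    --rw=randrw --rwmixread=70 \
    --bs=4k --size=1g --numjobs=4 \
    --ioengine=libaio --direct=1 \
    --runtime=60 --time_based \
    --group_reporting
```

Running the identical job file against each backend is what makes the comparison meaningful; only `--directory` should change between runs.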
I'm just curious about the architecture that sits behind Cloud Volumes — is it using AWS primitives to build the volume service? It's a giant black box; I can't tell you. OK, then let me ask the working question: why is Cloud Volumes cheaper? I look at it and assume the standard service probably doesn't use hardware as performant as what you're using, and Amazon's keeping all the margin there. I assume that with Cloud Volumes there's something going to Amazon and something going to NetApp — so why is it cheaper? How do we compete? We're new to this game; we need to compete, and we need to make a service that's highly available and highly performant. The trick — and I don't think it's going to be a secret — is that NetApp has a number of storage efficiencies it can gain from, like dedupe and compression, to make the service cheaper. So you're taking advantage of your own data services. Is the per-gig price raw or effective? It's what you would see if you do ls -l in your location, so it's effective. So I'm paying 30 cents a gig on that. Yes. So we are making the service cheaper by owning the efficiency that we gain from the multi-tenancy — and you're also gaining the instant snapshots and clones from that efficiency. There is a risk for NetApp in that, though — maybe not as much with Amazon — in that if you're storing stuff that has sucky (that's my word of the week) reduction capability — if it's already compressed, already in bad form for reduction — you're going to end up not making much, if anything. It has to balance out over the customer base. Right: NetApp isn't going to make anything on individual customers who aren't using snapshots and are just storing all unique images that can't be compressed or deduplicated.
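To make the efficiency argument concrete, here's a back-of-the-envelope sketch. The 30 cents/GB figure comes from the conversation above; the 2:1 efficiency ratio and the 10 TB volume size are assumed illustrations, not quoted numbers:

```shell
# Back-of-the-envelope: billing on logical (ls -l) capacity while only
# storing the deduped/compressed physical footprint underneath.
LIST_CENTS_PER_GB=30     # 30 cents/GB-month, from the conversation
LOGICAL_GB=10240         # the 10 TB volume from the demo
EFFICIENCY=2             # assumed 2:1 dedupe + compression ratio

BILL_CENTS=$((LOGICAL_GB * LIST_CENTS_PER_GB))
PHYSICAL_GB=$((LOGICAL_GB / EFFICIENCY))

echo "customer bill: \$$((BILL_CENTS / 100))/month for ${LOGICAL_GB} GB logical"
echo "physical footprint after 2:1 efficiency: ${PHYSICAL_GB} GB"
```

The gap between logical and physical capacity is where the margin lives — which is also why a customer base full of incompressible, unique data erodes it, as the exchange above points out.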
to store all unique images that can't be compressed or deduplicated. So I'll make kind of a bold statement and say I think you have to price it that way, because at the end of the day someone else is going to figure out a way around it. Why not cut them off at the pass and just say: look, here's the reality, we're going to get there, so let's just get there. I just want to understand the business model.

So I'll tell you: it's Jevons paradox. You lower the cost to the point that more people start to use it, and then more people start to use it even more. That's right. Or you can follow the Amazon model, which is to make sure you keep growing fast enough that, as long as everyone keeps clapping, the fairy stays alive.

What I can say about the pricing is that I'm glad you think it's too cheap. Maybe it wasn't clear enough in the beginning: Amazon started with S3 and EC2, and you can build everything else on top of that. I think of Cloud Volumes the same way. We're building the foundation for a ton of other services that add value and also create a lot more revenue for NetApp.

A couple more questions. Can I mount, or is there an option in the future to mount, a volume that I'm presenting into AWS into Azure as well at the same time? From our perspective you can do whatever you want. What you would need to do, and I'm not condoning it, is open up a network path from your service to the volume. Because it's just a protocol, it's the same as with EFS: you can mount EFS on your desktop at home. The performance is going to be terrible, but you can do it. I'm just wondering about the egress
and ingress charges we were talking about, if I'm mounting it from another cloud. Does the volume live at NetApp, and am I mounting it separately from each service? No, the volumes in Cloud Volumes live in each hyperscaler. So you wouldn't necessarily do that; even if it's technically possible, I wouldn't, but I was just curious.

I noticed that the mount path was a private IP address. Yes, always. Are you using VPC peering to create that connection, or do you have to reserve a block within your VPC? It's different between the cloud providers: some opt for peering, some for peered subnets, and not all of them support both. The reason I don't say it works like this in this cloud and like that in that one is not that it's a huge secret, but that it might change. Right now, in private preview, some of it runs over Direct Connect on your own VLAN, with NetApp bearing the cost for testing purposes, and some of it might actually live in the data center right next to the other services. So I can't speak to exactly how that IP address is delivered.

Is that IP address highly available? Say in AWS, if I lose an availability zone, will that IP still work in the other availability zones? Yes. The fault domain is the region; it's region-based. So the actual implementation spans availability zones: you can lose an AZ and still have connectivity, but if you lose the region, you lose the service. And you're making the IP address itself highly available as well? Yes. So underneath you're doing a similar thing to S3, replicating it? Technically we're using ONTAP features that provide software-based IP failover.
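The regional HA just described, a single mount IP that survives an AZ failure via software-based IP failover, is transparent to NFS clients, but there can be a brief pause while the address moves between nodes. A client-side sketch of the usual pattern for riding that out is retry with exponential backoff; `with_retries` and the `flaky` operation below are hypothetical names of mine, not part of any NetApp tooling:

```python
import time

def with_retries(op, attempts=5, base_delay=0.5):
    """Run op(), retrying on OSError with exponential backoff.

    Useful for riding out the short window while a highly available
    IP address fails over between availability zones.
    """
    for attempt in range(attempts):
        try:
            return op()
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Example: a flaky operation that succeeds on the third try,
# standing in for an NFS request issued mid-failover.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise OSError("endpoint failing over")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # prints "ok" after two retries
```

In practice the NFS client itself retransmits when mounted with the usual `hard` option, so this pattern matters mostly for application-level operations with their own timeouts.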
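On the earlier question of mounting a cloud volume from another cloud (or from your desktop): since the export is a standard protocol like NFS, all that's strictly required is a network path to the mount target's private IP, typically TCP port 2049 for NFS. A minimal, assumed pre-flight check before attempting such a mount might look like the following; `nfs_reachable` is a name I made up, and the address shown is a documentation-range placeholder:

```python
import socket

def nfs_reachable(host, port=2049, timeout=3.0):
    """Return True if the NFS endpoint accepts a TCP connection.

    host is whatever private IP the cloud volume exposes as its mount
    target; reaching it from another cloud assumes you have set up
    peering, VPN, or Direct Connect, and will tolerate the egress cost.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check before running e.g. `mount -t nfs <host>:/export /mnt/vol`.
# 192.0.2.10 is a documentation-only address, so this prints False.
print(nfs_reachable("192.0.2.10", timeout=0.5))
```

A True result only proves the network path is open; the actual throughput over such a path is a separate matter, as the EFS-on-your-desktop example suggests.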
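The pricing argument from earlier in the Q&A (bill on the logical capacity you would see with ls -l, store only the deduplicated and compressed physical bytes, and let good and bad reduction ratios balance out across tenants) can be sketched as back-of-the-envelope arithmetic. Apart from the 30 cents per gigabyte mentioned in the session, every number here is an illustrative assumption:

```python
def provider_margin(logical_gb, price_per_gb=0.30,
                    efficiency_ratio=3.0, raw_cost_per_gb=0.28):
    """Estimate gross margin when billing on logical capacity.

    efficiency_ratio: logical / physical bytes after dedupe and
                      compression (assumed; varies with the data set).
    raw_cost_per_gb:  hypothetical cost of the underlying physical
                      storage, deliberately set close to the price to
                      show why reduction ratio matters.
    """
    physical_gb = logical_gb / efficiency_ratio
    revenue = logical_gb * price_per_gb
    cost = physical_gb * raw_cost_per_gb
    return revenue - cost

# Highly reducible data leaves a healthy margin...
print(round(provider_margin(1000, efficiency_ratio=3.0), 2))
# ...while unique, pre-compressed data (ratio ~1.0) barely breaks even,
# which is why it has to balance out over the customer base.
print(round(provider_margin(1000, efficiency_ratio=1.0), 2))
```

The point of the sketch is only the shape of the model: the tenant pays for logical bytes, the provider pays for physical bytes, and the gap between them is where the multi-tenant efficiency lives.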
Info
Channel: Tech Field Day
Views: 842
Rating: 5 out of 5
Keywords: Tech Field Day, TFD, Cloud Field Day, CFD, Cloud Field Day 3, CFD3, NetApp, Eiki Hrafnsson, Cloud Data Services, Data Fabric, Amazon Web Services, AWS, Amazon AWS, Microsoft Azure, Azure
Id: ovs2R8c3sSk
Length: 46min 41sec (2801 seconds)
Published: Wed Jul 25 2018