Migrating Data to AWS: Understanding Your Options - AWS Online Tech Talks

Captions
Hi everyone, welcome to this webinar on migrating data to AWS. My name is Chris Rogers, global BD on the storage team, and I'm joined today by Brian, who's also on the storage BD team, and Paul Reed, who is a principal product manager for Storage Gateway. We will be talking to you today about data migration with AWS, and I wanted to say, before we go too far, that you do have live experts standing by to take your questions. We're going to ask a lot of rhetorical questions, talk about some use cases, and go through some of the services, and some of those questions we might not have full and complete answers to. But if you have questions about your use cases and things that are specific to your migration, please ask; that's what we want to talk about today. So thanks again for joining and taking the time to listen.

As we jump in, we wanted to start by thinking about some key questions customers might ask about migrations, and it really starts with the whos, whats, and whys. There are lots of different options when you think about a migration, and for the customers listening, I think the first question is: why are they going to move data to AWS? Why do they want to move to the cloud? There are some follow-ons there, so I'll start with you, Paul: as you're talking to customers day in and day out, what are some of the reasons they tell you they want to move?

Thanks, Chris. We hear from lots of customers who want to just migrate, period: the all-in customer. They have existing on-premises storage, and in many cases they want help moving that storage from where it exists today in their data centers into AWS and into one of our native storage services. So migration, all-in migration, is certainly a big part of the discussions we have with a number of customers. Another use case I hear a lot is customers who have workloads in this hybrid world: I have a workload that runs on premises, and I want to do some processing in the cloud as well. Think of manufacturing production facilities generating data on premises, maybe on a manufacturing line; I want to analyze that data in the cloud, so I need to be able to move it up for analysis, and maybe even bring the results back on premises to tune my production facility and get better yield out of it. So again, it's this idea of active data movement between on premises and the cloud: shipping data to where you want to process it.

That's great, thanks, Paul. There are a lot of reasons people want to move to the cloud. The other question people face, and Brian, I'll go to you for this one, is: what is the data that you're moving? That sounds like it might be an obvious question, but we find that customers have challenges just identifying which data to move, whether they need to prioritize it, or they may not even know where the data is. Can you chat about that a bit?

Certainly. A lot of our customers run into situations where the data itself is collected in distributed sites; we find that a lot, and Paul touched on it with the industrial use cases and the customers in those segments.
We also run into a lot of cases where the data is active versus non-active, a.k.a. archived. That's a common theme as well: the more active data may need to be moved in one way, and the less active data in another.

Great, thanks, Brian. The next question that logically follows is: where is the data going to go? That's where AWS comes in, and we'll talk about some AWS services today that give customers options for where to put that data. Probably the best-known storage option with AWS is our Simple Storage Service, S3, which tends to be a source or destination for many people's data, but there are other options like Amazon S3 Glacier and Glacier Deep Archive, which are cold storage options, and I think we'll talk about those. Specifically, these guys are here to think about migrations and the movement of data: Brian, you work a lot with Snowball, and Paul, you work with AWS DataSync. Snowball is an offline method for moving data to the cloud, and DataSync is an online method, and we have some use cases we'll talk about where those two services actually complement each other.

Another question that comes up is: when do you need the data? Brian, I know you talk about this a lot with Snowball: what's your time frame for moving the data? Do you need it next week, next month, or next year? Can you talk about that?

Yeah, a common theme with our customers is time to value: when I need the data is tied to what I need to do with it and what value I need to derive from it. I talked a moment ago about less active data; there is certainly less need to move that data quickly, unless there's a data center closure, an end-of-life, or some other impetus for moving data that currently sits on premises into the cloud, whether the destination is Amazon S3 or Elastic File System. Those questions come up a lot, and when I look at the customer conversations we've had, I find it varies dramatically: sometimes it's a corporate mandate to move the data, and sometimes there's a physical reason, meaning there has actually been a disaster of some sort and they need to move that data very fast. That helps delineate what type of migration mechanism you use from AWS, or in some cases from partner solutions.

That makes a lot of sense. Time certainly drives the migration strategy; the other aspect is how much available network capacity you have, which could drive you not to Snowball but to DataSync, which is an online migration tool. Paul, can you talk about how much it factors in whether the customer has usable network capacity, and how much they need?

Yeah, this is a really important consideration for moving any data: basically, resource usage. How much bandwidth do you have from your site to AWS, or to the internet? And how much bandwidth do you have on the storage system the data lives on? At the end of the day, you have to get the data off that system, whether you're putting it onto a Snowball to ship it on a physical device or putting it over your network. Understanding those constraints in the underlying infrastructure you're going to move the data across, and factoring them into the time frame and the expectations you set, is super important to get right up front.
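As a concrete aside for readers: a minimal back-of-the-envelope estimator for that time frame, in Python. The link speeds and data sizes below are illustrative assumptions (real transfers rarely sustain full line rate, hence the efficiency knob):

```python
def transfer_days(terabytes: float, gbps: float, efficiency: float = 1.0) -> float:
    """Rough days needed to move `terabytes` over a `gbps` link.

    `efficiency` < 1.0 models protocol overhead and link sharing.
    """
    bytes_total = terabytes * 1e12
    bytes_per_sec = (gbps * 1e9 / 8) * efficiency
    return bytes_total / bytes_per_sec / 86_400

# A dedicated 10 Gbps link moves roughly 100 TB per day (about 10 days/PB).
print(f"1 PB @ 10 Gbps: {transfer_days(1000, 10):.1f} days")
print(f"1 PB @  5 Gbps: {transfer_days(1000, 5):.1f} days")
# A hypothetical 50 Mbps satellite link at a remote site: why Snowball exists.
print(f"75 TB @ 50 Mbps: {transfer_days(75, 0.05):.0f} days")
```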
Exactly, well said. Just to recap this discussion, because it frames the rest of the topics we want to dive into with the use cases: understand why you're moving the data, understand what you're moving, and we'll give you some options for where you move it; then the when, how much data, and how much network capacity you have really define the success, or hopefully not the failure, but the efficiency, of your migration.

Let me talk for a few moments about the AWS portfolio. We have a series of services and products we can provide our customers to help them get their data into the cloud, and we break it into three categories. We've talked about online transfer, like DataSync, and offline, like Snowball. On the online data transfer side we have a couple of other services as well. One is CloudEndure, which we'll talk about in a few moments: it gives you the ability to do a live migration of virtual machines; hundreds or thousands of virtual machines can be migrated or backed up to AWS. CloudEndure is an acquisition that AWS made earlier this year, and it's now in the AWS family of products. There's also AWS Transfer for SFTP, which is kind of a niche play in that it's just for SFTP workloads, but it gives you a way to take SFTP data, which is often financial or very transactional data, and move it securely into an S3 bucket. Then, on Brian's side of the house, the offline data transfer side, we have Snowball, and we also have Snowball Edge, a fairly new release that gives you the ability to do compute at the edge: Snowball Edge isn't just disks that give you storage, it gives you intelligence, some compute, to run Lambda scripts, say, or do some pre-processing on data. And then there's the associated category we refer to as hybrid cloud storage: services like Storage Gateway, which, once the data is in the cloud, give you the ability to access and work with that data. Paul, you work a lot with Storage Gateway, and we'll talk about that.

So moving ahead, let's get into some of these use cases. We wanted this to be a conversation, which is why we're doing this format for the webinar, and we brought in the experts to have that conversation. Our first use case, and Brian, I'm going to prep you here, comes from an actual customer who needed to retire a remote site, I think up in Alaska. They had 75 terabytes of machine data and log files that they needed to archive, and they wanted to get it into a combination of S3 and maybe Glacier Deep Archive, since some of it was cold storage. Brian, I don't want to give away the punchline, but maybe you can take it from there.

Thanks. We see this a lot, especially in the industrial sector and the defense sector,
and even, to a certain degree, within software and internet companies. We talked about Snowball: the snow family of services was really built to solve this problem, and the problem is a very limited network link and a fairly large amount of storage that would take a long time to move off that remote site into AWS. In this case, 75 terabytes is quite a bit, and the satellite link at that remote site is limited, so it's a good fit for the snow family of services.

When we built the snow family, we really had a couple of tenets. The first was solving exactly this problem: moving large amounts of data over limited, or even non-existent, network links, which does occasionally happen. So we built a device, and I'll go through the different variants the snow family provides, but we built it to be a ruggedized device with, as Chris mentioned, compute and storage, as well as native AWS services, to let our customers use their AWS tooling very easily against their on-premises requirements.

Starting with Snowball: we have two variants, 50 terabytes and 80 terabytes, aimed primarily at data transfer. We also have Snowball Edge, which comes in multiple variants: one called Storage Optimized, which is a hundred terabytes with some limited compute, and a Compute Optimized, which comes with double the compute and memory as well as 42 terabytes of storage; there's also a second variant of Compute Optimized with a GPU. As I said, Snowball Edge can really handle both data transfer and edge compute. The third part of the portfolio is Snowmobile, which is aimed at handling up to 100 petabytes on a truck, to offload that much data out of your data center and bring it into AWS. So those are the three primary parts of the family.

I'd like to comment on all the variants, Snowball, Snowball Edge, and Snowmobile: all of them were built with security as a primary tenet. With Snowball and Snowball Edge, not only do we have data protection with multiple layers of encryption using the AWS Key Management Service, but we also have tamper-detection mechanisms built in, from tamper-evident labels and chain-of-custody tracking to a Trusted Platform Module (TPM) embedded in the chipset itself, to detect any kind of tampering. That's key. In addition, we built the device to be simple for you to use as a customer. One way that manifests is the E Ink display on the device: you don't have to worry about calling FedEx, UPS, or USPS to pick up the device and ship it back, and all the logistics that go with that. From the minute you put your order in, the device shows up on site. You use it, whether for edge compute and pre-processing, using AWS-native services such as Lambda, Greengrass, EC2, S3, and EBS volumes, or just for transfer; once you're done, you turn it off, the E Ink display updates with the return label, and you ship it right back. We take care of everything else. The point is that we're really trying to take the undifferentiated heavy lifting off the customer's plate.
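As an editorial aside: ordering a device programmatically looks roughly like the sketch below (you can also do it all from the console, as Chris notes later). The bucket, address ID, KMS key, and role ARNs are hypothetical placeholders you would create first; treat the parameter choices as assumptions, not a definitive recipe:

```python
import boto3

snowball = boto3.client("snowball")

# All IDs/ARNs below are hypothetical placeholders: create the shipping
# address, KMS key, and IAM role (with Snowball's trust policy) beforehand.
response = snowball.create_job(
    JobType="IMPORT",  # bring on-premises data into S3
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-archive-bucket"}]},
    Description="Retire remote site: 75 TB of machine logs",
    AddressId="ADID12345678-1234-1234-1234-123456789012",
    KmsKeyARN="arn:aws:kms:us-west-2:111122223333:key/example-key-id",
    RoleARN="arn:aws:iam::111122223333:role/snowball-import-role",
    SnowballType="EDGE_S",                 # Snowball Edge Storage Optimized
    SnowballCapacityPreference="T100",     # 100 TB variant
    ShippingOption="SECOND_DAY",
)
print("Snowball job created:", response["JobId"])
```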
One last comment I'll bring up is that the device was also built to be highly ruggedized, to handle scenarios like the remote site you brought up. It can handle dust, for environments like the desert or elsewhere; temperature, from heat to cold; water, as it's water resistant; and vibration. We've even done blast testing on the device and been able to operate it afterwards. So these are key offline migration and edge compute capabilities for disconnected scenarios.

I don't want to steal any of your thunder, Brian, but a couple of the coolest demos I've ever seen at AWS involved Snowball. Number one, we had an SA do a demo where I think he kicked the Snowball off the stage, and we actually had to show that the Snowball could survive an explosion: that's the blast test you mentioned; they actually blow it up, and there's video out there, so YouTube fans, you can go check that out. Thanks, Brian, I appreciate the walkthrough of Snowball.

So let's go into another use case now, and Paul, I'm going to shift and look at you. This one is an application migration, and the scenario is again derived from actual customers: the migration of an analytics application that needed to be moved along with its data, and it needed to move into Amazon EFS, the Elastic File System. They basically needed to run this application in the cloud, and it's about a petabyte of files. So maybe we can talk about what the solution would be there; since I'm looking at you, people can probably guess the outcome in play here.

Yeah, we see a lot of customers trying to do this with DataSync. They have an application that's running; the dataset is active, it's being modified, and they maybe don't even have a good handle on how the application is modifying that dataset. It's sat on a filer, maybe it's been on that filer for a while, and they're super nervous to move it, but they know they want to move the data, and the application, to AWS. The first step there is to seed the data into the cloud: let's move the data, and then we can follow up by moving the application. With DataSync, you can move the initial dataset while the application continues to work on the primary on-premises copy. DataSync has an incremental-forever model: it will sync the data over, then keep syncing changes from the on-premises source filer into AWS. At some point you can stand up an instance of your application in the cloud, point it at that same dataset, and maybe do some migration testing. Then, at the point of cutover, what our customers will often do is quiesce the application on premises and do one final sync, at which point DataSync transfers just the changes; then they stand up the application in the cloud and point it at the dataset. At that point the application has effectively moved: the data moved ahead of the event, and then the application moved.
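A hedged sketch of that flow with boto3 follows. Everything here is an illustrative assumption: the hostname, ARNs, and names are placeholders, and a DataSync agent VM is presumed to be already deployed and activated on premises. The same task is run for the initial seed, each incremental pass, and the final post-quiesce sync:

```python
import boto3

datasync = boto3.client("datasync")

# Source: an on-premises NFS filer, reached through an already-activated
# DataSync agent (the hostname and agent ARN are placeholders).
src = datasync.create_location_nfs(
    ServerHostname="filer.corp.example.com",
    Subdirectory="/export/analytics",
    OnPremConfig={"AgentArns": [
        "arn:aws:datasync:us-west-2:111122223333:agent/agent-0example"
    ]},
)

# Destination: an EFS file system with mount targets in this subnet/SG.
dst = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-west-2:111122223333:file-system/fs-0example",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-west-2:111122223333:subnet/subnet-0example",
        "SecurityGroupArns": [
            "arn:aws:ec2:us-west-2:111122223333:security-group/sg-0example"
        ],
    },
)

task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="analytics-efs-migration",
)

# Run the same task for the initial seed, each incremental pass, and the
# final sync after the on-premises application is quiesced.
execution = datasync.start_task_execution(TaskArn=task["TaskArn"])
print("Execution started:", execution["TaskExecutionArn"])
```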
Because you're talking about a petabyte of data, it's obviously going to take a while to move. DataSync will move data at up to line speed, up to 10 gigabits per second if you have that connection, which is about a hundred terabytes a day; so if you've got a lot of network, we could move that petabyte in about ten days. Obviously, if you have less network, it will move data more slowly, and if you're sharing your network, we allow you to throttle DataSync back so it plays nicely with other applications on your network. So there are a lot of options for how to use that scarce network resource to get the data into the cloud.

Just to interject for a second: I think that's really important, because a lot of times the answer to the question "how much bandwidth do you have?" is "we have a ten-gig link" or "we have a one-gig link." Maybe not everyone has ten gig, but the real question is how much is available: certainly on those ten-gig links, customers put other services and other applications, so it's really about understanding that. If, realistically, there are five gigs available, then instead of taking ten days it takes twenty.
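For reference, the throttle Paul mentions is a task option; here is a minimal hypothetical sketch (the task ARN is a placeholder):

```python
import boto3

datasync = boto3.client("datasync")

# Cap an existing task at roughly 5 Gbps so it shares a 10 Gbps link
# politely; a value of -1 removes the cap and uses whatever is available.
five_gbps_in_bytes_per_sec = int(5e9 / 8)  # 625,000,000
datasync.update_task(
    TaskArn="arn:aws:datasync:us-west-2:111122223333:task/task-0example",
    Options={"BytesPerSecond": five_gbps_in_bytes_per_sec},
)
```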
And again, we see customers at all stages. We see customers take multiple ten-gig links, bond them together, and want us to just blast the data as fast as possible, and DataSync will do that too. It really comes down to one of those early questions we asked: how much infrastructure do you have, and how much of it do you want to use for this data movement? If the answer is zero because it's all full, we're back on the Snowball route; if you have some network and you want to trickle the data over, this incremental model using DataSync works with that.

Right, and the point is, you guys are sitting across the table from each other, but you're not competing: the services are complementary, and we're actually going to talk about a use case in a few minutes where you could apply Snowball for some of the data and DataSync for the rest. We see that a lot. So let's move on to talking about DataSync itself. Paul, can you walk through what DataSync is, for people who may not know?

Yeah. Again, it goes back to the core use case of just moving data. A lot of customers look at this problem and either view it as intractable, and it effectively roadblocks their migration to the cloud, or they look at it and think it's a really easy problem: "I'll just script something up, throw something together, and that will move my data." For many customers, both those who view the problem as intractable and those who think it's easy quickly realize that it's actually a pretty hard problem: to fill the network efficiently, to move the data and track changes, to ensure the data moves securely and with integrity, so it's valid data with no bit flips and nothing modified, and to ensure it lands in the cloud in a cloud-native way, one that works with all of the cloud services you want to use, both from a storage standpoint, S3 and EFS for example, and with the other models: things like IAM and VPCs on the security side, CloudTrail logging so you know who's accessing the data, and CloudWatch monitoring so you can watch the end-to-end process. So we built DataSync as a cloud-first, cloud-integrated solution to help customers move their data from wherever it resides today to wherever they want it.

At launch we support NFS sources and destinations, S3 buckets as a source or destination, and EFS file systems as a source or destination, in any pairing, so you can shuttle data both into the cloud and out of it, from on premises to AWS and back. Again, we took a lot of the complexity, the undifferentiated heavy lifting of moving data that Brian talked about, and encapsulated it within the service. Things like security come by default: we secure data on the wire, and we never store data in our service; it's only ever in your source or your destination. We treat data integrity as job number two, right behind security: we do multiple levels of data validation as the data moves through the service, at rest on either end, in flight, and as we're writing it, and that all comes as part of the service. We understand how to map files to objects and back; a lot of customers use both object and file storage, so we maintain metadata across that transition. And we built it to move data quickly: the protocols we usually access data with, whether the S3 API or NFS, are pretty efficient on a local area network but pretty inefficient over a wide area network, so we built our own protocol that moves data at network speeds while maintaining all of this validation and security along the way. All of the in-cloud machinery needed to receive the data and manage it efficiently into or out of the cloud services is taken care of as part of the service, so you have no in-cloud infrastructure to run, and the service is driven through the AWS console or the CLI. From an ease-of-use standpoint, you basically just interact with a service: "data transfer as a service" is the bumper sticker, I guess.

That's really important, because there are probably people watching this who use the console, and they could go out today, fire up DataSync, and start playing with it and looking at moving their own workloads. Likewise with Snowball: you can order a Snowball through the console, as you mentioned.

So guys, thanks for that. Let's move into another use case, and in this case I think you know where the punch line is going: we've talked about Snowball, we've talked about DataSync, so guess what, I get to talk about CloudEndure. We have this example of a VMware farm, in this case let's say 700 VMs and about 250 terabytes of data, and the customer is looking to shut down their data center. One of the things to say about CloudEndure is that it doesn't need to be used just for migration; it can also be a disaster recovery platform. You can fire up CloudEndure for backup, giving you a disaster recovery mechanism, in which case CloudEndure goes out and essentially creates an area where your VMs are all backed up; then, when you need it, when you shut down your data center or have an emergency or a disaster, CloudEndure gives you a very quick way to recover. A couple of things about CloudEndure when doing this kind of VM migration: the VMs actually get moved to EC2, and it's the same question of how much network capacity you have; a ten-gig link is certainly ample, but it can work very efficiently over lesser bandwidth as needed.
The way CloudEndure works is that it's very flexible, and it tends to automate the process of doing these VM migrations. It's very reliable: it sets up a staging area where your VMs are staged, so when you need to move to production, or you need to recover from a disaster, you have a very quick way to do it. It's also very non-disruptive to production workloads: it happens in the background, it installs agents on your existing VMs, and OS support is broad throughout the service. And probably one of the best things about the service is that now that AWS has bought CloudEndure, we've been able to make it available at no cost to our customers and partners, so there's really no reason not to take a look at CloudEndure if you have virtual environments where you want to do mass migrations or live migrations.

So let's talk about an additional use case, and this one is for both of you. We've been through Snowball, DataSync, and CloudEndure, and now we're going to talk collectively, so you can see the flow here. Let's talk about hybrid data processing flows. What does that mean? It's actually very common; we see it a lot in life sciences, and you can see it in other industries too, but life sciences is the one that occurs to me. The idea is that you have data you want to process in AWS: maybe it's a genomics company that needs to take a lot of data, fire up compute instances in the cloud, take advantage of that processing power, run analytics on that data, maybe import it into a database, whatever they need to do. Brian, I'll go back to you quickly here: how would you look at a situation like this life sciences customer, and how would Snowball play into moving this research data in?

Yeah, we've certainly talked to quite a few life sciences customers, and where Snowball tends to fit, at least in my experience, starts with this first use case: we have archived data that isn't being used very often, it's quite cold, and I want to use my active network links for active processing and active data. That's a very common theme. In genomics there's data that was collected many, many years ago, at lower resolution and lower fidelity, and in that case we see a scenario where they want to use Snowball to move the data offline and put it into a lower-cost tier of S3, such as Glacier, or even Glacier Deep Archive for certain use cases, depending on how old that data is, because this science has been going on for a number of decades now. So that's a common theme.
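As an aside for readers, here are two hedged ways that tiering is commonly expressed in code: writing objects directly into a cold storage class, or attaching a lifecycle rule so data that lands in S3 Standard (for example, from a Snowball import) transitions automatically. The bucket name, key, and prefix are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Option 1: write an object straight into Glacier Deep Archive.
s3.put_object(
    Bucket="example-genomics-archive",
    Key="raw/2009/run-000123.bam",
    Body=b"...",  # placeholder payload
    StorageClass="DEEP_ARCHIVE",
)

# Option 2: transition everything under a prefix to Deep Archive after
# it lands in S3 Standard (e.g., imported by a Snowball job).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-genomics-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-old-sequencing-runs",
            "Filter": {"Prefix": "raw/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```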
That makes a lot of sense, and as we said, life sciences is not the only vertical where you might have data processing needs. But Paul, maybe you can stay with that life sciences customer: say they have ample bandwidth, and instead of archive data they're holding for compliance reasons, where you'd use Snowball as Brian said, maybe they need to get data up there much more quickly, in much more real time, for processing.

Yeah, life sciences is a good example where you're generating data, maybe on lab equipment, but you really want to process it in the cloud: these customers have data pipelines that run in the cloud and do analysis of this data. It's sort of a dual play, because sometimes that data is coming off equipment in the lab and they want to do some pre-processing on it, which is maybe in the purview of Snowball Edge: they might do some aggregation and some quality-control checks, and then they want to move it to the cloud. And as I mentioned earlier with time to value, that data is really valueless after it comes off the sequencer until they can analyze it through that data pipeline, and that's where DataSync comes in: helping them move the data efficiently, in an automated way, from on premises into the cloud. DataSync will move that data, and as I mentioned, it takes a cloud-first view of the world, so we'll store it in whichever storage service you want; for many customers running big data workloads, that's S3. We're integrating with things like CloudWatch Events, so once the data lands in S3 we'll fire off a CloudWatch event, which you can then use to trigger your downstream data pipeline. Again, we see the same pattern of data being generated in one place, typically on premises, and processed in the cloud. I mentioned manufacturing facilities; we also see it in, say, chip fabrication, where you're bringing high-resolution data off the chip fab, you want to process it in the cloud, and you bring tuning parameters back to the fabrication facility to increase the yield. The speed with which you can do that round trip matters: the faster you get the result back, the higher the yield, the more money you save, and the less bad product you build. This idea of round-tripping on active data is where DataSync fits in this particular model; at that point it's the need for speed: how quickly can we move this to get to time to value?
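One simple way to wire up that kind of trigger, sketched here with hypothetical names, is an S3 event notification that invokes a Lambda function when new objects land (DataSync also emits CloudWatch events, which work similarly; this bucket-side variant is just one option). The Lambda function must already grant S3 permission to invoke it:

```python
import boto3

s3 = boto3.client("s3")

# Invoke a pre-existing Lambda function (which must already allow S3 to
# call it) whenever a new object lands under incoming/, e.g. a file that
# DataSync just finished transferring.
s3.put_bucket_notification_configuration(
    Bucket="example-sequencer-landing",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-west-2:111122223333:function:start-genomics-pipeline",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "prefix", "Value": "incoming/"},
            ]}},
        }]
    },
)
```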
So DataSync is very good for active workloads. There's another component you kind of alluded to, Paul: you move the data into the cloud, into say an S3 bucket, and now you may want to bring it back down, because maybe there's an application, or a user, that needs access to it from on premises. Can you talk about that, and specifically about Storage Gateway providing that access?

Yeah. While DataSync allows you to move data around, Storage Gateway allows you to access data in the cloud. In particular, file gateway is one of the modes of Storage Gateway we provide today, and it allows you to view an S3 bucket as a set of files, from a file-based application. In that model, you can use something like DataSync or Snowball to get the data into the cloud, and then use something like file gateway to access that data, stored in S3, back on premises. We see a lot of customers do both active and archive usage, so you can think of file gateway as an access layer to an active archive: a huge archive of data sits in S3, and through a small VM that you deploy on premises, you have access to all of that data as if it were a set of files local to you. For active-archive workloads that want to be able to view the archive, you don't have to have it locked away on tapes that have to be pulled, put into a tape library, and read back, where it can take days or weeks to get the data. If everything's in S3, it's immediately accessible through something like Storage Gateway running as a file gateway.

Right, and the corollary is that if you move the data into an S3 bucket with Snowball, it's also immediately accessible the same way, so together the services allow you to both transfer and access: it becomes a very complementary way to use them. You and Brian talked about use cases for Snowball and use cases for DataSync, and file gateway gives you the ability to access cloud-resident data from on-premises systems.

So I want to talk about one more use case, and in this case, I don't know if there will be a debate, but you can share your opinions. The idea here is a customer who is looking to decommission a NAS. They have 200 terabytes of files, and they want to place that data in EFS, so it stays a file system: they're decommissioning the NAS, and the applications will go against the data hosted in EFS. They have about five gigabits of bandwidth available, and they want that data to be available in the cloud. With that, Paul, you go first: what do you think about that use case?

To me, again, it's a customer that has bandwidth to spend, the data is coming from a connected data center environment, and they're targeting EFS, which we're integrated with: it's DataSync. DataSync has VPC access to EFS file systems, and it's point-and-click in the console: select your EFS file system, select your NFS file share on premises, click go, and we'll copy that data over. We have the incremental-forever model if there's an active workload, but in this case they're retiring the NAS box, so it would likely be a one-shot copy. The customer can basically hit go, watch the CloudWatch metrics, and watch that data fly over that five-gigabit network.

And I think five gigs, in this case, for 200 terabytes, is a relatively ample amount of bandwidth. So Brian, let's change it up a little bit: I don't have five gigabits, maybe I have a gig, or a few hundred megs. Does that change how you think about it?

Yeah, I think so, and it really comes down to the time window. If you look at the time frame for this use case, let's say 30 days, then even if we go down to one gig, we'll still be able to hit that window with the DataSync option. The question then becomes: how steady is that bandwidth availability? Is it really bursty? Is it really available? And does the customer want to use that bandwidth for this? A lot of times, and those of you watching this tech talk will understand, there's bureaucracy to overcome to get access to that network bandwidth; sometimes it's readily accessible, right at your fingertips, and sometimes it takes a little bit of work. So I certainly run into situations where our online data transfer options, with DataSync and even the hybrid use cases, are by far the easiest fit for the customer, but other limitations come up, whether accessibility or even just perceived cost, because there's a shared cost associated with that link, and that needs to be untangled.

The other aspect I should bring up here is that Snowball Edge is well suited to handle this too: effectively, three of our Snowball Edge Storage Optimized devices would be a fairly low level of effort on premises to make this happen. But the data from that path really ends up in S3, so it then has to be transferred from S3 to EFS, which again is not overly complicated, but it takes time.
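That second hop can itself be a DataSync task with S3 as the source and EFS as the destination; a compact sketch under the same assumptions as before (all ARNs are hypothetical placeholders, and the S3 location needs a bucket-access role):

```python
import boto3

datasync = boto3.client("datasync")

# Source: the bucket the Snowball import landed in.
src = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-nas-offload",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-access"},
)

# Destination: the EFS file system the applications will mount.
dst = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-west-2:111122223333:file-system/fs-0example",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-west-2:111122223333:subnet/subnet-0example",
        "SecurityGroupArns": [
            "arn:aws:ec2:us-west-2:111122223333:security-group/sg-0example"
        ],
    },
)

task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="s3-to-efs-second-hop",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```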
So I would lay that out for the customer and say: if that trade-off makes sense to you, you have these options. And I think that's the best part about all of us sitting here like this: we have a lot of tools, and sometimes those tools overlap in use cases, but the key is that we want to give our customers all the options they can use to solve these problem sets.

Right, and I think that while there's overlap at the use-case level, once you get down to the individual customer and the nuances of their environment, the business environment or the technical realm they're working in, the right service falls out, the one that fits their specific situation. And those variables change: they may say they want to move the data one way, and then they realize what's entailed, whether it's bureaucracy, or cost, or maybe some operational things on their own side, maybe some environmental issues, and consequently you can go faster or slower. That's one of the points of this: there's no right or wrong answer to any of these use cases. We could go through 50 use cases, we could go through 500, but we can't cover five hundred use cases in an hour; the idea is that we just wanted to get people thinking.

For those of you listening to the webinar, we are just about to wrap up, but before we do, I wanted to say you can go to aws.amazon.com and check out DataSync, Snowball, CloudEndure, Storage Gateway, all the services we've talked about today. Another great opportunity that some of you may be aware of is our AWS re:Invent show, which is being held this year starting December 2nd; that seems far away, but it's right around the corner, trust us, we've already started planning for it, and it's in Las Vegas, Nevada. So thank you very much for your time; Brian, Paul, thank you very much. I'm Chris Rogers; thanks for joining, and we appreciate your time.
Info
Channel: AWS Online Tech Talks
Views: 6,669
Keywords: Data Migration, Cloud Data Migration, Data Transfer, Amazon S3, Ingest, CloudEndure, DataSync, Amazon EFS, AWS, Webinar, Cloud Computing, Amazon Web Services
Id: 6nKH0ceFkiQ
Length: 37min 20sec (2240 seconds)
Published: Wed Aug 21 2019