AWS re:Invent 2018: [REPEAT 1] What's New in Amazon Aurora (DAT204-R1)

Video Statistics and Information

Captions
All right, welcome to re:Invent, and welcome to our session on what's new in Amazon Aurora. I'm Debanjan Saha, and I run the relational database services in AWS; Aurora is one of the databases we support. Before I start on what's new in Aurora: how many of you have heard about Aurora? All right, quite a few of you. And how many of you are using Aurora? OK, that's good to hear.

Before I get into Aurora, let me say a few things about relational database services in AWS. As you know, we support a number of relational databases, Aurora being one of them, and across the different talks on our relational database services you will hear about three things: choice, value, and innovation. Let me double-click a little on what we mean by that.

If you look at RDS, we support seven different databases. We support open-source databases like MySQL, PostgreSQL, and MariaDB; we support commercial databases like SQL Server and Oracle; and we also support Aurora, which is our own cloud-optimized database and what I'm going to spend most of our time on. They all run on a managed-services platform, meaning a lot of the database management work that DBAs do, we make easier to execute. High availability, security management, and disaster recovery, for example, are all automated in the RDS management platform, and you can use that across all the databases we support.

When we talk about value, let me explain it in terms of Aurora. This started about five or six years back, when we first began working on Aurora. We had customers using commercial databases; they liked the speed and availability, but those databases are expensive and come with complicated license management. We also supported the open-source databases MySQL and PostgreSQL at the time; they are of course simple and cost-effective, but not as performant, as highly available, or as scalable as the commercial databases. So our customers wanted us to build a database optimized for the cloud that is as simple and cost-effective as open-source databases but gives the performance and availability of commercial databases, and that's what Aurora is. It's a managed database with the speed and availability of high-end commercial databases at the price point of open source. That's the value we bring, and that's what we are going to get into today. By the way, we have two flavors of Aurora: a MySQL-compatible version, fully compatible with MySQL 5.6 and 5.7, and a PostgreSQL-compatible version, compatible with several versions of PostgreSQL. So if you have applications written against MySQL or PostgreSQL, they can move without any change to the application.

In terms of innovation, we are innovating very rapidly, as you will see from the new features and functions we keep coming up with. There are three areas I want to point out. One is the architecture: it's a scale-out, distributed architecture, which we take advantage of in building various features, and I'm going to give some examples of that.
The second is that it's also a service-oriented implementation, meaning we take advantage of many of the web services we have in AWS, both in building the database and in integrating with those services, so people building applications on top of Aurora can build them quickly by taking advantage of those integrations. The third thing, which I already touched on, is that it's a fully managed service, so a lot of the work people would otherwise have to do is automated in Aurora. Let's double-click a little on each of these.

First, the scale-out distributed architecture. Relational databases have been around for a long time, and really not much has changed: it's a monolithic architecture with different layers of the stack, a SQL layer, a transaction-processing layer, a caching layer, a logging layer, and a storage layer. What we did in Aurora is take the logging layer and the storage layer and create a distributed, log-structured storage architecture that is purpose-built for the database. The storage layer of Aurora is spread across three availability zones, which are essentially separate data centers within the same region, in the same metro area. We shard the data, meaning we slice it into multiple pieces and spread it across hundreds, sometimes thousands, of storage nodes (the small boxes you see on the slide), and we make six copies of each data element for high availability; I'll get into the details of that later. By the way, the interface between the rest of the stack and the storage layer is not a traditional storage protocol like iSCSI, Fibre Channel, or NFS; it is a purpose-built interface based on the redo log. As a result we send very little data, in terms of total volume and network packets, from the engine to the storage layer, which is one of the reasons our performance is so much better.

Now, how do we leverage AWS services? Some quick examples. Lambda is integrated into Aurora: you can invoke a Lambda function from stored procedures or triggers in Aurora, which makes it very easy, for example, to execute a function for every new update; a lot of people do that. S3 we use for backup: you don't have to schedule any backups in Aurora, we back up continuously all the time, and that was actually quite simple to implement on this architecture. Similarly, we are integrated with IAM, the identity and access management service, so the various levels of database access are easy to manage through that integration. And we upload all kinds of logs into CloudWatch Logs so you can do various types of processing on them. There are many other examples; these are just some of the integrations a lot of people use with Aurora.
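The talk doesn't show code, but as a rough sketch of what the IAM integration looks like from an application's point of view, the snippet below (Python with boto3; the endpoint, user, driver, and CA-bundle path are all assumptions, not from the talk) requests a short-lived IAM authentication token and uses it in place of a password:

```python
import boto3
import pymysql  # assumption: any MySQL-compatible driver works; pymysql is just for illustration

# Hypothetical cluster endpoint and database user.
HOST = "mydb.cluster-abc123xyz.us-east-1.rds.amazonaws.com"
USER = "app_user"

rds = boto3.client("rds", region_name="us-east-1")

# Ask RDS to mint a short-lived authentication token for this host/port/user.
# IAM database authentication must be enabled on the cluster, and the database
# user must have been created to use the AWS authentication plugin.
token = rds.generate_db_auth_token(DBHostname=HOST, Port=3306, DBUsername=USER)

# The token is used in place of a password; TLS is required for IAM auth.
connection = pymysql.connect(
    host=HOST,
    port=3306,
    user=USER,
    password=token,
    database="mydb",
    ssl={"ca": "rds-combined-ca-bundle.pem"},  # path to the RDS CA bundle (assumption)
)
print(connection.get_server_info())
```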
Moving on to automated administrative tasks: everything you see on the right-hand side is handled by the RDS platform, and Aurora is of course part of that platform. Automatic monitoring, high availability, and failover are taken care of. For security management, a lot of that function, including patching and various types of compliance and auditing, we take care of. Advanced monitoring and maintenance functions, for example hardware maintenance or scaling up your storage or your software, are API-driven and sometimes automated, depending on what type of scale-up or scale-down you want. Because of that, a lot of the boring things developers and DBAs have to do, you don't have to do anymore, so you can focus on your business and on what really matters: schema design, query construction, query optimization, and so on.

So how are we doing? Thanks to all of you, Aurora is still the fastest-growing service in the history of AWS. A lot of enterprise customers are using Aurora today; the ones shown here are a subset of our public references. If you look at why people are using Aurora, there are really two types of users. One group used to use open-source databases. They use Aurora because it's much faster: 5x faster than MySQL and 3x faster than PostgreSQL. It has better availability and durability, and, believe it or not, it's actually cheaper than what you can get with open-source databases, because it is so much faster you can use a smaller database node, which saves you money; I'm also going to talk about Serverless, which is very efficient at saving money. And migration is very easy: if you are already running on MySQL or PostgreSQL, a couple of clicks and you can move to Aurora. Commercial database users are moving for a slightly different set of reasons. Aurora is literally one-tenth the cost of some of the commercial engines, there are no licenses to manage, it is integrated with the cloud ecosystem, it is comparable in terms of performance and cost and getting better every day, and although migration and tooling are not as simple as moving from MySQL or PostgreSQL, we provide a lot of tools and help so people can move from commercial databases to Aurora.

Now let's get into the technical aspects of what we have been doing, including some of the new things from the past year; to make sure everyone is on the same page I'm also going to cover some things we already had, so there is context. As I said, Aurora is 5x faster than MySQL and 3x faster than PostgreSQL. This is of course based on benchmarks, in this case sysbench. We compared against MySQL 5.6, 5.7, and 5.5, and we have two versions of Aurora, compatible with MySQL 5.6 and 5.7; in both reads and writes we do much, much better, and we continue to maintain that advantage. And it's not just throughput. If you look at data-loading performance, this chart is from PostgreSQL: loading the data, including vacuuming and building the indexes, can be up to 2x faster, and the same number for MySQL is 2.5x faster. So it becomes a lot easier to load data and then process it.
One thing which is really interesting is not just the throughput but the consistency of the throughput. This is Aurora PostgreSQL under heavy load: the bottom line in purple is the performance variability you see in Aurora PostgreSQL as it works under heavy load, and the line in blue on top is what you see in community PostgreSQL. If you compare the variance, the standard deviation is about 10x better, 10x more consistent, than vanilla PostgreSQL, and it's a similar story for MySQL: Aurora MySQL performs much more consistently than community MySQL.

We have of course not been standing still. Besides all the software improvements, we have also provided new hardware. We started with the 8xlarge as the largest instance, with 32 vCPUs and roughly 250 gigabytes of memory; that is what gave us one hundred thousand writes per second. We now support the 16xlarge, which has 64 cores and almost half a terabyte of memory, and with it we get about 200,000 writes per second, roughly a 2x improvement over what we started with. And we are going to a 24xlarge very soon, so you will have even more headroom in terms of performance. For reads, we started at 500,000 and we are now at 700,000 reads per second. The growth there has not been as dramatic because we are now network-limited; with the 24xlarge you will get a faster network and can expect a much better improvement in raw read throughput.

Let's talk a little about how we get that. There are of course a lot of tricks we play, but the main difference between PostgreSQL and Aurora PostgreSQL, or MySQL and Aurora MySQL, is that we do less work and we do it more efficiently. By less work I mean we do fewer I/Os, and fewer I/Os mean fewer network transmissions; we also cache prior results and offload a lot of work from the database engine to the storage nodes. By more efficient I mean we do a lot of things asynchronously, we reduce latency on the path through pure engineering, we use a lot of lock-free data structures, which reduces contention in the code path, and we do a lot of batch processing.

A quick example is the I/O profile of Aurora MySQL versus MySQL on an EBS volume, or any other storage volume for that matter. On the left-hand side there are many different types of writes going on: log writes, binlog writes, data writes, double-write buffer writes, and FRM files that have to be written. On the Aurora side, on the right, there is only one type of write: because of the architecture, the only thing we do is stream redo log records from the engine to the storage nodes, and everything else is processed in the storage nodes themselves. Because of that, although Aurora makes six copies and therefore has to write six times, as opposed to once on the MySQL side, we still reduce the number of writes for the same amount of data processed. On the left-hand side, over a thirty-minute period, community MySQL processed 780K transactions at an average of about 7.4 I/Os per transaction. The corresponding number on the right-hand side for Aurora, even after the 6x amplification from the six copies, is about 0.95 I/Os per transaction, which is 7.7x less than on the MySQL side. That smaller number of I/Os and network transmissions is what drives the performance numbers I was talking about.
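To make the write path above concrete: only redo log records leave the engine, each record is shipped to six storage segments across three availability zones, and, as the availability section later in the talk explains, a write is considered stable once four of the six copies acknowledge. Here is a minimal illustrative sketch of that quorum idea in Python; it is not Aurora's actual implementation, and the latencies are made up.

```python
import concurrent.futures
import random
import time

TOTAL_COPIES = 6   # two copies in each of three Availability Zones
WRITE_QUORUM = 4   # acknowledgements required before the write is "stable"

def send_redo_record(segment_id: int, record: bytes) -> int:
    """Simulate shipping one redo log record to a storage segment."""
    time.sleep(random.uniform(0.001, 0.010))  # pretend network + SSD latency
    return segment_id

def quorum_write(record: bytes) -> bool:
    """Return True once WRITE_QUORUM of TOTAL_COPIES copies have acknowledged."""
    acks = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=TOTAL_COPIES) as pool:
        futures = [pool.submit(send_redo_record, s, record) for s in range(TOTAL_COPIES)]
        for _ in concurrent.futures.as_completed(futures):
            acks += 1
            if acks >= WRITE_QUORUM:
                # Durable; lagging copies catch up via peer-to-peer gossip.
                return True
    return False

if __name__ == "__main__":
    print("write stable:", quorum_write(b"redo-log-record"))
```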
There are other things we have done as well, for example with lock management. In vanilla MySQL there is a big mutex, the big lock you see on the left-hand side, in front of the lock manager, so every transaction that has to get a lock from the lock manager gets serialized by that mutex. On the right-hand side, we have completely rewritten the lock manager as a lock-free implementation, and as a result multiple transactions can be in the lock manager at the same time, increasing concurrency and parallelism in Aurora MySQL; that is one of the things behind the 5x performance improvement.

We've talked about throughput; the other thing that matters is latency, and we started working on latency a year back. Last year we did asynchronous key-based prefetch, where we prefetch data in the background based on the keys we know we are going to access. As you can see on the right-hand side, asynchronous key-based prefetch gives a pretty dramatic improvement in latency for a decision-support benchmark, similar to the TPC-H benchmark. The numbers shown are the improvements for the various queries, query 1 through query 22: the biggest improvement we saw was 14.5x, values close to 1 mean we didn't see much improvement, and there are many queries where the improvement was significant. The other thing we did is batched scans. In MySQL, rows are fetched one at a time, and because of that there is a lot of I/O, disk access, and network access going on, and a lot of contention. In our case we do batched scans inside the memory buffer, and as a result we get another pretty big improvement in query latency; here we used the same decision-support benchmark and saw a significant improvement from batched scans.

What we realized is that those are good improvements, but we could do much better, again by taking advantage of the architecture we have. So last year we started working on what we call parallel query. Now, a lot of databases claim they do parallel query processing; what they really mean is that they process the query in a multi-threaded way inside the database engine itself. What we really do is push the query processing down to the storage nodes. Remember, we have hundreds of storage nodes behind a storage volume; each of those storage nodes is a very powerful server with SSDs, and collectively they have thousands of cores in the storage layer. If we can push query processing down to the storage layer, we can take advantage of that massive parallelism to get really impressive performance improvements, and that's what we did with Aurora parallel query, which we released a couple of months back. If you look at the performance numbers, again on the same decision-support benchmark I showed you, the peak speedup for some of the queries is 120x; by the way, that's one hundred and twenty times, not one hundred and twenty percent, and the only way you can get that is by parallelizing at massive scale, which is exactly what we have done.
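At the time of this talk, parallel query was selected when the cluster was created. A hedged sketch of what that looks like with boto3 is below; the cluster name and credentials are placeholders, and the assumption is the Aurora MySQL 5.6-compatible engine with the parallel-query engine mode.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Hypothetical identifiers; replace with your own.
response = rds.create_db_cluster(
    DBClusterIdentifier="reporting-cluster",
    Engine="aurora",                # Aurora MySQL 5.6-compatible engine
    EngineMode="parallelquery",     # push query processing down to the storage layer
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
)
print(response["DBCluster"]["Status"])
```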
Eight of the twenty-two queries in that benchmark get more than a 10x performance improvement, and a lot of customers have been using this. This is my favorite quote, from Netflix: they were using parallel query for a workload that was doing a lot of full table scans, and by using it they reduced the query time from thirty-two minutes to three minutes. And because the processing is pushed down to the storage layer, they also reduced the size of the instance they were running on from an 8xlarge to a 2xlarge, which is a quarter of the size in terms of cores, memory, and price. So they got roughly 10x the performance while paying a quarter of what they were paying without parallel query.

OK, let's transition to the next thing I want to talk about. Performance only matters if your database is up, and availability is one area where we spend a lot of time. Just for context, you may have seen me talk about this before: we have a distributed storage layer where every piece of data is copied six times, with two copies in each of three availability zones, and each availability zone is really a separate data center in the metro area. We do quorum-based reads and writes: when we write, we write to all six copies, but we call the write stable when four of the six copies have been written to SSD. For reads we have a three-out-of-six quorum, although we don't normally need it; we typically read only one copy, because we know where the latest copy is, and we only do three-of-six reads when recovering from a failure. We also do peer-to-peer communication in the background between storage nodes: if a storage node dies or falls behind, it can catch up through peer-to-peer replication with the other storage nodes. And volumes are striped across hundreds of storage nodes, which is for performance.

The advantage of all that is that if there is a data-center failure, which rarely happens but can happen, you still have full read/write availability; nothing really shows up in terms of users experiencing any issues. In my almost four years running Aurora I have seen that only once, but it's still a good thing to have: there was a typhoon, I think in Australia, and we lost power in one of the data centers, and within one minute all the Aurora instances in Sydney were back up and running. In fact, if your head node, the engine node, wasn't in the affected availability zone, nothing happened to your database at all, because the storage layer was completely transparent to that failure.

On top of the shared storage layer we offer a master and up to fifteen read replicas, and each of the read replicas is promotable to master, meaning that if the master fails we automatically fail over to one of the read replicas. By the way, you can specify the order in which we should pick the read replicas, because if you have a read replica in the same availability zone as your application and it is still up and running, you are often better off failing over to that one. One new feature we have added is around reader endpoints: we used to have a single reader endpoint through which you could load-balance across the read replicas; now you can have multiple reader endpoints and associate different groups of read replicas behind each one. The advantage is that if you have different applications and want to associate a group of read replicas with each application, you now have the flexibility to do that.
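Custom reader endpoints are created against an existing cluster; a minimal boto3 sketch is below, with hypothetical cluster and replica names. Each endpoint gets its own DNS name, so different applications can be pointed at different groups of replicas.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Group two specific replicas (hypothetical names) behind their own endpoint.
endpoint = rds.create_db_cluster_endpoint(
    DBClusterIdentifier="orders-cluster",
    DBClusterEndpointIdentifier="reporting-readers",
    EndpointType="READER",
    StaticMembers=["orders-replica-2", "orders-replica-3"],
)
print(endpoint["Endpoint"])  # DNS name the application connects to
```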
The other interesting thing about read replicas is that we don't use logical binlog-based replication, or WAL-based replication as in PostgreSQL; we use our redo-log-based replication, which is a very lightweight protocol, and as a result the read replicas typically lag by only about 10 milliseconds.

Another advantage of our log-structured storage architecture is that crash recovery is nearly instantaneous. If you have used a monolithic database, whether commercial or open source, you know you need checkpointing, which typically happens every one to five minutes, and when you have a failure you have to apply all the logs since the last checkpoint in order to recover. Even if you checkpoint every five minutes, you have five minutes' worth of logs, and log application, especially in MySQL, is single-threaded, so it can take much more than five minutes to apply all the logs you have accumulated; your recovery time can be minutes, sometimes tens of minutes. In our case, the data is stored in pages, with the logs associated with each page kept with the page itself, and all we need to do after a crash is reset a pointer to figure out the last consistent point. We don't need to take any other action like replaying logs; logs are applied on demand when a particular page is read, and because of that recovery is very, very fast.

Now we are going a step beyond that: we are working on multi-master, which is in preview, by the way, so if you haven't tested it and want to try it out, you should; we hope to make it generally available very soon. The idea is that instead of one master and multiple readers, you have multiple masters, so if one master fails the others keep providing read/write availability and you get continuous availability. The other advantage of multi-master is write scaling, and this is one area where we are doing pretty interesting work. In the Aurora architecture, consistency management is unlike other multi-master systems such as RAC or Spanner: we do optimistic contention resolution. We assume there is no or low contention and proceed on that basis, and if there is contention we recognize it as a conflict and roll back one of the offending transactions. How do we do that? There are different levels of conflict resolution within Aurora: if the transactions originate from the same head node, the head node itself can arbitrate; if the transactions commit on the same storage node, the storage node can arbitrate; and if transactions come from two different head nodes and involve more than one storage node, then you need an external arbitrator.
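Before the walkthrough that follows, here is a minimal, purely illustrative sketch of the optimistic idea: writers proceed assuming no conflict, and an arbiter rolls back a transaction whose pages changed underneath it. This is not Aurora's actual protocol, just the shape of optimistic, page-level conflict detection.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Page:
    page_id: int
    committed_version: int = 0   # id of the last transaction that won a write on this page

@dataclass
class Transaction:
    txn_id: int
    writes: Dict[int, int] = field(default_factory=dict)  # page_id -> version it read

class StorageNodeArbiter:
    """Illustrative arbiter: a transaction commits only if every page it touched is unchanged."""

    def __init__(self, pages):
        self.pages = {p.page_id: p for p in pages}

    def try_commit(self, txn: Transaction) -> bool:
        # Optimistic check: a conflict exists if any page moved past the version we read.
        for page_id, read_version in txn.writes.items():
            if self.pages[page_id].committed_version != read_version:
                return False          # conflict: this transaction is rolled back
        for page_id in txn.writes:    # no conflict: apply the writes
            self.pages[page_id].committed_version = txn.txn_id
        return True

arbiter = StorageNodeArbiter([Page(page_id=1)])
t1 = Transaction(txn_id=1, writes={1: 0})   # "blue" master writes page 1
t2 = Transaction(txn_id=2, writes={1: 0})   # "purple" master writes the same page
print(arbiter.try_commit(t1))  # True  -> T1 wins
print(arbiter.try_commit(t2))  # False -> T2 is rolled back
```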
In the walkthrough on this slide, a blue master and a purple master are writing to two different tables, table 1 and table 2, which happen to be on two different pages on two different storage nodes. There is no conflict in this case, there is nothing to worry about, and everything proceeds as if you have a completely partitioned workload. If instead both masters, blue and purple, are writing to the same table, table 1, which is on page 1 on storage node 1, then there will be a conflict; we do a quorum-based resolution and one of them wins. In this particular case the blue master wins, so transaction T1 from the blue master succeeds and transaction T2 from the purple master is rolled back. That case is also fairly simple, because we take advantage of the storage node to do the arbitration. The more complicated case is when two transactions, T1 and T2, are both writing to page 1 and page 2, writing to both tables, and transaction 1 wins on page 1 while transaction 2 wins on page 2. There is no easy way out: two different masters are involved, two different storage nodes are involved, and a designated tiebreaker breaks the tie in situations like this. Of course your commit latency goes up, and if you have a workload where this situation is very common, performance is going to suffer, but in most of the multi-master workloads we have seen there is no or low conflict.

This next chart is from our test system, which you can try out if you are in the preview. There are four nodes in the system, and we are running sysbench on the cluster. With one node, up to the fifth minute, you get roughly twelve to fifteen thousand write transactions per second. At the fifth minute we add the second node, and the number of transactions you can handle roughly doubles, because there is only a limited amount of conflict. Then we add the third node, and so on; the scaling factor here was roughly 0.85 to 0.9, meaning every time you add a new node you add about 0.9x the capacity of a single-master system. Then, I think at the twelfth minute, there was a node failure: performance goes down, but it recovers very quickly, because we are constantly monitoring what happened and we recover that node quickly, either through a software reset or, in some cases, by replacing the node hardware itself.

Moving on to another interesting feature we released a couple of months back: Backtrack. You can think of it as a time machine for your database. Many times people say, oh my god, I accidentally deleted something, or did something I would like to undo; this is really the undo function for databases, and it's very useful. It's much faster than the traditional way of doing this, which so far has been point-in-time recovery: a backtrack takes roughly a minute or a couple of minutes, and it has been quite popular with a lot of our customers.
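Backtrack is exposed through the RDS API; a hedged boto3 sketch follows. The cluster name is a placeholder, and backtrack has to be enabled with a backtrack window when the cluster is created before this call will work.

```python
import boto3
from datetime import datetime, timedelta, timezone

rds = boto3.client("rds", region_name="us-east-1")

# Rewind a (hypothetical) cluster by 15 minutes, e.g. to undo an accidental DELETE.
rds.backtrack_db_cluster(
    DBClusterIdentifier="orders-cluster",
    BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=15),
)
```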
Let's move to the other aspect of Aurora that people really like, which is ease of use, and there are a number of features we have worked on that have been quite popular. One of them is Performance Insights. Using Performance Insights you can drill down into the performance of individual queries; you can rank queries, for example, by how much CPU they consume or how much wait time they are seeing, and you can look at the data by hour, by day, or by week. You can keep historical data for up to two years so you can go back and do performance analysis; I believe seven days of retention are free. It is very useful for monitoring your system. In this particular example you can set a target for the maximum CPU you want to tolerate and set an alert so you are notified when you go above that limit; if you look at the bump in the chart and its shading, you can tell this is CPU-bound contention, and you can figure out which query is responsible for it. It's a very powerful tool, and it's available not only for Aurora but also for MySQL, PostgreSQL, and SQL Server, with Oracle coming soon, I believe, so one tool can help you monitor the performance of all the databases you have in RDS.
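Performance Insights also has an API, so the same database-load data can be pulled programmatically. A sketch with boto3 is below; the resource identifier is a placeholder (it is the instance's DbiResourceId, not its name), and the metric and grouping shown are just one reasonable choice.

```python
import boto3
from datetime import datetime, timedelta, timezone

pi = boto3.client("pi", region_name="us-east-1")
end = datetime.now(timezone.utc)

# Average active sessions over the last hour, broken down by SQL statement.
response = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",   # DbiResourceId of the instance (placeholder)
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    PeriodInSeconds=60,
    MetricQueries=[
        {"Metric": "db.load.avg", "GroupBy": {"Group": "db.sql", "Limit": 5}},
    ],
)
for series in response["MetricList"]:
    print(series["Key"], len(series["DataPoints"]), "data points")
```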
One of our partners has integrated Performance Insights, through its APIs, with Alexa, so you can literally talk to your database; you can go to their booth and see a demo, and there is a short video of it. In the demo you ask Alexa to open Performance Insights and how the databases in Ohio are doing, and it reports no issues; you ask about database performance in Northern California for the past two days, and it tells you that on Wednesday, November 14th, the average active sessions on one instance reached 8.7, more than the vCPUs allocated to it; you ask what's causing the load, and it answers that most of it is CPU load, that a single user is responsible for over 90 percent of it, and offers to notify that user by email or text message. For full disclosure, I'm not using this myself, but you might find it useful when you are driving and want to know how your database is doing.

Now, moving on, security management is something a lot of people find very useful. We do encryption at rest, we do encryption in transit, we support VPCs, which act somewhat like a firewall, and we support a lot of compliance programs, which people find valuable because keeping data compliant is something everybody has to do. A recent addition is database activity monitoring, which is in preview and will be available soon. These are some of the compliance regimes we support; as you can see, some of them are new. We have three flavors here, Aurora MySQL, Aurora PostgreSQL, and now Aurora Serverless, and we support various compliance programs for all of them.

This is a new thing we are doing: there are many different types of audit logs that we collect, and we put those audit logs on a Kinesis stream; from there you can send them to CloudWatch, or third-party providers can ingest the logs and provide various types of database activity monitoring reports. The partners we are working with are McAfee, IBM Guardium, and Imperva, and I think some of them are here providing demos. You can also use Amazon CloudWatch: once your data is in CloudWatch there are all kinds of things you can do, for example search for specific events, monitor various metrics in your favorite dashboard, do visualizations, set alarms, and so on. It's quite useful.

We also support various ways of migrating data into Aurora. If you are using MySQL or PostgreSQL in RDS, it's very simple: we automatically ingest a snapshot and hydrate an Aurora database with that data. If you are using MySQL, Percona Server, PostgreSQL, or MariaDB on EC2 or on premises, there is an easy path as well: you can take a snapshot, put it in S3, and we can ingest it directly into Aurora. If you are using Oracle or SQL Server, and recently we have added Db2 and Cassandra, you can use our DMS and SCT tools; DMS stands for Database Migration Service and SCT for Schema Conversion Tool, and they are quite useful in moving that data into Aurora. These are some of the new things we have added to DMS and SCT. We now have step-by-step migration playbooks for Oracle and SQL Server, which a lot of people find useful; this came out of feedback from our customers. Another useful addition is workload qualification: if you have a workload and want to figure out how much work it is to move it to Aurora from Oracle or SQL Server, we provide that estimate using these tools. There is also schema conversion: Oracle and SQL Server are not exactly the same as MySQL and PostgreSQL, so we do a lot of automatic conversion of schemas, and our accuracy has now improved to about 90 percent for both. And native start point is another thing we have added, where you can use engine-native utilities, for example pg_dump and load, to seed your database and then use DMS to catch up on the remaining changes using CDC, change data capture.

The most exciting thing we have done on the management side is Aurora Serverless. This is something we released a few months back for Aurora MySQL. The idea is that you have applications whose workload is intermittent: they are on some of the time and off some of the time, or at least their workload goes up and down, and for those applications you want the database to follow the workload curve without impacting the application itself. Of course you can do that with auto scaling, but when you auto scale, the application gets impacted because application sessions get terminated; with Serverless we have completely eliminated that. What we do is have your application connect to a set of request routers; the database instance is not really there at that point. When you send your first query, we have a warm pool of instances, we spin up an instance from the warm pool and attach it to the request router endpoint, and then we automatically scale it up and down as the workload changes, taking it all the way down to zero when there is no workload. Customers pay by the minute, and a lot of people save a lot of money.
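As a sketch of how a Serverless cluster is requested (the names and capacity bounds below are placeholders): the engine mode is set to serverless, and a scaling configuration gives the capacity range and the auto-pause behavior that takes it down to zero when idle.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="intermittent-app",     # hypothetical cluster name
    Engine="aurora",                            # Aurora MySQL-compatible
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    ScalingConfiguration={
        "MinCapacity": 2,                       # Aurora capacity units (ACUs)
        "MaxCapacity": 16,
        "AutoPause": True,                      # pause (scale to zero) when idle
        "SecondsUntilAutoPause": 300,
    },
)
```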
In fact, one customer posted on Twitter after we launched Aurora Serverless that they had been paying about $8.30 per day for their Aurora MySQL database and were now paying 38 cents for Aurora Serverless. That was of course great news for customers, although my boss was a little worried it might have a revenue impact. If you look at how it works: the purple line is the workload, and the blue line is how we automatically scale. While we are scaling, the database sessions stay active, so the application barely sees a bump. As you'll notice, on the way up we scale very quickly; on the way down we are a little slower, to make sure we are not thrashing, because sometimes there is a temporary change in workload and you don't want to react too fast to that.

Here are some of the new things we have done for Aurora Serverless. It is now compliant with FedRAMP, PCI, SOC, ISO, and HITRUST, and there is a whole set of new regions we have launched, so it's currently available in almost all commercial regions. We just announced at re:Invent that we are supporting a preview of Aurora PostgreSQL Serverless, so if you are an Aurora PostgreSQL user, go and try that out. We are also supporting what we call the Data API. This was a big ask from a lot of customers: with Serverless you can of course scale up and down very quickly, but people also wanted an HTTP endpoint, so that if you have a Lambda app, a mobile app, or an AppSync app, you don't have to go through a JDBC connection; you can use an HTTP endpoint to connect. So we just announced the RDS Data API, which provides an HTTP-based interface to Aurora Serverless. It is really useful for Lambda, AppSync has already integrated with it, and if you have a lot of mobile applications I think you will find it very useful.

If you liked this session, here are some others you might enjoy: DAT305 is a deep dive on Aurora PostgreSQL, DAT304 is a deep dive on Aurora MySQL, and there is also a session on how to migrate from other databases into Aurora. That's all I had. As you can see, Aurora is ready for you; are you ready for Aurora? [Applause]
Info
Channel: Amazon Web Services
Views: 5,315
Rating: 4.9130435 out of 5
Keywords: re:Invent 2018, Amazon, AWS re:Invent, Databases, DAT204-R1, Amazon Aurora
Id: 2WG01wJIGSQ
Length: 45min 1sec (2701 seconds)
Published: Wed Nov 28 2018