Building Highly Scalable Retail Order Management Systems with Serverless

Captions
Good afternoon everyone. My name is Bastien, I'm a solutions architect for AWS based in London, working with travel and retail customers like Expedia or River Island, and I'm pleased to be joined by Charlie Wilkinson, head of architecture at River Island. Today's session is building highly scalable retail order management systems with serverless. That's a lot of words. You've probably heard a lot about serverless already, so I want to explain why it's interesting and important for customers to run it and how they can build their retail order management systems with it. I'll start by going through a few AWS services to set the scene, the services that River Island used, and typical use cases that we see at AWS. Then Charlie will dive deeper into the challenges of their previous architecture and why they moved to a serverless, event-based architecture, and we'll try to find at least five minutes for Q&A, or of course you can come and talk to us after the session.

You've probably heard the term serverless a lot this morning, because we see tremendous interest in it in the industry, but what does it mean for us at AWS? It comes down to four characteristics. First, no servers. You don't have to deal with the operating system any more: you don't have to SSH into your box, you don't have to patch, you don't have to upgrade. You just take care of your business logic. The second characteristic is that it scales with usage. Whether you have one request per minute or tens of thousands of requests per second, AWS makes sure the service adapts to handle the workload; we do the auto scaling for you, you just use the service. The third characteristic goes along with the others: never pay for idle. You've probably been used to doing capacity planning for enterprise applications, sizing your servers to handle the peak workload, your night batch, your Prime Day, your Black Friday, and so you end up using maybe 20% of your servers most of the time. The rest of that capacity is just sitting there idle, not bringing any value to your business. That's completely different with serverless, because you pay only for what you use: if you have a large night batch or a big promotion day, we just handle it. The fourth characteristic is built-in availability and fault tolerance. We have this concept at AWS called Regions, and one Region is composed of multiple Availability Zones. When you build a service on AWS yourself, you have to build it multi-AZ and make sure your service is available across Availability Zones to get high availability and fault tolerance. With serverless it's fully managed; we handle that for you.

Let's dive a bit deeper into a few of the services that River Island used to build their order management system. The first one is a compute service called AWS Lambda. You can run code in Node.js, Python, Java, C# and now Go. The important point here is that it's event-based compute, which means the compute is triggered only when you have an event. An event can be a technical event, something is down, something is up, something happened, but it can also be a business event. You can think of having a stream of events; in the River Island case it would be orders coming through, and changes to an order can trigger this code. The code can do anything: you can connect to other AWS services, to an external API, or even to your on-premise application if you have the right network connection. But again, the most important part is that it's an event-based architecture.
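As a rough illustration of the event-based model Bastien describes, here is a minimal sketch in Python of a Lambda handler reacting to a hypothetical order event. The payload shape, field names and downstream behaviour are assumptions for the example, not taken from the talk.

```python
import json

def handler(event, context):
    """Minimal sketch of a Lambda handler reacting to an order event.

    The event shape and the downstream call are illustrative only.
    """
    order = event.get("detail", event)          # hypothetical payload shape
    order_id = order.get("orderId")
    total = sum(line.get("price", 0) * line.get("qty", 0)
                for line in order.get("lines", []))

    # Business logic would go here: call another AWS service, an external
    # API, or an on-premise system reachable over the network.
    print(json.dumps({"orderId": order_id, "orderTotal": total}))

    return {"orderId": order_id, "status": "processed"}
```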
So that's good, but customers asked us how to build more complex architectures, more complex applications, using Lambda, and that's how we came up with Step Functions. It's a service that lets you orchestrate your application, with wait states and with conditions, business or technical conditions, and you can see on the left how it looks in the console. One important point here is that it handles errors gracefully: technical or business errors can be managed by Step Functions, pushing the errors to a dead letter queue or somewhere else if it's a technical error, and you can retry. So you can build more complex business workflows using Step Functions.

The last service I wanted to touch on, and one that was heavily used by River Island, is DynamoDB. It's a NoSQL database in the cloud, fully managed. What do we mean by a fully managed database in the cloud? It means that when you want to create a new table, you just select the region, the table name and the throughput that you want on the table, the number of requests per second, and DynamoDB can auto-scale those requests per second. And that's it: you have a database that is fully managed, flexible, secure and auto-scales based on requests.
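A minimal sketch of that table-creation flow with boto3, assuming a hypothetical `orders` table keyed on `order_id`; the names and throughput numbers are illustrative, and registering the table as a scalable target is just one way to enable scaling, not necessarily how River Island configured it.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-1")  # pick the region

# Create a table: region, table name and provisioned throughput are all you specify.
dynamodb.create_table(
    TableName="orders",                              # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Optionally register the table as a scalable target with Application Auto
# Scaling (a target-tracking scaling policy would then be attached to it).
autoscaling = boto3.client("application-autoscaling", region_name="eu-west-1")
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/orders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)
```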
To finish, I wanted to touch on a few use cases that we see at AWS from customers using serverless. The first one is quite straightforward: you can host a website, an e-commerce website, from a static website on S3 to more complex applications. The second use case is data processing. This one is very interesting because you can think of your data pipeline very differently: you used to have a big night batch running over the whole data set in one ETL and then dumping it somewhere else, and now you can think of it as events. Every piece of information coming from streams, a new order, new clickstreams, can be processed pretty much in real time rather than in one heavy batch. The third use case is AI, ML and Alexa: for example, when you ask a question to an Alexa device there is usually a Lambda function triggered, and Lambda powers a lot of chatbots with Amazon Lex. And the last, very important use case is back-end systems, running all the back ends of your services in a very decoupled, very event-based way. That's exactly what River Island has done, building their order management system on serverless, fully managed, and I'll let Charlie explain a bit more how they used our services to build it. Thank you.

Cool, hi guys, I'm Charlie, I'm the head of architecture at River Island. The first thing I'd like to say is that River Island is perhaps not an organisation you might immediately associate with ultra-cool new tech, so I'm hoping to dispel that myth a little bit for you. Really what I want to do is talk a little bit about how, as a business, we've recognised that we need to undergo a massive digital transformation. We've also realised that adopting and embracing digital isn't just a case of having the shiniest widgets on your website; it's a real grassroots exercise. We're looking very closely at the way our business fundamentally operates, the really core processes that we have as a business, and how we bring those into the new digital age. Today we're talking about the first step on that journey for River Island.

So first, a little bit of a history lesson, a case of explaining where we've come from. Like many organisations and many retailers out there, we've come from a heritage of a big old Oracle estate, a fairly common pattern, and that has many drawbacks, or features that are perhaps less than desirable, that many of you will be familiar with. We're going to focus on the way we process web orders, a fairly familiar flow I'm sure to most of you. Our website produces web orders, and historically we were consuming those in batches: we'd get a group of a hundred or a thousand or however many orders had been placed in the last X minutes since we last ran the scheduled job to fetch them, we'd slurp those up and stuff them into our core merchandising system, into Oracle, and that had really become the hub, designed as a centralised hub, for all the key business operations occurring throughout the business. A fairly standard pattern historically. From there those new orders, again in a batch process, would get stuffed into our warehouse management system and equally into our OMS, the automation system where we do the packing and dispatching, again pushed through in batches on a scheduled job. Once we've executed all the picking in the warehouse, once we've run a pick wave, we get those pick results, again in a big batch, and route them to the order management system so that the orders can then be dispatched. You'll note that it all flows through that core merchandising system again, following the hub-and-spoke model we had adopted in the past. Our order management system used to then execute the payment capture, which we needed to do because we only capture payment once we've actually shipped the goods, again a fairly standard pattern. Then we've got a load of dispatch results, again a big batch of them; every X minutes we run a cycle job and consume those up through the core merchandising system, that hub-and-spoke model again, and ultimately push those dispatch events back up to the website so that we can notify consumers and say, hey, your order is on its way. Fairly standard. Worth noting is that all the stuff highlighted in red is actually happening through DB links: it's all PL/SQL hidden away inside the databases, it's not actually files flowing around, which makes it that much harder to get at that information. And then over the top of this we have this overarching orchestrator system that executes all these batch jobs, and it's a really, what's the word I'm looking for, beautifully oiled Swiss watch of components, I can see a few of my colleagues laughing, to organise all of these complicated batch routines that are running. It's executing shell scripts, it's doing all sorts of gubbins. So this is probably familiar to many of you.

What are the disadvantages, though? The most obvious one is that we've got a whole load of really exciting business events, pick events, dispatch events, all sorts of stuff going on in here, that we can't get at. It's all tucked away in these Oracle databases, in these database links and PL/SQL routines; it's just not available. Another issue is adding new features: doing something new with those business events is really difficult and highly risky.
You touch one system here and you have to go through a full regression test of the whole estate, which is very expensive and very time-consuming, and as such big projects are generally avoided or too expensive to consider. We've also got a lot of single points of failure here: in fact if almost any one of these three Oracle databases is down, you're screwed, you can't do anything. Equally if the orchestrator dies, same story: you've got a whole load of people stood in the warehouse twiddling their thumbs with nothing to do, and obviously that always happens on Black Friday; if it's going to happen, you know, sod's law. So it has its flaws. What I would say is that we do respect it: it runs a billion-pound business, it's not to be sniffed at, but it certainly has room for improvement.

We're also calling out batch processing. Everything in our heritage estate is primarily driven by batch processes, and I hate batch processes, it's a bugbear of mine, for several reasons. Firstly, you've got peaky CPU usage: your databases are sat there doing nothing most of the time until your schedules run, and then everyone's familiar with the midnight schedule that makes the servers go mad. So that's a pain: you're paying for servers that you're not using most of the time, as Bastien alluded to earlier. Secondly, one dodgy input breaks the batch; this is a common problem. It's really difficult to defensively code around batch processes, and generally speaking you get that unexpected character, or that weird promotion the marketing team decides to put on, and suddenly your batch is broken. Not only that, usually that dead batch will block all the other ones behind it as well, because your schedule is interrupted, it's waving its arms saying I don't know what to do about this, and the whole thing is stuck. And again, one broken step in that chain we had up earlier and the whole house of cards falls apart. So not ideal. And then on top of that your data is not really that timely: yes, you can run your cycle jobs every fifteen minutes or so, but when you've got fifteen jobs in a row the compound time for your data to actually be updated and reflected is potentially quite long as well. So it's not exactly fast either.

So let's talk a little bit about the architectural principles we came to when we were looking to address this and build a better solution. Ultimately we came up with two guiding objectives. Firstly, we really want to surface our business events. All those business events are tucked away in our estate; those things need to be available. I want to be able to do something new and interesting with each one of those business events, and with ease; I don't want to have to go and unpick a massive PL/SQL routine to do something new with an event that's occurring in our business. The other one was that I really want easy and timely access to master data. If somebody internally has a new idea, or we meet some new third party, some new startup with an ultra-cool piece of tech we want to use, and they need a product catalogue feed or they need order history, what should the barrier to entry be? Well, here's an API key, go and fill your boots. That's really the objective.
And the reason for that is that these two objectives, combined, really drive business agility, and that's the ultimate goal. We want to be able to add new features and new experiences for our consumers really easily, and at the moment that's just not the case. Pulling that down into some actual architectural principles, what does that mean? Breaking down the existing monolith is important; decoupling our core systems, kind of the same thing really, breaking apart that heritage estate; reducing our dependency on batch jobs, for all the reasons I articulated; exposing our business events; and making our master data much more accessible. On top of that we want to drive more and better operational simplicity as well. As many of you will know, running a big old Oracle estate is not without its challenges, and you've got to be an expert in all of it to be useful on call, and as such there are very few people in an organisation who have that expertise, and it takes a long time to develop. So we want to reduce that and make our systems much more operationally simple.

So let's talk about the solution, or what we think is our solution. We call this our order pipeline decoupling, and this is where we've adopted AWS and serverless technologies to come up with a new way of doing this. The starting point is very familiar: we've still got those three Oracle systems there. They're not going away, they run important operational services for our business, and frankly getting rid of them isn't an objective in its own right. Perhaps one day we will do something different with them, but for now they're sticking around. So they're still there, but what's different? Well, the first thing is that the website's responsibility now starts and ends with publishing web orders. As soon as somebody has completed a checkout, it publishes the order to this new order event stream, in red; it's a Kinesis stream, and that is all the website has to do. It doesn't care about anything else; its only job is to publish to that stream. From there we have to do an initial bit of validation, just some basic checking that the order isn't dodgy, that there's no weirdness going on with it, some basic routines. For that we've got this order validation microservice, which has actually been implemented as Lambdas orchestrated by a step function, and I'm going to talk about that in more detail in a second. It uses this order reference service, a synchronous API we've created, which just allows it to fetch a few extra bits of information, like product master data and other bits and bobs that it needs to do that validation. Ultimately the order validation service's objective is to check that the order is okay and then spit it out onto another event stream, this valid orders stream. So now I've got two streams: an event stream of all of the orders the website is producing, and one of all of the validated orders as well.
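As a rough sketch of what "publish to that stream" might look like from the website side, here is a hedged example using boto3; the stream name and payload fields are assumptions for illustration, not River Island's actual names.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-1")

def publish_web_order(order: dict) -> None:
    """Publish a completed checkout onto the (hypothetical) new-order stream."""
    kinesis.put_record(
        StreamName="new-order-events",          # assumed stream name
        Data=json.dumps(order).encode("utf-8"),
        PartitionKey=order["orderId"],          # keeps each order's events ordered
    )

# Example usage after checkout completes:
publish_web_order({"orderId": "WEB-000123", "lines": [{"sku": "12345", "qty": 1}]})
```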
So the next step: I want to take those valid orders, consume them, and pump them into our existing Oracle databases, and indeed into any other services that are interested in the fact that a new valid order has been placed. The first thing we've actually done, and this is a new feature we've implemented, is this order shipment router. Again it's a step function implementation, and its job is to decide essentially where we're going to fulfil this order from. I've shown two event streams on this diagram, but essentially you have one event stream per fulfilment avenue. The vast majority of ours at the moment, just through the way our business is set up, go to our in-house consignments; we fulfil them in our own DC. So it's just splitting out those orders, potentially even splitting an order into individual consignments, or routing the whole order. We can implement that logic and we can change that logic, and crucially, if we do change it, it's only that step function we have to change; nothing else gets touched. So we've got those consignments, those events, again in a Kinesis stream, and we now have these two components, the in-house order processor and the in-house warehouse processor. "Processor" is perhaps a bit misleading; I think of them more as adapters, essentially consuming those events and pumping them into our three operational systems. Ideally what we will probably do is split the in-house order processor into two, because it would actually be more resilient to have one per downstream system, just for a bit more operational resilience.

From there, these various systems are publishing events out of the back of them. Gone are all those batches: when the OMS has actually completed a dispatch, that's per order, that's an event, we've dispatched the order, so we publish that event onto an event stream. We also handle returns, and indeed short-ships or failures to dispatch, in the same way: they're just business events that have occurred, so we publish them onto Kinesis streams. Same thing for picking: the pick results, the picked consignments, the warehouse management system publishes those as events too. From there we then want to go and capture the payment. Again, it's the same business process but executed differently. We have this order payment step function, and what that's doing is just listening to those event streams, doing a little bit of enrichment again, and deduplication, just making sure that we haven't already, what's the word, captured payment for this order. It then makes use of the RI payment service, which is a synchronous service running in ECS as Docker containers, written in Go. That is essentially our broker in front of all of our payment providers, which are third parties, so it exposes a nice friendly API for us to use internally within the business to go and capture payments. The order payment step function just invokes that and says, hey, here's an order, I need you to go and capture the payment for it, and that goes off and does it, talks to the payment service provider, job done. Out of the back of it we get another event stream, settled orders; again, for each order we get a corresponding event on that stream. It looks kind of complicated, but really it's just a logical set of steps, a bit crushed down to get it onto a slide.
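To make the orchestration idea concrete, here is a hypothetical, much-simplified Amazon States Language definition for a payment-capture flow of this general shape, registered with boto3. The state names, Lambda ARNs, role and error handling are illustrative assumptions, not River Island's actual definition.

```python
import json
import boto3

# Hypothetical, simplified state machine: enrich -> dedupe check -> capture payment.
definition = {
    "StartAt": "EnrichOrder",
    "States": {
        "EnrichOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:111111111111:function:enrich-order",
            "Next": "AlreadyCaptured",
        },
        "AlreadyCaptured": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.alreadyCaptured", "BooleanEquals": True, "Next": "Done"}
            ],
            "Default": "CapturePayment",
        },
        "CapturePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:111111111111:function:capture-payment",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3, "IntervalSeconds": 5}],
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "SendToDeadLetter"}],
            "Next": "Done",
        },
        "SendToDeadLetter": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:111111111111:function:dead-letter-publisher",
            "End": True,
        },
        "Done": {"Type": "Pass", "End": True},
    },
}

sfn = boto3.client("stepfunctions", region_name="eu-west-1")
sfn.create_state_machine(
    name="order-payment",                                           # assumed name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111111111111:role/step-functions-role",   # placeholder role
)
```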
What are the advantages, though? The most obvious one, hopefully, is that all of these business events are now available. If I want to do something new and interesting with new orders, or more likely with the newly validated orders, I can go and say, right, let's spin up a delivery team, we're going to do something new and interesting with that, we want to send it off to, I don't know, an S3 bucket, or we want to go and do some analysis. I can subscribe to that event stream and do something new and useful with those order events, and I don't have to regression test anything else on this estate. That is a massive step forward for us: all of that pain has gone away, and I can essentially make those events available across River Island, potentially even externally as well. So that's a really big advantage. The other key differentiator is that if any one of these core systems dies for whatever reason, say our retail merchandising system dies, it doesn't matter: the OMS and the warehouse management system can continue to consume events off the event streams, they can carry on happy as Larry, and we don't end up with a DC full of people twiddling their thumbs while some poor bugger on call gets calls from fifteen different people in the business desperate to get the system running again. So we're much more resilient. We haven't added any more servers; in fact we've taken one away, there's no orchestration server in this, and we haven't had to provision a database server or any of that. Frankly, I'm not interested in running servers, I'm interested in selling clothes; it makes sense, right? And as a bonus we've also added business functionality: we can now split orders if we want to and send them to different fulfilment vendors in the future, and we've made it much easier to change the way we handle payments, because we've got a dedicated payment service that only handles payments and it's not bundled up inside the order management system as it was before, so I don't need to go and unpick all of that to make changes, which is great.

So let's take a deeper dive into one of these microservices. This is the first service in that chain, the order validation microservice, and it's a step function implementation. As you might expect from the previous slide, it starts by consuming events off this new order event stream; every single order that gets placed atomically appears as a single event, so we can consume them in. Now, the first step is this step function invoker. This is actually something we don't want to have, it's a bit of a workaround: at the moment it's not possible to invoke a step function directly from a Kinesis stream. That's a feature request that I'm sure Bastien is going to help me get sorted, putting him on the spot. So we had to work around that a little bit. We subscribe this step function invoker to the event stream, and it's responsible for triggering the step function. But it does something more than that as well, because we don't want to mark the order, the event on the Kinesis stream, as consumed and dealt with until the step function has completed. So what this step function invoker has to do is actually poll the state of the step function as a whole, and it will only mark the item on the event stream as consumed and completed once the step function has successfully completed as well. There is a dead letter queue too, so in the rare occurrence that some weirdness happens, some parsing error or something, we can dump the event into the dead letter queue, put alerts and monitoring on that, and respond to it if we need to. So we invoke the step function, and we do a bit of enrichment, as I mentioned earlier, calling out to this order reference service.
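A minimal sketch of what such an invoker could look like as a Kinesis-triggered Lambda, assuming a hypothetical state machine ARN. This reflects the 2018 workaround described in the talk, and the blocking polling loop is a naive illustration rather than production-ready code.

```python
import base64
import json
import time
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN; the real state machine name is not given in the talk.
STATE_MACHINE_ARN = "arn:aws:states:eu-west-1:111111111111:stateMachine:order-validation"

def handler(event, context):
    """Kinesis-triggered Lambda that starts a step function per record and
    waits for it to finish before letting the batch be marked as consumed."""
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"]).decode("utf-8")

        execution = sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=payload,
        )

        # Naive polling loop: only return (and thus checkpoint the shard)
        # once the execution has succeeded. Any failure raises, so the
        # batch is retried or eventually routed to a dead letter queue.
        while True:
            status = sfn.describe_execution(
                executionArn=execution["executionArn"]
            )["status"]
            if status == "SUCCEEDED":
                break
            if status in ("FAILED", "TIMED_OUT", "ABORTED"):
                raise RuntimeError(f"Validation step function ended with {status}")
            time.sleep(2)
```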
We then move on to the next step, which is to actually execute our validation logic. It's not really important what that logic is, it's just some basic validation routines, the usual sort of gubbins, but ultimately we end up with one of two outcomes: either yes, the order is fine, or no, there was some validation error or what have you. In either case the action is very similar: we push that event into a DynamoDB table. Now why do we do that? We do it primarily to manage idempotency, basically deduplication. There are several edge cases in here where we might end up processing the same event more than once by accident; it's relatively rare, but it can happen, and by putting the event into a DynamoDB table we can check that we haven't already processed that event. There's a fringe benefit as well, and that is that with each DynamoDB table you get what is essentially a Kinesis stream out of the bottom of it, with all of the change events in the DynamoDB table. So we don't have to manually go and publish to an event stream ourselves with another Lambda function; we could do that, but there's just no point, because we can subscribe directly to the stream at the bottom of the DynamoDB table, and it's the same API.

There are a couple of disadvantages to this. The need to have a step function invoker is one example, and hopefully it's obvious why that's a bit of a pain. The other thing we noticed was that in the console you can actually see the payloads being passed between each of the steps in the function, and that's really handy: it's base64 encoded, you can go in and have a look at it, and it's great for debugging. But if you're processing orders, as we are, or any other entities that contain PII data, customer data, we didn't think it was a good idea having that available to all the developers in the console. So to avoid that, what we've done is persist the event itself into an S3 bucket that's encrypted, and we simply pass the ARN between step function steps instead, and that way the customer data is never visible in the console to developers. One other thing we spotted was that, for some reason, there's a 32K limit on the size of the messages passed between the various steps in the function. For the life of me I have no idea why it's only 32K; perhaps Bastien can tell us afterwards. That was a bit too small for us, some of our orders are bigger than that, so originally we started with compression to try and get around it, but once we switched to putting the payload into S3 and passing the ARN, that problem was solved as well. So that's just one to watch out for. I'm conscious of time, so I'll skip on.
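Going back to the idempotency step above, a common way to implement that check is a conditional write that only succeeds the first time an event is seen. The sketch below is a hedged illustration of that pattern, with an assumed table name and key, not River Island's actual schema.

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("processed-order-events")  # assumed table

def record_if_new(event_id: str, payload: dict) -> bool:
    """Return True if this is the first time we've seen the event,
    False if it was already processed (duplicate delivery)."""
    try:
        table.put_item(
            Item={**payload, "event_id": event_id},
            ConditionExpression="attribute_not_exists(event_id)",
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False   # duplicate: already processed, safe to skip
        raise
```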
So let's talk a little bit about a synchronous service. This one isn't really serverless, but it's a bit of a weird one. We had a requirement to invoke the next step after our order processing: once our OMS has actually dispatched the item, we need to invoke the next step in that big diagram we saw earlier. These systems are Oracle databases; they can't talk to Kinesis directly, there's no library that does that, but what they can do is make HTTP or HTTPS calls. So what we did was say, okay, we'll put API Gateway as a proxy in front of Kinesis. Now our service that isn't able to talk to Kinesis natively, because it's a bit old hat, can talk to Kinesis, because we can just push events straight in via API Gateway. It sounds a bit janky, but actually it works really well, so that's what we've done. And then in the future, if we want to do something different with those events, all of those dispatch events are there as Kinesis streams that we can either publish to from other sources or consume from other subscribers. In this one we also played around with executing the logic in a Go routine wrapped up in Docker and deployed in ECS, just to experiment with different ways of processing. We could equally have done it with step functions, and that's probably what we would do if we were to do it again, because there wasn't any major advantage to doing it this way; we were just playing around with different development routes. A question I often get asked on that subject, since we use ECS, is why not use Kubernetes. Maybe we will. The reason we haven't so far is that, while there are advantages to running a Kubernetes stack, you also have to have a load of people who know how to run and operate a Kubernetes stack properly, and those skills are not necessarily flying around all over the place; it's a new capability you have to build as a business. ECS works great, it's relatively straightforward, and all of our developers are quite comfortable using it. So will we use Kubernetes in future for managing Docker containers? Probably. But equally, ECS works great as a starting point.

The other little nugget in here is that we wanted to make access to our master data much, much easier, if you remember. So how does this let us do that? What we've done is essentially say, okay, we've got all these business events, let's take a data lake strategy and archive all of those business events as they occur straight into S3. We dump them into S3 irrespective of whether or not we think they're particularly useful, because S3 is super cheap, so sod it, we'll throw it all in. The way we do that is that for any given event stream, in this case order events, we have an event subscriber Lambda function and we drop the event into S3. We've also played around with enriching that data with data from our heritage estate, and we found that a really good way of doing that is to just use plain old JDBC with AWS Batch; again, a managed way of letting us execute routines to enrich our data, and it works really well. The other consideration here is that, as I mentioned earlier, these are orders in our use case today, so there is PII data in there, and frankly most of the BI and analytical use cases we might want to use this data for don't really need it. So in the spirit of GDPR, which is a pretty hot topic at the moment, what we've also done is take those events from that raw S3 bucket, which is all encrypted and very few people have the keys to, and as soon as there's a new object in there we clone it across into a separate bucket and run a simple routine in Lambda that cleanses all of the customer data out of it. We do a combination of removing it and obfuscating it, and ultimately we get the same event duplicated across into a nice clean bucket with no PII data; it's the same event, just with all the masking done.
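As an illustration of that scrubbing step, here is a hedged sketch of an S3-triggered Lambda that copies each new raw event into a clean bucket with customer fields masked; the bucket name and the list of PII fields are assumptions for the example.

```python
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

CLEAN_BUCKET = "order-events-clean"                                  # assumed bucket name
PII_FIELDS = {"customerName", "email", "phone", "deliveryAddress"}   # assumed field names

def handler(event, context):
    """Triggered by s3:ObjectCreated on the raw (encrypted) bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        raw = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())

        # Mask anything that looks like customer data (dropping the field
        # entirely would work equally well).
        clean = {k: ("***" if k in PII_FIELDS else v) for k, v in raw.items()}

        s3.put_object(
            Bucket=CLEAN_BUCKET,
            Key=key,                                     # same key, different bucket
            Body=json.dumps(clean).encode("utf-8"),
        )
```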
So then that way we can give the keys to that bucket to loads of different people and not have to worry about the proliferation of PII data, which is super cool. One of the things we've done with that is, yes, we can give the keys to a third party, we can even give them access to the S3 bucket directly, which is really great, but we've also played around with putting Athena on top of these S3 buckets. That lets us put a SQL interface on top of these raw events that exist in our S3 buckets, and there are thousands and thousands of them, and then on top of that we can put a dashboarding tool. We've actually played around with QuickSight, which is Amazon's offering, with good success, but there's no reason why you couldn't use others as well, and we're playing with other tools. It gives us a really nice pattern for accessing the data associated with all of these orders we're placing and processing, and it really is the beginning of our data lake.

So, to wrap up: monitoring. One of the questions I get asked is how you instrument a serverless, or predominantly serverless, architecture, and the answer is that it's actually pretty easy, although the team might disagree with me. This is one of the advantages generally of using Amazon services: CloudWatch metrics give you all the basic stuff you'd expect to be able to instrument and monitor about your infrastructure, and indeed about the execution of your Lambda functions and your step functions; all of the metrics associated with the execution path are there, readily available and really easy to consume. We use a tool called Grafana, a free tool, those are the black graphs there, to aggregate that data and present it, and that gives us a nice operational dashboard to tell us how many events are being processed, basically what the health of the infrastructure is. One thing I would say is that the hardest part is actually figuring out what business metrics you need to expose explicitly. A lot of the time developers will forget that it doesn't really matter whether your Lambda functions are being invoked; what really matters is how much money you're making, what the value of these orders flowing through is, for example. So we can expose those as custom metrics in CloudWatch and bring them through into our Grafana dashboards as well.
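A quick sketch of publishing a business metric such as order value as a CloudWatch custom metric; the namespace and metric name are made up for the example.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def record_order_value(order_value_gbp: float) -> None:
    """Publish the value of a processed order as a custom business metric."""
    cloudwatch.put_metric_data(
        Namespace="RetailOrders",                 # assumed namespace
        MetricData=[
            {
                "MetricName": "OrderValue",       # assumed metric name
                "Value": order_value_gbp,
                "Unit": "None",
            }
        ],
    )

record_order_value(84.50)
```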
On top of that, I mentioned QuickSight earlier, so this is a little screenshot of a dashboard we've whipped up using QuickSight on top of Athena, on top of our business events. This is the real dashboard, and you can see that we can break things down for the business and give them a window on what's happening with all these orders. Gone are the days where they have to wait 24 hours for the cycle jobs and the big batch jobs to run overnight to then have their nice report on the desk in the morning; they can actually go in here and it's pretty much real time. Within reason, as soon as an order is processed you've got that information available in this dashboard, so that's really cool.

A couple of little notes about costs, another question we get a lot. The cost of operating this serverless infrastructure is very, very good. To put it in context, we're now running all of our web orders through this infrastructure and I think we've barely broken the free usage tier on Lambda; I think the monthly forecast for this month is something like 25 bucks for Lambda, which is nice. The step functions are themselves a little bit more expensive, but overall we're talking a few hundred bucks, five hundred bucks, a month to run this, and one of the biggest line items on our bill is the AWS support. So it's pretty low cost to operate. And as I said, this has gone live, we are now processing all of our web orders through it, and it's one of the best go-lives I've ever been involved with, because I didn't get any phone calls and basically the business didn't notice that it had gone live. As you can tell, this was a really fundamental shift in the way we process web orders in our business, a massive change, and basically nobody noticed because it went so smoothly, so full credit to the team for that. I think that's pretty much it, so hopefully that's been useful, guys. Obviously we'll stick around for some questions.

Got a question right there. So, why DynamoDB? It's a good question. Well, we wanted a NoSQL store to start with as a first step, but also the dynamic scaling is a really good advantage; there are a few different advantages. We could have used a more traditional RDS instance, but there was also the fun of playing with a new tool. You could make this work with RDS if you wanted to, but we like Dynamo, it works really well.

Sure, yep. So the question was how we deal with de-duplication with Kinesis. That's one of the reasons why we're pushing those events into DynamoDB and doing our de-duping there programmatically; I don't know if there are any other options for that. The best way to handle it is at the application level: Kinesis can deliver duplicates, so you have to deal with possible duplicates on the consuming side from the beginning.

Okay, so how long did it take us to do this, and how many scrum teams? A lot of that has been built by one scrum team, two scrum teams really, and it's taken something like six months. However, prior to that we had to build our capability and learn how to use these tools, and that has been a multi-year process for River Island, learning how to use cloud services and Amazon services. But the actual execution time, once we decided to go and build this, was of the order of six months or so. The other part of the question was CI/CD challenges. I'm probably not the best person to ask about that, but some of the engineers who actually built this are in the room, so if you want to come down to the front later on I'm hoping a few of them will volunteer to chat with you about that.

One last one. Say that again, sorry? What was the biggest PCI challenge? For this implementation PCI is out of scope, really, because we're not handling consumer credit card data. We do handle PII data, which falls under data protection and GDPR, and for that we did take some steps to make sure that access to that data was controlled, which is one of the reasons why we didn't want that data available in the console for any of the developers to look at, because there's no need for them to do that. But PCI wasn't really a consideration because we're not handling credit card data, fortunately. Cool, guys, I'm going to hang around, but I think we'll wrap it up there. Thanks, guys.
Info
Channel: Amazon Web Services
Views: 24,239
Keywords: AWS, Amazon Web Services, Cloud, cloud computing, AWS Cloud, AWS London Summit 2018, breakout session, order management system, OMS, retail, AWS Serverless, AWS Lambda, AWS Step Functions, Amazon DynamoDB
Id: 1owKl4sGYKg
Length: 45min 59sec (2759 seconds)
Published: Wed May 23 2018