AWS re:Invent 2018: Mainframe Modernization with AWS: Patterns and Best Practices (GPSTEC305)

Captions
All right, good afternoon everybody, thank you very much for being here. I'm very happy that we can talk about mainframes, and more specifically about how we modernize customers with AWS. Let me start with this: how many lines of COBOL code do you think our mainframe customers have? That's an average I computed from the customers we are talking to. While you're thinking it through, let me introduce myself. I'm Phil de Valence, I'm a solutions architect with AWS, and I work with mainframe customers and mainframe partners, helping them be successful with AWS. I do that globally, with partners and customers.

For those of you who don't know mainframes: mainframes are typically central computers in large data centers that handle large volumes of transactions and large volumes of data. They are often the system of record, meaning they are the back-end systems for other services. For example, if you get cash from an ATM, it's very likely that the final transaction is actually being done on a mainframe. We still see many mainframes with our customers: many multinational banks have mainframes, insurance companies still have mainframes, we see mainframes in many government agencies, in the automotive industry, and in many others.

So why is that important to us? Well, mainframes are more complex than the typical platforms being migrated to AWS, and cloud is the new normal. Customers don't ask whether they should move to the cloud but how fast they can move, and they don't want the mainframe to hold them back. They want to be able to include mainframes as part of their larger modernization journey.

That being said, getting back to the question: how many million lines of code does a mainframe customer have on average? The answer is 21 million lines of code. That's a big number; how do we handle that when we modernize to AWS? Similarly, our customers typically have, on average, 10 petabytes of data, and 36,000 MIPS that they need to move to AWS. So how do they do that? What we did is analyze many projects that our customers performed while modernizing to AWS successfully. We looked at the similarities between all those projects, we extracted the best practices, we extracted the common characteristics, and we created patterns, so that all of this can be given back to new mainframe-to-AWS initiatives. This presentation is really about giving that experience back to you, so that if you're starting a journey of migrating or modernizing your mainframe to AWS, you can be successful as well.

Let me talk about some of the drivers. Why do customers want to modernize with AWS? The first and biggest driver we see all the time is cost reduction. Mainframes are notoriously expensive; customers want to reduce consumption that is too expensive, they want to go away from third-party licensing costs, and they want to adopt a pay-as-you-go model where they only pay for what they use. Typically, for mainframe modernization to AWS, we see customers achieving savings of 60 to 90 percent on the annual cost of the infrastructure, so that's a big benefit you get from modernizing to AWS. The second big driver is agility. Customers want to go away from rigid architectures, from archaic interfaces, from the monolith. They really want to become more agile: they want to create CI/CD pipelines, they want to go into microservices, they want to benefit from the cloud's speed.
They also want to reduce technical debt, to move away from the proprietary platform at both the hardware level and the software level; they don't want to be locked into a vendor. The next big driver is around supporting the digital strategy: they want to embrace the cloud benefits, speed up innovation, and adopt best-in-class software. Something that's also interesting with mainframe customers is that, because mainframes have been out there for decades, they have accumulated loads of data, and data is the new gold. Mainframe customers want to be able to exploit that data: they first want to unlock it, and then they want to unleash the benefits from it, and I'll talk about some of the patterns we have in that space. The last driver that is strong with our mainframe customers is around the workforce. There is a mainframe retirement skill gap that you may be aware of: many of the mainframe experts are baby boomers and are now retiring, so it's getting harder and harder for our customers to maintain those systems. Furthermore, they also want to be able to attract new talent with cutting-edge skills. Getting onto AWS allows customers to benefit from all those drivers.

Of course, mainframes are very different from AWS. At the architecture level it's totally different, and the software stack is different, so there are some challenges when modernizing to AWS. The good news is that we have a solution for each of the challenges; let me go through them quickly. Technical complexity: as I said, 21 million lines of code is not easy to handle, and there are a lot of prerequisites and dependencies in that code, plus petabytes of data. The way we deal with that is we break down the complexity by workload, and then we use tools. I'm going to talk about tools a lot during this presentation. Tools come in many forms: it can be a solution that does a deep-dive analysis of the portfolio, a solution that emulates some mainframe functionality, a solution that facilitates refactoring of the customer code and infrastructure, or a tool that facilitates modernization of the data. So for the whole technical complexity aspect, we use tools in many ways. The second challenge is about business impact. You may know that on the mainframe you typically have core business data and core business processes being executed, so we need to minimize the risk, and we do that in several ways. The first best practice in that space is doing a complex proof of concept: the sooner we can do a complex POC, the sooner we can limit the risk. The second technique we use to minimize the risk is performance benchmarks and performance tests, validating that the solution is actually viable. And then we do a lot of regression testing, and we do incremental changes. Another challenge is around non-functional requirements: on the mainframe you're used to high security, high availability, high performance. The good news is that on AWS we have services that can provide that quality of service as well. The next challenge is legacy languages. The mainframe could have Assembler code, it could have PL/I, and for sure some COBOL code as well. When modernizing to AWS, we have the solutions and the techniques so that that code can be executed on AWS too. The same applies to legacy data stores.
On the mainframe you could have indexed files, you could have generational files, you could have hierarchical databases; here too we have techniques and tools that can facilitate that modernization to AWS. Some customers tell us, "we don't even know what is running on the mainframe, it's all undocumented, we don't know what the code is doing." Here again we use tools to do a deep-dive analysis and transform that code and that workload to AWS. And finally, customers often don't even have the expertise to deep dive into what the mainframe is doing. The good news here is that we have many partners that can bring the expertise to facilitate those modernizations to AWS. So we've addressed the many benefits of modernizing to AWS; there are some challenges, and we have solutions for them.

What I really want to cover now is all the patterns that we have as a toolbox for modernizing to AWS. We have two families of patterns: patterns that are geared towards really shutting down the mainframe, and patterns that are geared towards augmenting the mainframe, adding functionality to it. The first family of patterns is about shutting down the mainframe, short-term migrations as we call them. The first pattern in that space is emulator rehosting: we use an emulator that facilitates the modernization and the movement of the application code and the data. The second one is automated refactoring: we use techniques so that we can automatically refactor the workload into something that's almost cloud native on AWS. The third pattern in that space is replatforming, for more modern workloads that could be running on the mainframe, such as Linux workloads, Java workloads, or even certain software packages; we can replatform them onto AWS. Then we have the augmentation patterns. Here we're not trying to shut down the mainframe; we're trying to complement its functionality and augment its capabilities. The easiest pattern in that space is data analytics: we move the data over and then we exploit that data and unlock its value. Another pattern is creating new channels: customers want to create new functionality and enable those mainframe workloads with new capabilities, so we can create those new channels on AWS directly. The next pattern, and the last one I'll be discussing today, is development and test: rather than augmenting the capacity of the mainframe for dev and test, some customers use AWS for the dev and test environments.

Now I'm going to do a deeper dive into each of the patterns and explain how they work. The first one is emulator rehosting. The typical use case for that pattern is a customer that has invested a large amount of money, time, and expertise into developing a large mainframe application, and typically that customer wants to keep that application code. They're going to move it over to AWS efficiently, limiting the changes necessary to the code. For example, maybe they have a large COBOL development team; they want to keep that team and keep developing in COBOL, but rather than doing it on a mainframe they want to do it on AWS. Another use case for emulator rehosting is when the customer has a stabilized application: they don't want to touch it, they want to make minimal changes and push it to AWS just to benefit from the agility and the cost reduction.
So how does it work? For that specific pattern we use an emulator. The emulator is able to mimic some of the mainframe functionality; it's not going to create a mainframe on AWS, but it provides the functionality that's necessary for the application code to execute and process its transactions successfully on AWS. Let me go through the dynamics of that pattern. On the left-hand side you can see a typical mainframe: the mainframe code, the transaction manager, the batch subsystems, some of the typical data stores you find on a mainframe, and some of the utilities necessary to execute the application. What happens when we do a modernization with an emulator? When we migrate the code over, if the code is not supported we have to convert it first, but if the code is supported by the emulator, we use the emulator to recompile the code and execute it on AWS. Of course that code has dependencies. It could be a dependency on the transaction manager, because there are 3270 screens, or because there are temporary data queues, everything that's traditionally provided by the transaction manager; the emulator provides that for you, so you're able to run the application in the emulator environment. For any functionality or third-party dependency that's not provided by the emulator, we have to find utilities with the equivalent functionality, and in that case there may be some code adaptation required, but only for the interfaces; most of the code stays the same. From a risk perspective, and from a code-management perspective, that makes it very seamless: you move the code over, you do a recompile, and then you're able to execute the code on AWS.

By virtue of being on AWS, the good news is that you benefit from the services. If the application follows certain characteristics, you'll be able to do horizontal scaling, for example using Elastic Load Balancing and Auto Scaling groups. If you want to use a managed service for the database, you could use Amazon RDS, which facilitates the administration of the database. You could also use CloudWatch for monitoring. So by virtue of being on AWS, you really benefit from all the AWS services. Typically those projects, for medium-sized workloads, take around a year; it could be shorter for a simple application, or a bit longer if you have more, but it typically runs around a year. What's interesting with this pattern is that I'm talking about rehosting, but if I had to stick with the AWS conventions I would have to say that this type of project is more like a complex replatforming project. The reason is that, as you can see, we're not keeping the same operating system and we're not keeping the same middleware; there is quite some transformation that needs to happen, plus recompilation, which doesn't happen in a traditional rehost. In traditional AWS terms, a rehost means you just lift and shift the workload and keep the same operating system, the same stack, everything exactly the same. Here there is more work, so it's not the typical rehost we're used to with AWS where you do the migration in a few hours or a few days; it's a larger, more involved project.
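To give a rough idea of what the horizontal scaling mentioned above can look like once a rehosted workload is running on AWS, here is a minimal boto3 sketch (Python) of an Auto Scaling group and a CloudWatch alarm for a fleet of emulator instances. This is an illustration only, not the actual project setup: the launch template, group name, subnets, and thresholds are invented placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Auto Scaling group for the emulator instances; the launch template is
    # assumed to exist and to boot the emulator plus the recompiled application.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="mainframe-rehost-asg",
        LaunchTemplate={"LaunchTemplateName": "rehost-emulator-template"},
        MinSize=2,
        MaxSize=8,
        VPCZoneIdentifier="subnet-11111111,subnet-22222222",
    )

    # Target-tracking policy: keep average CPU around 60 percent across the fleet.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="mainframe-rehost-asg",
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 60.0,
        },
    )

    # CloudWatch alarm so operations staff get notified, similar to mainframe monitoring.
    cloudwatch.put_metric_alarm(
        AlarmName="rehost-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "mainframe-rehost-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=85.0,
        ComparisonOperator="GreaterThanThreshold",
    )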
I do have a customer example for this pattern. We have a multinational beverage company that did exactly that. They had 28 applications, just to give you an idea of the scale of the workload, and 266 integration points with distributed systems, so it was really a back-end server being used by plenty of applications. They migrated all of this to AWS to benefit from the agility, and they also saw great cost savings: they were able to save 72 percent in annual cost just by moving to AWS, reducing the licensing cost and the infrastructure cost.

The next pattern I want to talk about is automated refactoring. As I was saying before, if you want to keep COBOL, for example, or the mainframe language, and keep investing in that, then emulator rehosting is good for that. Now, some customers don't want to keep the mainframe language; they want to standardize the stack. Some customers want to be only Java, some customers want to be only C#, and do the same across the distributed and the traditional core business workloads. For those customers, automated refactoring can be a solution. It creates something much more like a cloud-native application; it has a lot of similarity with cloud-native apps. What happens in this pattern is that we're not only converting code, we are actually refactoring the complete stack. You really have to understand that it's automated refactoring; it's not manual work, it's all done by tools, so that you can minimize the risk while doing the transformation. And as I was saying, it's the complete stack: not only the code but all the dependencies, all the middleware. The middleware, for example at the transaction subsystem level, is going to be transformed, and all the data dependencies are going to be mapped to a new data model. For automated refactoring it's very important to look at the entire stack, so we can make sure the entire functionality is migrated over and we keep coherent functionality between the source mainframe and the target AWS environment.

Let me take you through how that works. Typically those tools first do some automated reverse engineering, trying to find out what all the dependencies are, both at the program level and the data level, and while doing that they create an application model. The user gets deep-dive analyses of the behavior of the programs and can see all the dependencies, and the reason they do that is to have the granularity to decide how the transformation is going to happen. As I was telling you, everything is automated in that process; what's not automated is the rules that define the transformation. By default, the vendors in that space provide a default target stack that's optimized for their tool set, but if you have specific requirements for a specific framework or a specific target AWS service, then they can customize the tool and customize the rules manually, so that everything can be regenerated to accomplish what you're trying to achieve on the target side.
Once they have identified which programs and data they want to transform, the tools do automated forward engineering, and that's a critical step; that's really where the differentiation happens between the various tool vendors. A lot of vendors do automated reverse engineering, so you have lots of tools to deep dive and understand what's going on, but as far as forward engineering is concerned, it's much more difficult to find good capabilities in that space, making sure that you get a coherent result and functional equivalence between the target and the source mainframe. Once you've done all the forward engineering, you can deploy onto AWS. The default deployment would be, for example, on EC2 using an AWS database, possibly Aurora, but if you want to use SQS, if you want to use ECS with containers, or if you want to optimize some of the workloads with specific services like ElastiCache, that's also a possibility; it's a matter of refining and refactoring some of the rules so that you can properly target the right services. I do have an example for this pattern as well. We have a Department of Defense customer that went through that process. They had a supply chain system that was managing all their military equipment, and they wanted to standardize their platforms; they wanted to make sure they didn't have to deal with the mainframe language anymore, they wanted everything to be Java. They did several iterations of the automated refactoring, ended up with a Java application, and were able to deploy it onto AWS. The good news is that in that process, not only were they able to create a dynamic CI/CD pipeline, but they also made strong cost savings: they don't have any licensing cost attached to the target stack, and they estimate they are saving 25 million dollars per year. That's a great achievement for that customer.

The next pattern I want to talk about is for more modern workloads. If you see some Linux, some Java workloads, or even some software packages on the mainframe, we can do replatforming. This is much more similar to what we do at a larger scope with AWS. For Linux workloads, because the underlying hardware infrastructure is different, we won't be able to do a simple lift and shift; we'll have to go through a replatforming project. That means we'll have to reinstall software, we'll have to reprovision a Linux version for instance, we install the same middleware, and once all the runtimes and the middleware are at the same levels and versions, then we're able to export and import the application, do backup and restore of the data, and run similar environments on AWS. On the left-hand side you can see a mainframe with some of these more modern workloads, and what we do is lift and shift them over: for Linux we would, for example, deploy on EC2 instances. For Java we have more choices and options: deploying on EC2 is a possibility, but if you want to deploy those Java applications and runtimes within containers, that's another option, and if you want to benefit from a managed environment you can also use Elastic Beanstalk. For software packages it's very similar: you deploy and reinstall the software, backup and restore the data and the applications, and typically you need to follow the runbooks from the vendor itself to define exactly which steps need to be followed. The good thing is that for that type of project there are many system integrators that can help and that have knowledge of the vendor products and how to modernize to AWS.
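For the Java option above, here is a minimal sketch of what an Elastic Beanstalk deployment of a replatformed application could look like, assuming the application has already been packaged and uploaded to S3. The application name, bucket, key, and solution stack name are illustrative assumptions, not details from any customer project.

    import boto3

    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

    # Register the packaged, replatformed Java application (already uploaded to S3).
    eb.create_application_version(
        ApplicationName="claims-java-app",
        VersionLabel="v1-replatformed",
        SourceBundle={"S3Bucket": "my-migration-artifacts", "S3Key": "claims-app.war"},
        AutoCreateApplication=True,
    )

    # Create an environment running that version; pick a real stack name for your
    # region from eb.list_available_solution_stacks().
    eb.create_environment(
        ApplicationName="claims-java-app",
        EnvironmentName="claims-java-test",
        VersionLabel="v1-replatformed",
        SolutionStackName="64bit Amazon Linux 2 v4.3.0 running Tomcat 9 Corretto 11",
    )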
Before I move on, I have a customer example for this one as well: a multinational insurance company that had claims processing on the mainframe. They had Java workloads, messaging workloads, they were doing some BPM, and a relational database. Once they migrated all of this onto AWS, they were able to see substantial cost benefits and also gain some agility, so it's a great story with that multinational insurance company.

So we went through the three main patterns that we see customers successfully performing for shutting down the mainframe. Now, if customers want to augment the mainframe, let me go through the patterns we have for that. The first one we see is data analytics. This one is about moving the data over to AWS, and once the data is on AWS, the sky is the limit: there are many services, big data services, even machine learning, that can be used on AWS to really unlock and unleash its full potential. How does it work? You start from the traditional data sources on the mainframe side: it could be relational, hierarchical, or the various types of data files available and used on the mainframe, and that data is replicated to AWS. There are many ways to do so: some customers do it manually, some customers use batch jobs. But if you want to get the best and most relevant information from the mainframe to AWS, try to use real-time replication. Again there are many ways to do that: one way is to use messaging middleware, but we also have partners with great tools that can do data replication using change data capture. Using CDC techniques they're able to get data from a relational database, or even hierarchical databases, or even some indexed data files, and move the data over almost in real time to AWS. Once on AWS, that data can land in a data lake, for instance, or it can be processed right away with Kinesis for immediate analytics, and then it can move on: the data can be put into a data warehouse, or it can be moved into DynamoDB if the customer wants a NoSQL data store, and further processing can be done from there. You can use Data Pipeline to move the data further, do processing with EMR for instance, or even build reporting and dashboards using QuickSight.
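To make that replication flow a bit more concrete, here is a minimal sketch of a Lambda-style consumer, assuming the CDC or messaging tool publishes change records as JSON onto a Kinesis stream; the table name, bucket, and record fields are invented for illustration and are not part of any specific tool's format.

    import base64
    import json
    import boto3

    dynamodb = boto3.resource("dynamodb")
    s3 = boto3.client("s3")

    table = dynamodb.Table("mainframe_reservations")   # assumed to already exist
    RAW_BUCKET = "mainframe-data-lake-raw"             # assumed to already exist

    def handler(event, context):
        """Triggered by Kinesis; each record is a JSON change event from the replication tool."""
        for record in event["Records"]:
            change = json.loads(base64.b64decode(record["kinesis"]["data"]))

            # Keep the raw change record in the data lake for later processing (EMR, QuickSight, etc.).
            s3.put_object(
                Bucket=RAW_BUCKET,
                Key="cdc/" + record["kinesis"]["sequenceNumber"] + ".json",
                Body=json.dumps(change),
            )

            # Maintain the current row image in DynamoDB for low-latency access.
            if change.get("operation") in ("INSERT", "UPDATE"):
                table.put_item(Item=change["after_image"])
            elif change.get("operation") == "DELETE":
                table.delete_item(Key={"reservation_id": change["before_image"]["reservation_id"]})

        return {"processed": len(event["Records"])}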
One customer example for this pattern: we have a national railroad passenger corporation that has a booking system on a mainframe. Before they did the project, they had some challenges understanding what their customers were doing exactly in terms of reservations, what the trends were, et cetera. What they decided to do was move the data over to AWS using messaging middleware, and once on AWS they put the data on S3, copied pieces of it into DynamoDB, and as soon as the data was there they were able to do some nice reporting from it, almost in real time, because the data was pushed right away from the mainframe to AWS. The good thing is that they were not only able to do real-time analytics, but they were also able to do forecasting and see what trends their customers were showing while making their reservations.

The next pattern we have for augmentation is augmentation with new channels. Rather than just doing analytics once the data is on AWS, as soon as the data is there you can of course do new processing and develop new functionality. A common pattern we see customers adopting is that, rather than increasing MIPS on the mainframe, they decide to create the new functionality on AWS and expose the data to new consumers. For example, if you have a mobile application, you could integrate through API Gateway and provide a service that reads the data and makes the mainframe data available to the mobile application. Or if you want to enable a new channel through voice, you could use Alexa and develop an Alexa skill to make that data available; it's still mainframe data, exposed through the Alexa device to your consumers. There are several ways of doing so: some customers choose to use Lambda, some choose to use containers, as the compute that processes the data and exposes it to the consumers. One example I have is a large US commercial bank that wanted to integrate the mainframe data and expose it through the mobile app to their own customers. What they did is they went all serverless: they didn't want to have to manage and administer any database or any compute, so they used Lambda, they used DynamoDB, and they were able to expose the mainframe data to their mobile users in real time. The good thing is that it scaled seamlessly, because it's all serverless they didn't have to manage any scalability constraint, and because it's all serverless they were also able to achieve big cost savings.
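Here is a minimal sketch of what such a serverless read path could look like, assuming the mainframe data has already been replicated into a DynamoDB table and the function sits behind an API Gateway proxy integration; the table, key, and response shape are illustrative, not the bank's actual design.

    import json
    import boto3

    table = boto3.resource("dynamodb").Table("customer_accounts")   # replicated mainframe data

    def handler(event, context):
        # With an API Gateway proxy integration, path parameters arrive here.
        account_id = event["pathParameters"]["accountId"]

        result = table.get_item(Key={"account_id": account_id})
        item = result.get("Item")

        if item is None:
            return {"statusCode": 404, "body": json.dumps({"message": "account not found"})}

        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({
                "accountId": account_id,
                "balance": str(item.get("balance")),   # str() because DynamoDB numbers come back as Decimal
            }),
        }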
The last pattern I want to talk about regarding augmentation is around development and test. As usual, mainframe MIPS are expensive and customers don't want to increase them, but sometimes they still have new needs: they have developers, they have new projects they want to perform, and new applications they want to develop. Rather than creating new environments on the mainframe, they decide to put those environments on AWS. Here the production environment is still on the mainframe, but all the dev and test environments are put on AWS. It starts with the dev environment: customers could use EC2, for example, or they could use Amazon WorkSpaces; they put their IDE on AWS and start coding there, for example an IDE that can do COBOL development. Then they commit the code, for example through AWS CodeCommit, and promote it through the various environments: they could have a test environment, an integration test environment, even a performance test environment on AWS to do advanced testing, and they would use a mainframe emulator there. This actually goes back to the first pattern we talked about, emulator rehosting: we can use the same emulator rehosting solution here, and there is even a famous mainframe manufacturer that has its own mainframe emulator available for dev and test only. Once the code has been promoted through the various dev and test environments, the code is pushed back to the mainframe for the final validation and then for the execution in production. One example of a customer doing this: we have a multinational financial services company based in Southeast Asia that is doing exactly that. They have hundreds of developers, they don't want to maintain all the development environments on the mainframe, and they always have new projects that they want to develop quickly. So rather than putting that on the mainframe, they decided to create one EC2 instance per developer; in every single EC2 instance they have both the IDE and an emulator available to them so that they can do unit testing right there. Then they have other types of EC2 instances for integration testing, the code gets promoted through the various environments on AWS, and then the code is pushed back to the mainframe, where the final performance test is performed, the final recompilation is done, and the code is put into production on the mainframe directly.
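As a small illustration of how such per-developer environments could be stamped out, here is a boto3 sketch that launches one EC2 instance per developer from a pre-built image assumed to contain the IDE and a mainframe emulator; the AMI ID, instance type, and developer names are placeholders.

    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Pre-built image assumed to contain the COBOL IDE and a mainframe emulator.
    DEV_AMI = "ami-0123456789abcdef0"

    def launch_dev_instance(developer):
        """Launch one unit-test environment for a single developer."""
        instance = ec2.create_instances(
            ImageId=DEV_AMI,
            InstanceType="t3.large",
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [
                    {"Key": "Name", "Value": "mainframe-dev-" + developer},
                    {"Key": "Environment", "Value": "unit-test"},
                ],
            }],
        )[0]
        return instance.id

    for dev in ["alice", "bob", "carol"]:
        print(dev, launch_dev_instance(dev))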
So I went through the two families of patterns we have: patterns for shutting down the mainframe, and patterns for augmenting the mainframe. Now what I want to talk about is the approach we recommend customers take when modernizing with AWS. As you can see, a critical success factor for that modernization is the tool. Depending on the tool you choose, you'll be able to deliver certain objectives: you'll be able either to emulate functionality similar to the mainframe, or to do refactoring, et cetera. It's really the tool itself that's going to be a critical success factor in the modernization journey, so as quickly as possible you want to validate that the tool can satisfy the requirements of the mainframe you are trying to modernize. The approach we take always starts from what the customer requirements are; that's the typical Amazon way of operating. We start from the business requirements, the IT strategy requirements, and then the specific mainframe requirements: which technologies do they have deployed on their mainframe? There is no one-size-fits-all, so we have to understand exactly what can be done with that specific mainframe. Some emulators are good with specific types of mainframes but not with others, so we need to know exactly what the mainframe technology is. For that we use a mainframe questionnaire that allows us to deep dive into what the requirements are exactly. Very soon we also need to understand the strategy and identify which of the patterns we've reviewed is applicable to that specific mainframe, and which pattern will allow the customer to satisfy their business objectives. Once the customer has decided on the preferred pattern, we need to identify which tool can support that pattern. For every pattern there are several tools that can be used; they all have their pros and cons, more or less experience, more or less capabilities, different value propositions, so you want to make sure the choice is aligned with what the customer is trying to achieve.

Once the customer has identified the preferred tool, the strong recommendation is to do a complex proof of concept. A complex proof of concept is not an easy proof of concept that just shows some of the functionality. What we want to do here is take what's most complex on the mainframe and prove that the tool and the solution can actually achieve the business objectives and the technical objectives. We take what's most complex: it could be batch, it could be stringent non-functional requirements, it could be lots of dependencies, it could be a specific software version that's not common at all; whatever seems challenging in that transformation, we want to validate it through the complex POC. The reason we do that is that we don't want to see failure in the middle of a large project; we want to identify as soon as possible whether we'll be able to be successful with that specific tool. The complex POC will not only prove the capability from a technical perspective, it will also show the dynamics of the tool: how fast does it go, what is the quality of the deliverables, how satisfied is the customer going to be with the outcome of using that tool. So the complex POC provides a lot of insight: it is good for the customer, to be reassured that the tool can satisfy the objectives; it is good for the tool vendor as well, to give them confidence moving forward; and it is good for everyone involved, so we know the partnership can work well and be successful.

Once the tool has been proven through a complex POC, you can start doing the architecture design. It's very hard to do the architecture design before you know which tool you're going to use, because depending on the tool the workload is going to look different; you don't know exactly what it's going to look like. Once you know exactly which tool is going to be used, you know its constraints, its dynamics, and the options, so you'll be able to start deep diving into the architecture design. Once you know what the architecture is going to look like, then of course you can start defining the activities to build up that architecture, create the plan for the larger project, and move on to delivery. Again, it's very important to understand what the customer is trying to achieve, what the business objectives are, and to validate that the tool is capable of satisfying the business objectives through the complex POC.

We talked a lot about tools. The first requirement for the tool, of course, is to support the technology that's on the mainframe: if the tool cannot support, say, Assembler, or cannot support a specific type of mainframe, then there is no reason to even start considering that tool. The second strong requirement is for the tool to be able to guarantee functional equivalence between the source and the target, and that's going to be proven through the complex POC; the sooner you can do the complex POC, the sooner you'll be confident enough to invest more time in evaluating the tool and then delivering the project with that specific tool. And of course you want the tool to be aligned with the IT strategy: if you want to keep developing in the mainframe language, if you want to go away from that and refactor to a new language, or if you want to keep the mainframe and only augment it, the tool you select needs to be aligned with that strategy.
Here are some of the evaluation criteria that can be used during tool evaluation; they will have to be aligned with your specific requirements and project objectives, but typically migration project speed is pretty important. If the project is just going to take too long, like three to five years, it's going to put the project at risk, so the more the tool can expedite the process, the better. Another criterion is the migration cost per line of code: see how cost efficient the tool is. Look at the complex POC results: once you do the complex POC you'll be able to see in detail exactly what the tool's capabilities are, whether it produces maintainable code, how easy the tool itself is to use, how scalable it is, what its constraints are, et cetera. You also want to look at the target architecture: some tools have constraints and some don't have the same constraints, so you'll be able to see how well it plays with AWS, specifically around the functionality that AWS customers are used to in terms of scalability, security, and integration with the other AWS services; you want to take all of this into account when selecting the tool. Target code maintainability can be another criterion, and availability, performance, and scalability, which go back to the mainframe non-functional requirements, making sure the result matches the expectations there. What's interesting is that some tools have the capability to select different AWS services depending on the quality of service required: if latency is very important, some AWS services are well suited; if scalability or security matters more, different AWS services can be chosen. And finally, compare license costs, which can vary dramatically from one tool to another.

Now I want to go over some of the best practices we identified by looking at successful projects. The number one best practice, again, is the complex proof of concept; I cannot repeat it enough. As soon as possible you want to validate that the tool you're working with, the solution you're looking at, is actually a viable solution. The sooner you can do that complex POC, the sooner you'll be confident enough to invest more time and money in that solution. The second best practice is about maximum automation. I mentioned the 21 million lines of code and the petabytes of data: the more automation you have, the better you'll be positioned to deliver the project quickly and provide short-term results. Maximum automation is not only about handling the applications and the data, it's also about how the project is delivered, around CI/CD pipelines. A lot of modernization projects have to do a lot of regression testing, so you want to make sure all the regression testing is automated as much as possible so that the project can be delivered quickly. The next best practice is to modernize the legacy data store. Once on AWS, it's possible for some of the legacy data stores to keep the same format, but in general it's not a good idea to keep the legacy data format on AWS. The reason is that if you want to exploit the data through other services, it's going to be much harder to access the data, deal with the format, reformat the data, and then leverage it. The recommendation we have here is, rather than keeping the legacy data format, try to modernize it during the project when migrating to AWS. It's typically a minor investment, but it provides a lot of value after the fact, because if you modernize the data format, for example into a relational data store, you're able to access that data not only for the mainframe workload but also for any new need you may have around it.
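As a toy illustration of what modernizing the data format can mean in practice, here is a sketch that turns a fixed-width, copybook-style record into a relational row; the field layout, sample record, and table are invented for the example, and SQLite simply stands in for a relational target.

    import sqlite3

    # (offset, length) per field, as a COBOL copybook might define them.
    LAYOUT = {
        "customer_id": (0, 10),
        "name":        (10, 30),
        "balance":     (40, 9),   # digits only, with two implied decimal places
    }

    def parse_record(line):
        """Turn one fixed-width legacy record into a plain dictionary."""
        row = {field: line[start:start + length].strip()
               for field, (start, length) in LAYOUT.items()}
        row["balance"] = int(row["balance"]) / 100.0   # apply the implied decimal point
        return row

    # Load the converted record into a relational table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (customer_id TEXT, name TEXT, balance REAL)")

    legacy_record = "0000012345" + "JOHN SMITH".ljust(30) + "000012550"
    row = parse_record(legacy_record)
    conn.execute("INSERT INTO customer VALUES (?, ?, ?)",
                 (row["customer_id"], row["name"], row["balance"]))
    print(conn.execute("SELECT * FROM customer").fetchall())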
Next, workload-based modernization: again, there is no one-size-fits-all, and when you modernize to AWS, with all the patterns you've seen, it could be that within one large mainframe there are several workloads, and one workload follows one pattern while another workload follows a different pattern. Also, define your own evaluation factors: I mentioned some factors you could reuse, but of course they have to be aligned with the IT strategy objectives and the business objectives, so you're going to have to define your own factors from a business perspective. Another best practice is vendor-neutral pattern selection: you want to make sure you choose a pattern before you choose a tool, because the pattern is one of the most important aspects to align with your IT strategy. When doing so, you will see that based on the specific mainframe technology, not all patterns are going to be available to you. If you use very common, mainstream technology on the mainframe, it's very likely that all patterns will be available; but if you have specific technical constraints, if you have an unusual mainframe, et cetera, it's possible that there is no solution or that only one or two patterns are applicable. So make sure that you select a pattern, and then move on to selecting your tool, the architecture, and then defining the activities for the larger project. Some customers want to do a business-level modernization while doing the modernization to AWS. What we see is that many of the tools used for modernizing to AWS are very good at doing a quick, equivalent transformation to AWS, but a business-level transformation requires manual intervention, and as soon as you do manual interventions with the amount of code and data that we have, it puts the project at risk. The strong recommendation there is to serialize: first do the technical transformation, make it run on AWS, and only after you've done that transformation start doing business improvements. When you do a modernization to AWS, you not only want to evaluate the migration or modernization cost, but also the target-state cost, and that includes not only the AWS infrastructure cost but also any licensing cost on top of it. And then, leverage system integrators: system integrators are great partners of ours and can help in many ways, from the beginning to the end of a mainframe modernization journey. You want to make sure that the system integrators you are working with, especially before a tool is selected, are knowledgeable about the various patterns available for mainframe modernization to AWS.
You want to make sure they have experience with every single pattern, that they know which tools can be used, and that they can advise the customer as well as possible. Once a tool is selected, the system integrator needs to have expertise in that specific tool: they need experience using that specific tool and modernizing to AWS. Often we actually see groupings of system integrators and partners doing mainframe modernizations together, because expertise or tools may be required from various vendors, so that's where we see a lot of partnership going on.

I'd like to cover some of the resources that can be useful. First, we have a blog post that talks about the patterns and best practices, going into more detail about the patterns I described, so feel free to go and read that blog post. I also want to mention the mainframe section we have within the APN blog, the AWS Partner Network blog; we're trying to keep adding more and more posts from our partners there. You've seen that I was talking a lot about tools; if you want to know which tools work with AWS, feel free to look at the blog, and you'll see not only tools that have performed projects with AWS but also some customer success stories. I also want to spend some time on the quality of service for mainframe workloads on AWS. As I was saying, for mainframes it's very important to have security, availability, scalability, and systems management. I'm not going to go through the entire list of how AWS can support those requirements, but keep in mind that we now have more than 130 services available on AWS, and we support that quality of service. We have plenty of enterprise customers in all industries with stringent requirements, and that allows us to support stringent requirements for enterprise workloads, so we do have the ability to run mainframe workloads on AWS. We also have a great partner community, so it's good to know that we have strong experience with tools that can actually execute, and the tools we work on with our partners are getting more and more optimized to benefit from the many AWS services and make sure the solutions run well on AWS. If you add to that the cost savings and the agility that customers can get, all of these are very good reasons for modernizing to AWS.

Finally, I'd like to give you my call to action. If you see a mainframe out there, the first thing is to try to identify a mainframe workload; there could be lots of workloads on one mainframe, so you want to see which mainframe workload is a good candidate. Second, collect the business and technical needs for that workload: is it a stabilized application, does it need to stay mostly the same, does it need to be refactored, et cetera; see exactly what the technical constraints are, and we can help with all those activities. Then understand the available patterns for that specific workload, then select a tool and start investigating partners that can help with modernizing that specific workload to AWS, and finally do a complex proof of concept. With this presentation you can see that we have the patterns, we have the best practices, we have the successful customer stories, and we have an awesome value proposition with AWS. So let's modernize mainframes together. [Applause]
Info
Channel: Amazon Web Services
Views: 8,219
Keywords: re:Invent 2018, Amazon, AWS re:Invent, GPS-Technical, Global Partner Summit, GPSTEC305
Id: AJ88gY1w9NA
Length: 46min 36sec (2796 seconds)
Published: Wed Nov 28 2018