Developing Your 2022 Cloud Plan Megacast

Captions
Hello and welcome to the MegaCast by ActualTech Media. Today's topic is developing your 2022 cloud plan, and I encourage you to think about it for a moment: what's your plan in 2022 to better utilize cloud computing in your IT organization? That's what this event is all about, and we've got a huge lineup of some of the most innovative cloud technologies represented on the MegaCast today. You'll hear from experts at Rapid7, Zerto with TierPoint, Rubrik, Pure Storage, NetApp, Pliops, Turbonomic, Ping Identity, FireMon, OutSystems, Red Hat, Sophos, and Faction. This is going to be an incredible event, and thank you so much for joining us on the MegaCast.

Before we get started, there are just a few things you should know about the event. My name is David Davis of ActualTech Media, and I'll be serving as the moderator along with my fellow moderator, Mr. Scott Bekker, who will be joining me later in the event. As always on the MegaCast, we've got some incredible prizes; I'll be talking about those in just a moment, as well as the eligibility requirements for the prize drawings. We're all former IT professionals here at ActualTech Media, and we know how tough it can be out there in the world of enterprise IT. We encourage your questions, and we want to help solve your technology challenges here on the MegaCast. While many of you have already said hello and good morning and good afternoon there in the questions pane, we also want your technical questions regarding today's solutions and cloud computing. We even have a best question prize, which I'll be talking about in just a moment, to help encourage those questions. I'll also have some poll questions for you along the way, and we appreciate your participation in those. We also want this to be a social event: you can tweet directly from your audience console, and I'll be monitoring Twitter during the event. If you tweet using the Twitter icon, the hashtag for the MegaCast, which is #ATMMegacast, will be
automatically appended. I also want to call your attention to the handouts tab. It's there that we have a number of resources hand-selected by today's expert presenters, and we encourage you to check those out: we've got special trial links, ebooks, solution briefs, white papers, and more. As I like to say on the MegaCast, we've got a mega lineup of prizes, and today is no different. We've got five Apple iPhone 13s in your choice of color, the awesome new iPhone 13; we'll be giving out five of them on the event today. And as if that wasn't enough, we've got $500 Amazon gift cards given out every 30 minutes, after every presentation. Now, you must be live in attendance to qualify, and you must also meet the ActualTech Media prize terms and conditions, which you can find there in the handouts tab. I'll be announcing the winners live during the event. We also have our best question prize, a $50 Amazon gift card, one for each session on the event today, so that's another Amazon gift card that can be won roughly every 30 minutes. The catch with the best question prize is that we'll be contacting those prize winners via email after the event, because that gives us a chance to review all the questions and determine what we feel is the best question for each session. So we encourage your questions: if you have something on your mind, make sure that you ask it, because you'll be entered into the best question prize drawing. Of course, you must also meet the ActualTech Media prize terms and conditions, which, like I said, are there in the handouts tab. All prize winners have the option to make a donation to selected charities, and you must submit an IRS Form W-9 to ActualTech Media; we reach out to prize winners via email after the event. Over the years, thanks to generous prize winners, we've donated thousands of dollars to the charities that you see here on the screen. If you win a prize and you'd like to donate it to someone less fortunate, we would love to help you do that. The hashtag for today on
Twitter is #ATMMegacast, and like I said, if you tweet from your audience console, that hashtag will be automatically appended. You can follow ActualTech Media on Twitter, and me, your moderator, David M. Davis, as well. Subscribe to the ActualTech Media social channels on YouTube and Facebook, and the 10 on Tech podcast over in the iTunes podcast store. Of course, we post all of our latest and greatest content on LinkedIn, so make sure that you follow us there as well. In your handouts tab you'll find a link to the Gorilla Guide book club. If you haven't been there before, this is a great place to download free, easy-to-read enterprise IT books authored by top industry experts. It's a great way to stay up to date on enterprise technology solutions, and yes, you can read these on your mobile device, your iPad, or your Kindle, and yes, they are all completely free. Another great way to have an opportunity to win some Amazon gift card cash is by referring an IT friend or co-worker to ActualTech Media's online events; you both could win a $300 Amazon gift card. We do these prize drawings each month, and you can use the refer-a-friend link in your handouts tab to do that. You'll also be automatically redirected to the refer-a-friend page at the end of the event. And don't worry, we won't spam your IT friends and co-workers: we'll send them an invitation with a list of upcoming events, and if they don't respond, we'll send them one more reminder, and after that we won't bother them again. So with that, it's now time to kick off today's MegaCast keynote presentation with our friend Mr. Ned Bellavance. He's a blogger, a Pluralsight author, a speaker, and more. You can find Ned over at his website, nedinthecloud.com, and also check out his YouTube channel, where he posts a ton of great content. And don't forget about his awesome cloud-related Pluralsight courses over in the Pluralsight.com video training library. Ned, it's great to have you back on the event; take it away. This is Ned Bellavance, and today
I want to talk to you about five reasons why you need a platform team. Before I get into those five key reasons, I want to back up and define a platform team: what is a platform team, and how would it fit into your organization? I want to say up front that this is really for larger organizations, not the small or medium-sized businesses but the larger, enterprise-type businesses, because a platform team is there to service internal customers. When I say internal customers, I mean either different business units or different application development teams. The platform team is there to service those internal customers. They're not on your application development team, they're not part of that DevOps unit, and they're not on your operations and monitoring team. They are separate and distinct from those: a stand-alone team that is there to provide a platform for your internal customers and your ops team. Now that we understand what a platform team is, why do you need one? Let's start with reason number one: consistent offerings across clouds and on-prem. You may have already built something akin to a platform team when you were all on-prem and using something like VMware. Your VM admins, to a certain degree, were running a platform and offering compute as a service, and if you went big, if you got something like vCloud Director, you even had a self-service component of that platform. But things have changed. You've now added a cloud dimension to all the different services you use, and those cloud platforms all have their own self-service portals and a way for people to just swipe a credit card and start going. What the platform team is there to do is build a platform that goes around the various clouds and services you're using, including what you have on premises. Now, why do you want to do that? That brings us to reason number two: enforce standards for compliance and security. That's right, those pesky old things like standards, compliance, and
security. The larger your organization is, the more important it's going to be to have a certain level of consistency when it comes to standards, best practices, and how you're approaching security and compliance. Your platform team is there to work with the infosec folks and the compliance folks and design the platform, and the templates on that platform, in such a way that they adhere to what security wants to see and what compliance wants to see. Now, there is the possibility, when you design these offerings for your internal customers, that they may not meet the needs and the extreme acceleration of your application development teams, which brings us to number three: investigate new services and features. There's a balancing act the platform team has to do when it comes to interacting with their internal customer, which is the application development team, and the other stakeholders, the folks in infosec and the folks over in compliance. The application development team wants to go fast and use the newest and greatest features, while the infosec and compliance teams want to make sure that you're actually adhering to best practices and standards. The platform team is there to investigate new solutions brought along either by those application development teams or by a vendor that has a new offering, and once that offering has been vetted, it is integrated into the platform and made available to the application development teams to consume. Now, how are they consuming that service or that platform? I'm glad you asked, because that brings us to the next reason: create self-service workflows. The platform team's job is to create an offering, create a platform, and then get out of the way. The application development teams have grown used to the cloud self-service model, where they can stand up the necessary infrastructure, services, and features they need for their applications without having to ask for permission or put in a help desk ticket. If
the platform that the platform team is building requires sending in multiple help desk tickets and interacting with a bunch of other teams, the application developers are just going to swipe that credit card over at AWS and open up a new account, because they've got stuff to do and they can't wait for the platform team or the ops team to get that new feature or service available and make it self-service. So the platform team has to listen to what the application developers are telling them, incorporate those services and features into the platform, and make them self-service, so that all the app developer has to do is write something in infrastructure as code, submit it, and boom, they're using that service. The very last thing I want to add is the customized experience that's expected within that platform: integrate toolsets for a custom experience. Every organization is going to have different needs and want a different portfolio of services and tools, and not every tool fits every organization. In that way, it's up to the platform team to assemble a relevant set of tools that can be consistently used across the board by the different application development teams. Maybe that means using GitHub for source code, using GitHub Actions to create CI/CD pipelines, using some third-party tool to do static code analysis and security introspection, and maybe another tool to hold the artifacts built from the source code repositories. There's a ton of different services out there, some of which you're going to hear about today, and tying them all together in a way that is easy to consume for the end user, for those application development teams, is what the platform team is there to do. I know that was a lot of information to throw at you all at once, so let's review again the five reasons why you need to build a platform team. Number one: consistent offerings across clouds and on-prem. Number two: enforce standards for compliance and
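The self-service, infrastructure-as-code workflow described above, where a developer submits a declarative request and the platform team's vetted catalog gates what gets provisioned, can be sketched in a few lines. All names, services, and limits below are hypothetical illustrations, not any specific platform product:

```python
# Minimal sketch of a platform team's self-service intake: a developer
# submits a declarative request, and it is auto-approved only if every
# requested service has been vetted into the platform catalog.
# Catalog contents and size limits here are invented for illustration.

APPROVED_CATALOG = {
    "postgres": {"max_size_gb": 500},
    "object-storage": {"max_size_gb": 10_000},
    "container-runtime": {"max_size_gb": 100},
}

def validate_request(request: dict) -> list[str]:
    """Return a list of problems; an empty list means auto-approve."""
    problems = []
    for item in request.get("services", []):
        name = item.get("name")
        if name not in APPROVED_CATALOG:
            problems.append(f"{name}: not in the vetted catalog, needs platform review")
        elif item.get("size_gb", 0) > APPROVED_CATALOG[name]["max_size_gb"]:
            problems.append(f"{name}: requested size exceeds the approved limit")
    return problems

# A request mixing an approved service with one the platform team
# has not yet vetted (reason three: investigate new services).
request = {"services": [{"name": "postgres", "size_gb": 100},
                        {"name": "gpu-cluster", "size_gb": 50}]}
print(validate_request(request))
```

The point of the sketch is the shape of the workflow: the developer never files a ticket; the catalog, maintained by the platform team with infosec and compliance, is what says yes or no.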
security. Number three: investigate new services and features. Number four: create self-service workflows. And number five: integrate toolsets for a custom experience. I hope you found this information useful as you work to build your internal platform team to help service the application development teams in your organization. If you'd like to interact with me, you can find me on Twitter at @Ned1313 or at my website, nedinthecloud.com. Thanks for watching, and until next time, stay healthy and stay safe out there. Bye for now. All right, thanks so much, Ned. Always great to hear from you and learn from you; we appreciate your really cool content, so thanks for being on the MegaCast today. I've just brought up a poll question for everyone out there, our first poll of the event. It asks: which of the following would be of value for your organization from a platform team? That's what Ned was talking about, the five reasons to build a platform team. This is a multi-select question, so feel free to select more than one option, and I will share the results so you can see how you stack up against your peers at other IT organizations across the United States and around the world. I'll give you a moment to respond to the poll. If you don't see it, push refresh on your web browser, and 99% of the time that will resolve it. If you haven't answered one of these polls before, you just do it right there in the slides window: select one option, or in this case multiple options, that correspond to you and your company. Actually, it looks like we've got a ton of responses already, so let me go ahead and share the results. The leader here, at 43 percent, was "enforce standards for compliance and security." Security is kind of always a winner here; in fact, that's what our first presenter is going to be talking a lot about. That was followed by
"consistent offerings across clouds and on premises." Thank you to everyone who responded to that. One more poll question before I introduce our first presenter on the MegaCast: what's your time frame for adding new or updating existing cloud solutions at your company? Today you'll be learning about 13 different cloud solutions, so if you find a solution here that you think could help make your IT organization more efficient or more agile, or your company more scalable or more secure, what's your time frame for taking some action? All right, thank you to everyone who responded to the polls; we do appreciate your feedback, and we'll have more polls during the event. With that, it's now time to kick off the MegaCast with our first presenter. I'm excited to introduce you to Dane Grace, technical product manager at Rapid7. Hey Dane, thanks for being on. Hey David, thanks for bringing me on; glad to be here. Absolutely, take it away. All right, so today I'm going to talk about our feature Cloud Configuration Assessment, which is our cloud resource inventory and misconfiguration management tool within InsightVM. Before we dig into that, I wanted to talk about the rough workflow of traditional, or more classic, vulnerability risk management, because really that's what Rapid7 and InsightVM, previously known as Nexpose, were known for: the scan-engine-based network assessment of assets within the premises of a network, identifying vulnerabilities on those types of assets. I want to run through this workflow just to understand how this is a delta, a departure, from what Rapid7 is typically known for. As we all know, in the traditional infrastructure sense, generally speaking, you have something like a host operating system, on top of that is the software installed on it, and then we
identify the vulnerabilities throughout that stack. With Nexpose and InsightVM, for example, our scan engines could tie into the network, identify assets, perform either credentialed assessments of traditional assets or non-credentialed assessments by looking at network ports and so on, identify misconfigurations and vulnerabilities, and then report that back to the user. The underpinning concept here is that there was some sort of host OS for us to log into, or to install the Insight Agent on, which gave us inroads into the software stack we were trying to assess and in which to identify vulnerabilities and misconfigurations. But as we know, especially over the last year and a half with remote work due to COVID, the network boundary has become a lot more porous, or at the very least it's dissolving, and at least from my experience I've seen an uptick in interest in the cloud. Customers I spoke to a year or two ago who were somewhat hesitant to move to the cloud, or for whom it was some far-off project they would undertake, have accelerated that due to this new face of work and remote work. So as we look into that, let's look at how cloud VRM differs from traditional VRM. Really, what it comes down to is the stack of technologies you're using, and how those technologies are stacked, in order to assess them for vulnerabilities, especially from the Rapid7 VRM product side of things. Whereas traditional environments are static, cloud environments can be inherently ephemeral: EC2 instances, things like AWS Fargate, serverless, infrastructure as code. These sort of clear the board for our traditional VRM assessment tools, both from the agent side and from the scan engine side. Moreover, you're not delivering solutions by way of patches; it's not that there's a
vulnerable version of Windows or a vulnerable version of the Java runtime engine; it's that there are misconfigurations in the service you're using, and we need to identify that. There's also the ability to inventory ephemeral assets, EC2 instances and that sort of thing. And with that come specific challenges: because these tools are so powerful, and because they're built to breed agility and to serve nearly any need, there are a billion ways to configure them and probably more ways to misconfigure them, and oftentimes the defaults fail open, which from a security or vulnerability and risk management perspective can be problematic. So if you have a VRM tool, oftentimes you need a feature or a specific tool to address cloud environments, because, as we've just outlined, these are inherently different from traditional assets, not just a laptop but a server rack, or even virtual machine instances, which could be relatively persistent. For us, this is where Cloud Configuration Assessment comes in. We had a preview version of this product that focused solely on AWS. In early-to-mid 2020, Rapid7 acquired DivvyCloud, and we integrated DivvyCloud into the Insight platform to create InsightCloudSec, which also includes functionality brought by our acquisition of the Kubernetes security vendor Alcide. With InsightCloudSec we have this very powerful code base of enterprise-class software that will handle enterprise-grade clouds, which, given our home-built preview version, began to beg the question: what were we going to do with this, and where were we going to go with it? There seemed to be a huge overlap. So we were presented with the opportunity to take a subset of that functionality and expose it in InsightVM. The specific use case here is for teams that have limited bandwidth or
limited budget to tackle their cloud security program, so they can't justify the budget, or they just don't have the time, to make full use of a full-fledged, enterprise-grade solution such as InsightCloudSec; or for an organization that isn't a cloud-native or cloud-first business, where the cloud is just a place they do work, and so really they need a look at the most important assets as well as misconfigurations. With that, Cloud Configuration Assessment is powered by InsightCloudSec, and what it does is give us the ability to inventory resources across AWS, Azure, and GCP (whereas the preview of our Cloud Configuration Assessment feature focused solely on AWS), address the most popular resource types, and assess those against CIS Benchmarks for misconfigurations. That gives us a tool that can take a maturing security program, or a security program with limited budget or bandwidth for cloud security, and get them in the seat and on the road toward a more secure cloud environment, quickly. Just a quick look at what we're covering: we're providing coverage and inventory capabilities for the top 50 resource types across AWS, Microsoft Azure, and GCP; misconfiguration checks across all three; as well as the ability to manage exceptions for specific findings. This is really to ease the burden of operationalization: we understand that organizations have the concept of acceptable risk, and we want to be able to support you there, so we have the ability to do that. Looking at the resource types we cover, this is a non-exhaustive list, and we are planning to add more, but you'll see here this is sort of the greatest-hits list
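The exception-management idea just mentioned, letting an organization mark certain findings as accepted risk so they stop showing up as work to do, reduces to a simple filter. This is a conceptual illustration of the idea only, not InsightVM's actual data model:

```python
# Illustrative sketch of exception handling for misconfiguration
# findings: accepted-risk exceptions suppress matching findings so
# teams only act on what they have not consciously accepted.
# Resource names and check IDs here are invented for illustration.

findings = [
    {"resource": "s3://public-assets", "check": "bucket-public-read"},
    {"resource": "db-prod-1", "check": "admin-login-open"},
]

exceptions = {
    # (resource, check) pairs the organization has accepted as risk
    ("s3://public-assets", "bucket-public-read"),
}

actionable = [f for f in findings
              if (f["resource"], f["check"]) not in exceptions]
print(actionable)  # only the database finding remains actionable
```

Keying exceptions on the (resource, check) pair rather than on the check alone matters: accepting a public bucket for one known website asset should not silence the same check everywhere else.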
of resource types within AWS, GCP, and Azure. Again, our idea here is to hit the services most likely deployed by organizations and quickly get them inventorying those, getting an idea of what is present and what is going on in their environment. Digging into some of the misconfiguration checks: again, we're applying relevant CIS Benchmarks against AWS, Azure, and GCP resources. For us, this would be things like a serverless function not being within a private cloud on AWS, or instances not having endpoint protection installed on Azure; looking at databases, ensuring that a database instance doesn't allow anyone to connect with administrative privileges; or, with identity management, making sure that policies with full administrative privileges are not created, that "*" that gives them all the keys to the kingdom. Again, these are high-impact benchmarks intended to really jump-start users into a more secure cloud environment. So with that, I would like to jump into any questions we might have. Absolutely, yeah, we do have some questions for you, Dane; great presentation, by the way. Let me go ahead and bring up this poll question for everyone out there in the audience, since we want to hear your feedback: what additional information would you like about the Rapid7 solution? Let's see, there are a lot of great questions coming in. They're wanting to know: what's the difference in functionality between InsightVM Cloud Configuration Assessment and Rapid7's InsightCloudSec? That's a great question, and to be honest, that was one we really hit upon, especially when we had the preview of Cloud Configuration Assessment up, because there was this massive overlap. The way I would describe it is that Cloud Configuration Assessment is essentially a CSPM tool, whereas InsightCloudSec takes you beyond just inventory and misconfigurations. There are things like
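The identity-management example above, flagging policies that grant "*" on everything, is concrete enough to sketch as code. The function below checks an IAM-style JSON policy document for that wildcard-admin pattern; it is a conceptual illustration of the kind of check a CIS benchmark encodes, not Rapid7's actual check logic:

```python
# Sketch of a CIS-style identity check: flag a policy document that
# allows Action "*" on Resource "*", i.e. full administrative access.
# The policy structure follows the AWS IAM JSON policy format;
# the check itself is an illustration, not vendor code.

def allows_full_admin(policy: dict) -> bool:
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Both fields may be a single string or a list of strings.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions and "*" in resources:
            return True
    return False

risky = {"Version": "2012-10-17",
         "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}
print(allows_full_admin(risky))  # True: all the keys to the kingdom
```

A scoped policy, say one allowing only `s3:GetObject` on a named bucket, would come back `False`, which is exactly the distinction the benchmark is after.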
policy assessment, not just CIS Benchmarks but things like DISA, SOC 2, and NIST, as well as policy enforcement. InsightCloudSec has a number of automation options, both human-triggered and automatically defined remediation, where they have this concept of bots: if a bot finds a misconfiguration, it'll go and try to close that misconfiguration. There's also support for things like Alibaba Cloud, Oracle Cloud, and so on. The other half of it to know is that CCA only supports resources for which there is a corollary across all three providers. So, among the resources we're supporting, for example, EC2 on AWS corresponds to Virtual Machines on Azure and Compute on Google. But if there are resource types specific to AWS, Azure, or GCP for which there's no corollary across all three providers, we don't support those in CCA, whereas InsightCloudSec would. So really, the set of resource types we support in Cloud Configuration Assessment is a subset of what InsightCloudSec supports. I hope that answers your question. Got it, yeah, thank you for clarifying that. Let's see, another question here: they're asking how container security and Cloud Configuration Assessment differ. Yeah, so this is the question of workloads versus infrastructure, and I think the challenge here is that security teams really think of things in vertical slices. The industry has generally approached cloud security by addressing single tools, or certain classes of tools, whereas the security teams say: no, this is what my applications run on; I need to secure not only the workload but the host that workload is running on, and possibly the host that the host is running on. A classic example of this would be containers, which you could run in multiple fashions within something like AWS,
where, and not that I suggest anyone do this, you could run a Linux instance on EC2, install a Docker host on that Linux VM within EC2, and then run containers on top of that. So really you have four layers of technology to secure: you need to make sure the container image itself doesn't have any software vulnerabilities, you need to make sure the Docker host doesn't have any software vulnerabilities, you need to make sure the underlying Debian or other Linux host doesn't have any software vulnerabilities, and then you need to make sure EC2 is configured in such a way that it is itself secure. With Cloud Configuration Assessment, from a container perspective, where you could have a container running on a container host, we're really focused on the underlying cloud infrastructure, not the workload that goes on top of it. The analogy I use is: if the cloud infrastructure is the road on which a car drives, and the container or virtual machine image is the car on the road, Cloud Configuration Assessment is primarily focused on the road the car is driving on, not on securing the car itself. Got it, okay, yeah, I like that analogy. Another question here: they say, I hear a lot of acronyms, CSPM, CWP; what do these mean, and how do they fit in with Cloud Configuration Assessment? Yeah, so CSPM is cloud security posture management, and CWP is cloud workload protection; you'll also hear vendors call their platforms cloud workload protection platforms. I go back to my analogy: CSPM tools are primarily interested in the road, and CWP vendors are interested in the car. But again, I would like to stress, as in my previous example, security teams have to secure all of it. They don't care whether the road is crumbling or the car is crumbling; their concern is that everything runs, everything's safe,
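Dane's four-layer EC2-plus-Docker example maps naturally onto a small table showing which tool class, CSPM for the "road" or CWP for the "car", assesses each layer. The mapping below is a paraphrase of his description for illustration, not a formal industry taxonomy:

```python
# The four layers from the EC2 + Docker example, annotated with the
# tool class that typically assesses each one: CSPM covers the cloud
# "road", CWP covers the workload "car". This mirrors the speaker's
# analogy and is an illustrative mapping, not an official taxonomy.

layers = [
    ("container image",     "CWP: image/software vulnerabilities"),
    ("Docker host",         "CWP: host software vulnerabilities"),
    ("Linux VM (guest OS)", "CWP: OS software vulnerabilities"),
    ("EC2 configuration",   "CSPM: cloud service misconfiguration"),
]

for layer, coverage in layers:
    print(f"{layer:20} -> {coverage}")
```

Listed this way, the point of the Q&A exchange is visible at a glance: Cloud Configuration Assessment addresses only the bottom row, while the security team is accountable for all four.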
and everything gets to its destination safe and sound, uninjured. This isn't meant to be a shill for either class of vendor, but I think the differentiation there is a bit of marketing. I'm not saying there isn't value in the terms CWP or CSPM, but at the end of the day, the security team is responsible for all of it, generally speaking. I often hear this as I'm talking to customers about Cloud Configuration Assessment or InsightCloudSec; they say: great, you handle the underlying host and infrastructure, but what about the virtual machine or the container running on top of it, as our previous question illustrated? So yes, you can make that differentiation of workload versus infrastructure, but again, security teams are responsible for all of it. Okay, thank you for clarifying that. Another question here: is Cloud Configuration Assessment an additional product, and do InsightVM customers have to buy another license, or how does that work? Yeah, it's a great question. For Cloud Configuration Assessment we're not charging a separate licensing fee, so it's not an add-on module. The way we've worked it out: DivvyCloud, and later InsightCloudSec, have this concept of billable resources, basically instances of compute, database, and cache and search. We have billable resources within those three categories, and we do some calculations to normalize the number of billable resources you've used over a month, and then those will consume InsightVM licenses according to how many billable resources you've used for the month, so it's normalized. The idea there is, one, we didn't want to penalize customers for trying to use another feature within InsightVM, and two, I'd like to call out that those InsightVM licenses aren't special cloud licenses: they're seats for anything. That same license could be used
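The licensing model described here, normalizing billable cloud resources over a month into ordinary InsightVM seats, might look something like the sketch below. The averaging formula is an assumption for illustration; the talk does not specify Rapid7's actual calculation:

```python
# Hypothetical normalization of ephemeral cloud usage into license
# seats: average the number of billable resources observed each day
# of the month, so short-lived instances do not each consume a full
# seat. The exact formula Rapid7 uses is not stated in the talk;
# this daily-average approach is an assumed illustration.

import math

def normalized_seats(daily_resource_counts: list[int]) -> int:
    """Average daily billable resources, rounded up to whole seats."""
    if not daily_resource_counts:
        return 0
    avg = sum(daily_resource_counts) / len(daily_resource_counts)
    return math.ceil(avg)

# 30 days: 10 resources most days, plus a 3-day burst of 40
usage = [10] * 27 + [40] * 3
print(normalized_seats(usage))  # 13 seats, not 40
```

Whatever the real formula, the intent Dane states survives the sketch: a burst of ephemeral instances raises the monthly average a little rather than consuming one seat per instance that ever existed.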
for a laptop, a server, or a cloud resource, just to simplify the billing and make it make more sense for our customers. Okay, next question: do you have to install an agent or configure a scan engine to use Cloud Configuration Assessment, or how does it pull its data from the cloud environments? Yeah, so each cloud provider differs a bit, but you basically use APIs to pull the data out. For example, with AWS you have to configure a role with the proper permissions, and then we use the STS AssumeRole operation in order to inventory your environment. But no, there's no operational burden of having to deploy and update agents or configure scan engines or anything like that. We're going cloud-native for cloud data: we pull via the APIs, and then we process the data on our side. Okay, that makes sense. And then another question here, which we may have touched on already: do you provide security at the firewall level? My follow-up question would be whether they mean firewall configuration rules, which I don't believe we cover. I guess I read it as: is Rapid7 a firewall, or do you have a firewall solution? And no, we don't have a firewall solution. Okay, got it. And then another question here about orchestration: Jamie is asking if there's a way to integrate this with orchestration tools. Yeah, for that I would point you to InsightCloudSec. They have a number of extensions and integrations you could leverage, so that would be a worthy conversation to have around InsightCloudSec. They also have their own internal remediation and automation tools, so I'm sure you can do some sort of one-two punch between InsightCloudSec and whatever orchestration tool you're using. Okay, that makes sense. And then, let's see, Rob's
asking do you offer templates or pre-built policies around cloud configuration to speed up deployment and testing based on compliance standards uh absolutely ics does cloud configuration assessment is only assessing against cis benchmarks but you could absolutely do that in insight cloudsec for things like nist soc 2 fedramp et cetera gdpr right there's a number of built-in policies you can assess against and actually i think there's policy enforcement as well so which is powerful right right right okay um another question here do you provide kubernetes cis or are there cis benchmarks for kubernetes there are uh we don't currently have that as a resource type within cloud configuration assessment that's one of the ones i want to add but in insight vm if you have an underlying host os to scan a scan engine or an agent will identify it and then with our acquisition of alcide insight cloudsec absolutely does that they do policy enforcement within kubernetes at runtime as well as misconfiguration of the kubernetes host itself as well as ci cd integration so you can look at the configuration and make sure that you're not shipping a kubernetes instance that's configured in an insecure way because i'm sure we've all read the horror stories in the news about monero miners getting deployed to misconfigured kubelets and running up incredible compute bills for those poor poor organizations wow yeah that's scary um all right let's see so if folks want to get started with rapid7 what do you recommend what kind of resources do you think would be the best choice i think uh just go to the rapid7 website reach out to someone to take a look we have a link in there to talk to someone and really start talking through your use case rapid7 has quite a few offerings across a broad range of products and really i would like
to make sure that we're addressing your need so i think it's best to have a conversation with one of our representative folks to get a better understanding of what you need so we're not over equipping or under equipping you to meet your needs so i say just talk to someone on our site and we'll get you fitted for the right products absolutely yeah sounds good all right well dain i see 22 questions still there in the electronic queue we didn't have time for maybe you can get back to some of those folks but uh really great i'll stick around and take a look at this awesome thanks for being on dain thank you bye-bye and thank you to rapid7 of course for being on the event today make sure that you check out the handouts tab there you'll find a link to rapid7's cloud infrastructure security solution brief it compares and contrasts insight vm cloud configuration assessment versus insight cloudsec make sure that you download that resource before you go all right and if you haven't answered the poll now is the time to do so i will leave that up while we announce our first prize winner on the megacast we have an amazon 500 gift card going to alex oryx from alabama congratulations alex oryx from alabama and with that it's now time for our next presentation on the cloud megacast i'm excited to bring in now zerto and tierpoint welcome andy fernandez senior product marketing manager at zerto and sarah fowler director of product marketing at tierpoint andy and sarah take it away welcome to today's webinar with zerto an hpe company and tierpoint my name is andy fernandez and i'm the senior manager of product marketing at zerto and i'm also joined by sarah fowler who's the director of product marketing at tierpoint we're very excited to be here we're going to make sure that we monitor the chat throughout this webinar so that we can answer any question you have whether the
question is for sarah or myself uh just put it in the chat and we'll be able to get to it very quickly so let's get to today's topic and that topic is around dr strategies we're getting close to the new year and with that comes a lot of initiatives a lot of planning and a lot of evaluation that needs to be done so we're going to talk today specifically around disaster recovery as a service uh so let's go through the agenda real quick so for today's agenda we're going to talk about what are the actual challenges that every it organization faces what are those challenges specific to dr and then because of those challenges why people choose disaster recovery as a service to be able to solve for that how does zerto deliver that value as a dr solution and how do zerto and tierpoint as a managed service provider provide that disaster recovery as a service experience that organizations are looking for and through the whole presentation we'll make sure to address any questions that you have so let's get started so what we look at here are standard challenges that we're seeing they haven't changed too much in the last couple of years but there are three things that we know the first one we know that data growth continues to skyrocket not only how much of that data is growing and how much of it needs to be managed and stored but where that data is growing and it's in completely disparate locations whether it's iot work from home containerized deployments or saas applications we see that there is a sprawl of data that still needs to be managed so that brings a lot of challenges there number two is cloud platforms whether that is cloud workloads or adopting the cloud as a platform for example for disaster recovery it brings a lot of value but it's also a new technology that some organizations aren't used to yet so there are some challenges there in adoption
being able to succeed and truly uncover the financial benefits of the cloud as well and the third one is obvious ransomware we're seeing an increase in the severity and the volume of these attacks and no organization is safe it's not a matter of if but when now when you combine all three of these core challenges we also realize that organizations and consumers are demanding less and less downtime they want access to their applications as quickly as possible and they do not want to see any disruptions right that applies not only to your customers but your partners and your employees as well so while solving these three challenges you also need to be able to improve your slas but what does that actually look like from a responsibility perspective what this means is there's two ways of categorizing this there are your unplanned activities and your planned activities when we talk about unplanned that's where things like ransomware and natural disasters come in and usually what this means is combining a backup and a dr solution you need to have the ability to move your data elsewhere this comes from infrastructure failures outages hurricanes tornadoes fires you name it but we also consider ransomware kind of a dr scenario as well if you think about certain ransomware attacks that completely disrupt businesses and their critical applications it's clear that it's a dr scenario because they lead to unplanned downtime and data loss but then you have the unplanned events that you can kind of account for throughout the year right which is i know some of my employees are going to delete emails i know there's going to be user errors and i know there's going to be corruptions and i need to have a solution that allows me to quickly select these files and restore them quickly but then there's also the planned section which is i know i have legal requirements i have to retain data for this much and have the
ability to recover this quickly and then you get into a few of the challenges that we've talked about my organization has asked that we move to the cloud that we modernize our infrastructure that we consolidate data centers all these activities are important because they're business building but they bring a lot of challenges as well specifically disruption specifically time resources and labor spent on them as well and then you have your test and dev right this is kind of a generic category but we're talking about your ability to effectively perform patch management your ability to effectively test your dr and backup these are all things that we have to solve for as an organization whether it's in-house it or using a service provider so that's where the challenge or the question of do i do this myself or do i use a managed service provider comes in throughout this section we're going to talk about the challenges and the reasoning as to why people move towards disaster recovery as a service and then we'll get into why zerto is beneficial and then ultimately how tierpoint is able to really package that up in a services approach so if you think about the biggest dr challenges this is data that we're seeing from the idc state of data protection disaster recovery readiness report from 2021 and it showed what organizations are looking at as specific challenges within dr which is we don't have the staff that is knowledgeable or skillful enough to do this in a mature way with the slas that my organization is demanding or maybe we do have the staff but we don't have the time or the resource availability to maintain a secondary site to perform these operations as i mentioned before sla is a very important aspect as well but then there's also things like costs moving to the cloud and these are challenges that organizations have with dr regardless of what that dr practice is whether it's on premises doing it yourself or using the
public cloud or a service provider but this is where disaster recovery as a service brings a lot of value because one it's about infrastructure and two it's about finances what do i mean by that what i mean is we have to think about what are all the activities the time the resources the labor and the budget that i have to dedicate to building my own data center and my own dr practice versus using disaster recovery as a service so when we think about the diy model right this is i manage an it organization and i know that we have to build a secondary site somewhere for me to fail over to for me to be able to do that i have to account for compute from a dr perspective right there's a capacity there i have to account for the storage i need the infrastructure and the hardware to manage this with that comes maintenance contracts i need to have security infrastructure right networking and firewall infrastructure i need to buy a dr solution manage that solution and i also need to have the hypervisor management tool licenses as well i need to have monitoring i need to have support i need to perform patching and maintenance i need to pay for power i need to pay for cooling i need to find a location i need to do all these things for me to have a redundant facility that i can use and that is just costs right but think about the people that i have to hire the people that i have to train the people that have to continually take on administrative overhead in order for me just to have a secondary site to fail over to and perform that operation myself but if i want to use a site that is not a site i own whether that's public cloud or a service provider then i just have to have the right sizing and storage requirements i need to purchase those licenses probably just pay per month and then i use my compute on demand and what that means is essentially i'm offloading all of the capital expenditures and just paying for what i'm using i'm reducing a lot
of the complexity i can use that staff that i had for dr operations for other things that are much more important and conducive to the revenue of the business so not only am i able to offload all these capital expenditures offload the complexity the time the resources the labor but also move towards a consumption financial model and also improve my slas now as i mentioned you have two options you have the option of being able to just use a secondary site i still have the staff that i have but we're just using aws or azure or you have the ability to use a service like tierpoint a managed service provider that will do everything for you where you're able to gain a lot of benefits and when we talked about the dr challenges of i don't have time i don't have the resources and i want to improve my recovery time if those are challenges that you're dealing with disaster recovery as a service makes a lot of sense with draas you get an expert partner people who are trained specifically for this your implementation is exponentially easier than building your own data center you set the recovery objectives that you want and they will meet them it's simple it's secure it's built with security in mind and you're bringing in a mature business continuity disaster recovery organization now as important as that is it's also extremely important that you think about a solution that can power this so when you're thinking about okay i know i want to use a managed service provider but what is the actual dr solution that i want to use it's time to consider zerto the core of what we do is continuous data protection what this means is we use near synchronous replication that essentially is capturing changes every five seconds so instead of using backup instead of using other technologies you use zerto's software-only model that's a scale-out architecture that will work anywhere from the hypervisors that you use to the infrastructure that you want to use so what this means is if an
incident happens with zerto you're able to simply select from thousands of recovery checkpoints a day now i mentioned that zerto gives you thousands of recovery checkpoints a day right it does so with block level replication which means that instead of using a backup solution that gives you one checkpoint a day you have thousands to choose from and they're replication based right so these are non-disruptive events this is happening as your business is conducted these critical applications are being replicated with zerto the replication is combined with journaling technology journaling technology that allows you to simply select a point in time as we see in the ui here throughout the day and you can fail over an entire site you can fail over an application recover an application you can even perform instant file level vm and application restores which brings a tremendous amount of value because it minimizes a lot of the data loss this is what improves your rpo and as we talk about rpo it's extremely important to compare this to disaster recovery or backup solutions that use snapshots right i'm taking a copy of that data once a day and then that is what i'm using as the single point of recovery with zerto that's thousands of recovery checkpoints that you can easily select and use to be able to accelerate and lower your rpos from hours or days to five seconds now rto is just as important so as i mentioned rto right recovery time objectives it's extremely important to know why zerto delivers and is able to accelerate your rtos meaning before after a ransomware attack or after a dr incident it would take me an hour to be able to recover an entire site and get operational with zerto it just takes a couple of minutes and the way that we do this is how we treat our applications every other dr solution every other backup solution either copies or replicates data on a per vm basis right you have an enterprise application that has multiple vms
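the rpo math behind those thousands of checkpoints is easy to see in a toy model before moving on. the sketch below is purely an illustration, not zerto's actual data structures: it just builds a list of checkpoint timestamps (one every five seconds for the journal, one per day for a snapshot-style backup), picks the newest checkpoint before a simulated incident, and measures the data-loss window. all names and numbers here are invented for the example.

```python
from datetime import datetime, timedelta

def latest_checkpoint_before(checkpoints, incident):
    """Pick the newest recovery point that predates the incident."""
    return max(c for c in checkpoints if c < incident)

day_start = datetime(2022, 1, 10)

# journal-style cdp: a recovery checkpoint roughly every 5 seconds all day
cdp_journal = [day_start + timedelta(seconds=5 * i) for i in range(17280)]

# snapshot-style backup: a single recovery point taken at midnight
daily_snapshot = [day_start]

# a simulated ransomware hit mid-afternoon
incident = day_start + timedelta(hours=14, minutes=30, seconds=3)

cdp_rpo = incident - latest_checkpoint_before(cdp_journal, incident)
snap_rpo = incident - latest_checkpoint_before(daily_snapshot, incident)

print(cdp_rpo.total_seconds())   # → 3.0 (seconds of data lost)
print(snap_rpo.total_seconds())  # → 52203.0 (about 14.5 hours lost)
```

the point of the sketch is only that the achievable rpo is bounded by the gap between checkpoints, which is why a five-second journal and a once-a-day snapshot differ by four orders of magnitude.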
what zerto is able to do is replicate all of them at the same point in time you designate what we call a virtual protection group and that virtual protection group then replicates your entire application stack every five seconds this turns your operation of having to restore and recover your enterprise critical applications from days to just a couple of minutes so it's crucial to understand i need a service provider that can deliver these benefits for me but you need to find a service provider that is using the right replication engine that has automation that has orchestration and can help you get operational as quickly as possible you can give the keys to a ferrari to somebody who doesn't know how to drive it and there's no value there so you need a service provider that's able to deliver that so with that i want to transition over to talking about tierpoint so i want to introduce sarah fowler who's the director of product marketing at tierpoint and she's going to go over how combined with zerto tierpoint is able to deliver an exceptional disaster recovery as a service solution so with that let's move over to sarah thank you so much thanks andy i'd like to first take just a couple minutes to give you some background on tierpoint and our relationship with zerto so we've been working with zerto for about a decade now and are one of their top five partners worldwide we were honored this year to be named their 2021 msp of the year at zertocon at tierpoint we have a deep bench of dr expertise with engineers that are dedicated to recovery services they have essentially been living and breathing dr and zerto technologies for years we support over a thousand customers with recovery services and i believe at last count have over 10 000 vms protected with our draas solutions so who is tierpoint tierpoint is a leading data center and managed services provider it's our goal to enable hybrid it solutions and guide our clients on
their path to i.t transformation we have a unique combination of clients facilities solutions and services we have a very large customer base with thousands of clients in a variety of sizes and industries and we really support a broad spectrum across those sizes and types of organizations so from small businesses to mid-market to fortune 500 and in verticals or industries like healthcare technology and financial services we also have one of the largest and most geographically diversified footprints in the nation we have 40 enterprise data centers in 20 markets coast to coast they're all interconnected by a low latency highly secure national network this network of data centers is particularly relevant to our disaster recovery business we can deploy a private cloud in any of the 40 data centers and have eight multi-tenant or shared cloud pods across the u.s as a result we have the ability to offer the geographic diversity our clients are seeking so for example if your business is located in dallas and your production workloads are housed in one of our dallas data centers you can set up your recovery site or your recovery target in any of our other facilities we see a lot of clients maybe take their recovery sites to nashville or to chicago so you can get the diversity you want as part of your dr strategy and it's all within our network of data centers so i won't spend too much time on this slide but i did want to note that we have an extremely comprehensive solutions portfolio so our capabilities cover private multi-tenant hyperscale and hybrid cloud platforms we can offer colocation in all 40 data centers we have disaster recovery services security services and other managed i.t services it's also probably worth mentioning that we have a great professional services team that can help design a cloud roadmap plan a migration or work to establish a broader business continuity plan for an organization all this is important because at tierpoint it's our goal to become that
trusted advisor and partner with our clients to really understand their business needs in order to customize an end-to-end solution that both meets the current requirements but can also evolve to support their future goals and business outcomes okay so circling back to the importance of dr in your 2022 cloud plan with zerto the technology's there they're a proven leader in this space their platform is going to allow you to meet your recovery objectives but don't discount the importance of the human element if we think back to the earlier slide that andy referenced with the top dr challenges the top two are it personnel knowledge and skills and then their time and resource availability unless you have individuals on your teams that specialize in dr or have the time to focus on the necessary recovery related tasks leveraging an msp to support your recovery plan really makes a lot of sense it can be critical in the event of an outage or a disaster so msps can assist in a variety of areas from the planning and design phase to supporting during an actual failover event this slide has a handful of items that i've laid out that sort of demonstrate the value an msp can bring to an organization's recovery strategy and many of these we've heard directly from our own clients first up msps with professional services capabilities can really help you develop a more holistic business continuity plan to support your organization's recovery goals they can do this in a variety of ways sometimes it's through a comprehensive business impact analysis by doing a risk assessment or a deep dive review of the current i.t infrastructure and its ability to handle a disaster msps can also help with the design of the actual recovery systems as we know i.t environments and hybrid architectures are very complex and leveraging the expertise of dr specialists or engineers can help make sure it's done right the first time for many organizations testing is something that gets lost in the
shuffle of the day-to-day working with an msp helps keep that testing regular and up to date with scheduled rehearsals and testing exercises to ensure that everything is working as it should another big component of the planning and consulting support provided by msps is runbooks runbooks are authored by experienced dr engineers in collaboration with the client organization the msp though is very instrumental in updating validating and maintaining that runbook this is particularly important during a failover so the msp's dr engineers will follow that collaboratively developed tested runbook to bring the infrastructure back up i think the key point that i'd like to leave you with is that by leveraging an msp your team has access to the expertise that they need as it relates to disaster recovery and the ability to focus on other things whether that be other strategic projects or during a disaster something other than the recovery related tasks and processes now i'm going to hand it back over to andy to wrap it up thank you sarah for following up and really talking about how tierpoint is able to deliver that draas experience with zerto at the end of the day it's about finding a partner and a solution that allows you to get operational get back up and running as soon as possible post event zerto and tierpoint are able to do that for you entirely with that i want to leave a few actions for you the two assets i want to leave for you are one the state of data protection disaster recovery analyst report and two the idc tech spotlight on continuous data protection take a look at these objective studies look at the data on the difference between dr and backup the challenges that organizations are seeing and the types of solutions you should look for especially when you're looking at protecting your critical applications and how you should evaluate continuous data protection so with that we'll
make sure to answer all of your questions hope you're able to access these items as well we'll post links in the chat as well thank you so much i hope you have an incredible day all right great presentation thank you so much andy and sarah i learned a lot about zerto and tierpoint uh this sounds like a great solution that could help so many companies out there better protect their data and recover their data rapidly you know should some sort of disaster strike so i've just brought up a poll question for everyone out there that says what additional information would you like about the zerto and tierpoint solution and i'll leave this up here for a moment while i announce our next prize winner and i want to remind everyone that questions are rolling in for zerto and tierpoint if you have a question now is the time to post it we don't have time for a live q a discussion here with andy and sarah but they are in the questions pane answering your questions as quickly as they can so thank you for those questions in advance and of course don't forget about our best question prize as well all right excellent questions coming in i'm routing these over to andy and sarah as quickly as i can and so with that it's now time for our next amazon 500 gift card prize announcement this one's going out to pedro hernandez from virginia congratulations pedro hernandez from virginia we have it looks like 11 more amazon 500 gift cards to give out on the event today in addition to our best question prizes and don't forget about our grand prizes we've got the five iphone 13s the next grand prize drawing is going to happen after the next presentation here where we will hear from rubrik all right if you haven't answered the poll now is the time to do it because we're about to move on to our next presenter all right thank you for the questions thank you for the poll responses let's keep the megacast rolling here and with that i'm excited now to introduce you to our next
presenter welcome drew russell technical product manager at rubrik here to talk about rubrik for microsoft 365 drew take it away thanks for carving out time today to jump on our webinar my name is drew russell and i am part of the microsoft security practice here at rubrik i want to spend a few minutes with everybody today talking about microsoft 365 we're going to cover you know what does the high level landscape look like for 365 these days and why that translates into more and more of our customers looking at 365 data protection seriously for probably the first time since they've been a part of that platform and then dig a little bit deeper into the architecture aspects of how we protect 365 itself so let's jump right into it here you know like i said i wanted to cover what does today's microsoft 365 ecosystem look like part of my job is talking to customers across the globe every industry you can think of every customer size you can think of and they're all telling me almost the exact same thing now more than ever their business critical operations are flowing through microsoft 365 and by its nature that means more and more business critical data lives in the 365 ecosystem and if we look at the stats microsoft provides around this it tells the exact same story just this exponential growth that they've seen over the last two years with that only increasing with hybrid work kind of being the new de facto standard that's in place again kind of across the globe here the other thing that almost all of our customers are telling us is they have a very limited last line of defense for this data and that translates to 74 percent of the folks surveyed in this case having no data protection in place for this business critical data in 365
and since there is that limited last line of defense and all that business critical important data living in the ecosystem today's threat groups view microsoft 365 as a gold mine and this is a direct quote from some of our partners over at mandiant if you're not familiar with mandiant they are one of the industry's leading cyber security incident response organizations they're basically who gets called if the fortune 500s of the world are breached they go in and determine how that breach happened and then help the customer get back up and running so these are the folks with the boots on the ground analyzing real-world attack situations and across the board they're seeing a plethora of money being spent by these threat groups on understanding microsoft 365 and then simultaneously how to attack it that's one of the reasons why it's very easy to go out and find headline after headline of these external attacks happening across the board it's not just one organization experiencing it it's not just two 70 plus percent of organizations have experienced some kind of attack happening in the 365 ecosystem the other really important thing to remember here in the overall context of microsoft 365 data protection is it's not just these external threats you have to keep in mind it's also these internal situations that can happen whether it is a rogue admin going in and causing havoc or like this example from kpmg where they wanted to apply a retention policy and if you're not familiar with those we'll kind of cover those next they applied that retention policy to 145 000 users accidentally and with a click of a button all of their chat history was deleted so different situations like that really have to be combined with these external threats that we're seeing in the ecosystem today to really understand what the overall threat
landscape looks like so the next thing you have to look at here is how does microsoft look at data protection for microsoft 365 specifically zero trust data security if you're not familiar with that term in essence it means never trust always verify and this slide is in essence a copy and paste directly from the microsoft documentation where they are focused around complying with legal and regulatory standards and the way they have that categorized is into these four broad areas and each of these areas has a multitude of tools underneath it to help achieve these goals so let's start with the left side here and work our way through the flow so the first step here is pretty obvious right you have to understand what data lives in 365 in order to protect it once you understand that data you can start doing things like applying different protection policies to encrypt it and to restrict access once that's in place the next step is how do i prevent that data from being leaked accidentally or how do i detect different risky behavior on top of that data and then the last section here is govern your data where the main goal is to retain the information that you need and then delete the rest that way it can't be affected in any kind of situation and this is really the main area i want to focus on here this is essentially where we see our customers focused in terms of a core data protection aspect in terms of backup and recovery so digging into the govern your data section right this is again for compliance and regulatory requirements not for backup and recovery and this is by design from microsoft the way they have this broken up is into two kind of broad categories here so information governance tools and records management tools information governance tools are kind of the broadest area here this is your classic business data that would live in the ecosystem powerpoints pdfs you know everything
that you use to run the organization on a day in day out basis the main goal with tools under this category is keep what you need and delete what you don't like i said earlier the concept here being if the data is not present it can't be accessed accidentally it can't be deleted by a rogue admin or some kind of threat actor that got access to the environment the next section here for records management is really focused around actually let me go back here so the main tool that customers use in this information governance section is retention policies the vast majority of you are familiar with these they are kind of on by default in essence what you do is create a policy that says keep your data for x period of time and after that time period passes delete that data from your ecosystem and this is all automatically handled on the back end by microsoft so really a set and forget tool all right so going back to records management here these are very specific use cases kind of the diamonds here in the graphic and the tools around this are very heavily centric around different legal and compliance regulations and then other kind of very critical business data that can't be affected in some shape or form the main tool we see customers using in this case is litigation hold pretty self-explanatory you know what litigation hold is you mark some piece of data as being on hold once that data is marked if anything is deleted or modified a copy is again automatically created on the back end in the 365 ecosystem so it can be accessed later on if needed and this is either through their extensive e-discovery suite that comes as part of 365 or exporting it out to the corporate lawyers involved here all right so now that we have kind of a high level understanding of what these native tools are the question that's really driving all of the conversations today around
microsoft 365 protection is what would you do if you're hit by a ransomware attack or what would you do if you experienced a rogue admin or somebody on your internal teams accidentally deleting the business critical files in your ecosystem and these questions are a top-down question usually so they're being asked by the c-suite they're being asked by the board of directors and then probably just as importantly or maybe even more importantly being asked by the cyber security insurance auditors if you guys haven't had the uh i'll call it the fun of dealing with cyber security insurance the short version here is it is very difficult to get these days because so many payouts have been happening the rates are just astronomical in this case so if your company's looking at this insurance or trying to decrease your premiums the auditors are going through and asking these questions you know it's do you have this business critical data living in 365 and if your azure environment is compromised or your microsoft 365 environment is compromised what would you do in these cases so in order to answer that question right you kind of have to understand what this threat landscape looks like and how these native tools enable you to recover from there so what we've seen happening is it's your core perimeter being breached it's not 365 itself they have a very very solid platform in place in terms of their security mechanisms but that 365 environment is still connected to the rest of your ecosystem so threat actors can get into your ecosystem through any number of means once they have that we've seen them work their way up the stack to where they can get global admin access at that point they obviously have the keys to the kingdom so they can do different things like exfiltrate the data off of your platform send it to their
command and control servers and then delete that data once that happens you'll get the ransomware note and then you have to figure out how to get that data back and get it back quickly so the two options you have in place are to restore from those retention policies that we talked about earlier in this case though since they're not designed for backup and recovery they can actually be used as an attack surface to your ecosystem so like we mentioned earlier from mandiant and their take on today's threat actors knowing the microsoft ecosystem extremely well they're familiar with these retention policies they went through and created a new one that said find any data in the ecosystem that is older than a day old and automatically delete it and just like that everything is erased from the ecosystem the other option here is that litigation hold we talked about at the end of the day though litigation hold is not designed for mass protection if you kind of think back to the image we had there was that small subset of those diamonds where microsoft would apply these litigation holds at the same time litigation hold by its nature also nullifies some of the other aspects of these native tools it's very easy to talk about retention policies here again where the goal is to remove the data from your ecosystem that way it can't be compromised in this case litigation hold kind of negates that the data still lives in your environment in this case so basically that leaves you with no good options and you're forced to pay that ransom to have those threat actors send you back your data to get back up and running the other really important part to remember here is it's not just your 365 ecosystem that's being attacked in this case it's the rest of your cloud workloads it's the rest of your data center simultaneously being breached and attacked
here so how does rubrik kind of come into play here and how can we help in these situations so the most important aspect of this is being able to create this logical air gap between your microsoft environment and then the backups that are stored in rubrik so this is a full saas offering we'll go into a little bit of the details here in a second but in essence we reach out to microsoft 365 pull the data back into rubrik and then have that fully segmented from the environment that way if threat actors do get those global admin credentials they don't have access to your backups they can't delete those so they'll go through and delete your data but in this case once you get the ransomware note if you have rubrik in place you're able to actually get that data back up and running since it's in a fully isolated cloud vault on the rubrik side at the same time it's not just being able to recover your microsoft 365 environment in this case it's being able to unify that protection across all your workloads unify those restoration jobs that way you can ensure your entire environment is back up and running a really important part to note here too is this isn't a rubrik versus microsoft story and in fact it's quite literally the exact opposite microsoft has an amazing suite of tools focused around compliance our customers use them on a regular basis and more importantly they use them in tandem with what rubrik brings to the table so the overall recommendation right is to leverage everything microsoft has in place identify your data restrict the access have the encryption in place but what rubrik does is allow you to protect from those external and internal threats that we mentioned earlier and do it in a way that is fully automated from a management perspective and then being able to get as granular as needed with the restoration jobs and then do that in bulk you know so
worst case scenario happens your entire 365 ecosystem is blown away from a data perspective and you can restore all the business critical data that you need to get back up and running so rubrik allows you to do that in an easy to manage fashion and that's exactly why probably a month and a half two months ago we announced that microsoft took an equity stake in rubrik specifically around a core joint mission statement up on the left here all around being able to take the concept of zero trust data security and apply it to each of our joint customers and really all of that is being driven around 2021's main imperative of ransomware and like we talked about earlier it's a c-suite level conversation it's a board level conversation top of mind across the board so this investment from microsoft allows us to do a whole bunch of things on the back end in terms of ensuring we have adequate protection in place leveraging their engineering resources their product management teams too to make sure the joint products we have in place meet their standards meet the rubrik standards and allow our customers to have a holistic data protection solution for any kind of internal or external threat scenarios so the fun part here how does it work so first and foremost security is top of mind when we designed the solution so if we work our way through the stack here you start with being able to authenticate into your microsoft 365 ecosystem all of this is done through what microsoft calls modern authentication it's their implementation of oauth what that means is we never have access to your credentials microsoft gives us a one-time access token and then from there we'll create what are called enterprise applications to enable our api access into the 365 ecosystem and then those enterprise applications are scoped specifically to the api permissions we need
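to make that scoping idea concrete, here is a minimal illustrative sketch of a least-privilege check on an enterprise application's granted permissions. the scope names are examples of microsoft graph application permissions and the function is hypothetical, not rubrik's actual code:

```python
# illustrative least-privilege check for an enterprise application's
# granted microsoft graph permissions (example scope names only --
# not rubrik's real permission set)

REQUIRED_SCOPES = {"Sites.Read.All", "Files.Read.All", "Mail.Read"}

def excess_scopes(granted):
    """return any granted permissions that go beyond the required set"""
    return sorted(set(granted) - REQUIRED_SCOPES)

# an app that was also granted a write permission gets flagged
flagged = excess_scopes(["Sites.Read.All", "Files.Read.All",
                         "Mail.Read", "Mail.ReadWrite"])
```

the idea is the same one the speaker describes: the application carries only the api permissions the backup workload needs, and anything broader is a finding.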
once we have those in place we'll reach out to 365 like i said ingest that data into the rubrik platform all fully encrypted in flight and then land that data in a rubrik cloud vault that is also fully encrypted the important aspect here is that this is fully isolated from your environment if your 365 or azure or really your main environment is compromised these backups are secure in a rubrik controlled instance once that initial configuration is in place you can take our sla policies and apply those to each of the high-level applications so think of applying an sla policy to onedrive to sharepoint to exchange that way you have a base level of protection in place and i guess to take a step back here too if you're not familiar with the sla policies what they allow you to do is define your business use case and logic for backups take a backup every four hours keep it for 30 days take a backup every month keep that for a year and it's all automatically handled on the back end you can also get down to the individual user level and get as granular as you want with these sla policies that way you can ensure your vips your top priority users have a more frequent backup frequency than maybe the intern who started yesterday when talking about microsoft 365 protection the most important aspect to remember here is that it's an extremely complex api system that we have to work with if i relate this back to the on-prem world and taking something like a vmware snapshot we'll call the specific api for the snapshot and then ingest the data into rubrik there's no equivalent of a snapshot api in microsoft 365.
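as a rough illustration of the retention half of an sla policy described above ("take a backup every four hours, keep it for 30 days"), the expiry logic could be sketched like this. a simplified assumption of how such a policy might be evaluated, not rubrik's actual implementation:

```python
from datetime import datetime, timedelta

# simplified sketch of sla-style retention: snapshots older than the
# retention window are candidates for expiry (illustrative only)

def expired(snapshot_times, now, keep_days=30):
    """return the snapshots that have aged out of the retention window"""
    cutoff = now - timedelta(days=keep_days)
    return [t for t in snapshot_times if t < cutoff]

now = datetime(2022, 2, 1)
snaps = [now - timedelta(days=d) for d in (1, 15, 29, 31, 45)]
stale = expired(snaps, now)  # the 31- and 45-day-old snapshots
```

the real system layers multiple frequencies and retention tiers per policy, but the shape of the decision is the same: compare each snapshot's age against the window the policy defines.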
so there are four or five different apis that we leverage on the back end the main one being microsoft graph microsoft graph allows us to talk to individual applications through a single endpoint then each of those individual applications also has a specific api dedicated to it so there's a onedrive api an exchange api etc and each of these apis has its pros and cons so which one works best in situation a which one works best in situation b and at the same time what happens if these apis fail they're used by something like 300 million users across the globe so it's not a matter of if the api will fail it's a matter of when so how do we automatically handle those failures transparently and ensure that we are always ingesting your data according to the sla you define as part of your protection policy the other really important part here is the performance of these microsoft apis since they are globally available microsoft does an amazing job of well an amazing job is not the best way to describe it but they very very heavily throttle these apis for a single connection point into 365 we can hit those throttling limits almost instantaneously which is exactly why our approach is to load balance our connections into the 365 ecosystem across multiple enterprise applications and if you remember the enterprise applications are how we have api access into 365.
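the load-balancing-with-failover pattern described above can be sketched in a few lines. the client callables here are hypothetical stand-ins, not the real microsoft graph sdk, and the rotation strategy is an illustrative simplification:

```python
import itertools

# illustrative sketch: spread api calls across several "enterprise
# application" connections and rotate away from throttled ones
# (hypothetical clients, not the real graph sdk)

class Throttled(Exception):
    """stands in for an http 429 throttling response"""

def fetch(clients, request, max_attempts=5):
    """try each connection in turn until one succeeds"""
    pool = itertools.cycle(clients)
    for _ in range(max_attempts):
        client = next(pool)
        try:
            return client(request)
        except Throttled:
            continue  # this connection hit its limit, move to the next
    raise RuntimeError("all connections throttled")

def throttled_app(_req):
    raise Throttled

# the first connection is rate-limited, the second serves the request
result = fetch([throttled_app, lambda req: "ok:" + req], "mail/messages")
```

a production version would also honor retry-after hints and back off, but the core idea is the one in the talk: more scoped connections into 365 means more aggregate throughput before throttling bites.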
so we're able to achieve an industry leading level of performance to get your data back as quickly as possible here the other part to keep in mind here too is the underlying architecture plays a pretty important role in this as well so under the covers we're using the azure kubernetes service to dynamically scale up our compute workloads to meet your demand so for example if you're taking a snapshot we'll create a new container or if you're doing a new restore job create another container and we can automatically scale those up to whatever limits we need to ensure we're meeting your performance and reliability expectations and at the same time all of this applies to the rest of your ecosystem whether it is in azure or in your data center with vsphere you can take these same sla policies and apply them to those workloads as well and then utilize the rubrik polaris interface here which is our saas based application to manage the recovery of those workloads in the event of an attack so that's all i really wanted to cover today the main goal was to make sure you have an understanding of what today's ecosystem looks like for 365 both in terms of its overall usage and what the threat landscape looks like and how rubrik and microsoft have joined together to offer a solution that allows you to take advantage of the best of both worlds the native tools from microsoft that are focused around compliance and regulatory issues and then the core backup and recovery that rubrik offers to protect you from internal and external threats the other thing too is i know we talked about doing a demo here we actually have a new guided walkthrough that we're going to send out as a leave behind you can get hands-on access to the polaris interface and walk through a demo yourself in my opinion it's way easier to understand that ecosystem by
clicking through and getting actual hands-on time so we will send that out here after the webinar as well so appreciate everybody's time today and i am in the chat here if you guys have any questions so please feel free to throw those out all right great presentation thank you so much drew really appreciate that i've just brought up the poll question for everyone out there that says what additional information would you like about the rubrik solution and of course we appreciate your feedback on that we've got a ton of great questions coming in here for drew over at rubrik he is in the chat electronically responding to those as quickly as he can we don't have time i'm afraid for live q a but we'll be doing that after our next presentation so keep your questions coming in of course don't forget about our best question prize as well and the poll on the screen so thank you everyone it's now time for our next set of prize winners and we have another 500 dollar amazon gift card this one going to christian polpart from vermont congratulations christian polpart from vermont and our first grand prize for an iphone 13 in your choice of color goes out to chris richardson from new jersey congratulations chris richardson from new jersey all right i also want to remind everyone there in the handouts tab you'll find a link to rubrik.com they've got a lot of great resources on their website of course they have their new rubrik ransomware recovery warranty you want to check out which is pretty cool they have an upcoming data security spotlight event happening next week you can register for right there on the home page to learn more about rubrik and of course there are resources from all of today's presenters there in the handouts tab as well all right if you haven't answered the poll question now is the time to do so because we're about to move on to our next presentation and with that i'm excited to introduce david staman staff field solutions architect at pure storage david i've seen a lot of your
great content over on twitter it's great to have you here on the megacast today take it away welcome to the cloud megacast we're going to talk about how pure storage can simplify your hybrid and multi-cloud solutions my name is david staman a staff field solutions architect here at pure storage and so when we think about the cloud it's a different location but a lot of the similar problems that you once had on premises there are a lot of choices out there whether you're using azure or aws whether you're using managed disks or unmanaged disks do you pick standard ssds ultra ssds premium ssds right a lot of considerations there do you want cost do you want performance do you want availability on the other hand with aws it's a similar thing right gp2 and gp3 disks very popular as general purpose disks but not necessarily performant io1 io2 and io2 block express are performant but expensive instance store works but it's ephemeral meaning that as the machines reboot that storage goes away and as these cloud providers kind of change all the time it's hard to implement the right option at the right time and there are also a lot of considerations when we think about the cloud the idea here is that you kind of want to have a seamless way to orchestrate and have bi-directional mobility and with pure storage we can do that so whether you're using your storage on-premises in an msp or hosted environment or even a colo or whether you're using the cloud you're going to have a common shared set of data services and the reason why this is important is because on-premises and your cloud data are very different when you have on-premises data you have these highly reliable arrays with built-in snapshots and dr they're very efficient with thin provisioning deduplication compression and really from an api perspective you can use it however you want when you go to the cloud well there are a lot of other considerations things aren't as resilient there's lower availability lower durability you can pay for it
to be global to be replicated but durability comes at a cost efficiency is also a problem because you pay for everything you provision whether you're using that capacity or not and you tend to have to over provision just to be able to get performance so for example in aws you want a 100 gig disk perfect you deploy a 100 gig disk but you might get 100 iops if you want to get the full performance of that disk it's going to be a 5 terabyte disk and you get that 16 000 iops but what happens if you don't need 5 terabytes well you're paying for it whether you like it or not and that's where cloud block store comes in and can help with that and then there's a different set of apis and they're all based off the different cloud providers it's not a common set of shared data services and so what can pure storage do well what we've done is we've taken our enterprise grade flasharray and delivered it natively in the cloud so whether you are in aws or azure right you have an environment that is going to give you the high reliability and efficiency hybrid mobility and protection and a consistent set of apis and automation so you don't have to learn a completely new tool set it also allows you to use the same pure1 data storage management to kind of see insight into both your on-premises and cloud arrays and then it actually runs the exact same purity software so there really are no differences in the features you get on premises or in the cloud and the idea here is that we bring this evergreen model to the public cloud when you think about the storage right you no longer have to worry about the azure managed disks or the amazon managed disks right in this case you just get an array and each array has pre-built performance and capacity characteristics and so features are always on right no raid no storage pools deduplication compression data at rest encryption is always on and you get all of our best-in-class snapshots with no performance impact
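the 5 terabyte figure above follows from aws's published gp2 baseline of 3 iops per provisioned gib, capped at 16,000 iops. the formula is aws's; the code is just back-of-the-envelope arithmetic:

```python
# back-of-the-envelope for the gp2 over-provisioning example: aws
# documents a baseline of 3 iops per provisioned gib for gp2 volumes,
# capped at 16,000 iops

def gib_for_target_iops(target_iops, iops_per_gib=3):
    # ceiling division: smallest size whose baseline meets the target
    return -(-target_iops // iops_per_gib)

needed_gib = gib_for_target_iops(16_000)  # roughly 5.3 terabytes
```

so to hit the 16,000 iops ceiling you have to provision about 5,334 gib of capacity whether you need the space or not, which is exactly the inefficiency cloud block store is positioned against.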
and so the idea here is that you leave all the hard work to us let us figure out what are the best underlying components to use and over time with our evergreen model you'll be able to adopt those new solutions as well and then just provision luns right provision luns as small or as large as you want no difference from a performance standpoint and no difference from a capacity standpoint so it really makes that efficiency story great and really the array is uniquely simple so really kind of four use cases we like to target the ability to migrate to the cloud but also what happens if you need to migrate out of the cloud disaster recovery to the cloud is also a very popular topic and also dev test or hybrid analytics and then what happens if you're already in the cloud today and you need some levels of high availability when we think about the lift and shift migration well there's a couple options right you might utilize azure site recovery or cloudendure to kind of move that os over but the idea here is use purity replication to replicate that data it then is going to be a native disk on cloud block store and then through iscsi you'll connect it up to your machines and what this allows you to do is have that seamless mobility have the ability to actually migrate your data into the cloud in a lower time frame because we'll actually consume less bandwidth because we're preserving that in-line deduplication and compression and then again if you ever need to replicate back on premises you don't need to worry about converting all that data because it's already in that native format the same thing applies to hybrid disaster recovery right we have the ability to natively replicate up to the cloud and say you're doing a warm dr target you might have those windows or those linux machines already stood up maybe connected with an empty volume when the time comes for a dr event all you have to do is take that replicated snapshot instantly overwrite that disk and you're up and running
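the bandwidth savings from preserving inline data reduction during replication reduce to simple arithmetic. the 4:1 ratio here is illustrative (it matches the average cited in the session's q&a), and the function is just a sketch of the math, not pure's software:

```python
# rough arithmetic for replication bandwidth with inline data
# reduction; the 4:1 ratio is an illustrative assumption

def tib_transferred(logical_tib, reduction_ratio, already_on_target_tib=0.0):
    """estimate unique data actually sent over the wire"""
    unique = max(logical_tib - already_on_target_tib, 0.0)
    return unique / reduction_ratio

failback_full = tib_transferred(20, 4)     # 20 tib at 4:1 -> 5 tib sent
failback_warm = tib_transferred(20, 4, 8)  # 8 tib already on the target
```

this is why the egress story matters: failing a 20 terabyte application back on premises moves a quarter of the data (or less if some of it already exists on the target), and cloud egress is billed per byte.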
then what you can do is reverse your replication back on premises and this is kind of important because the cloud is very expensive especially from an egress point of view and so if you have an application that you failed over to the cloud let's say it's 20 terabytes you want to go ahead and fail that back on premises traditionally you would have to replicate 20 terabytes again but with our inline deduplication and compression say you're getting four to one well maybe you're only replicating five terabytes instead of 20. but what happens if some of that data already exists on premises then that's going to be even lower so we really make those levels of efficiency better the next idea here is intracloud disaster recovery what happens if you are already in the cloud so say aws or azure and you want to fail over to another availability zone or another region what this allows you to do is stand up another cloud block store and then go ahead and do replication if you're within the same region you can do our activecluster or activedr replication that can be near zero rpo if you're going within that same environment or maybe across regions you can use our asynchronous replication and that is our periodic snapshot style of replication to have those point in time snapshots on another array so that just in case one region or one availability zone becomes unavailable you'll have your workloads and your data in another environment to get that up and running another great way is utilizing cloudsnap cloudsnap allows you to natively move snapshots to the cloud or if you're already in the cloud move them to another environment and so whether you're using a flasharray or pure cloud block store we have the ability to upload the snapshots to azure blob or aws s3 and sometimes it might be for long-term retention right snapshots you just want somewhere else but what happens if you kind of want to do a cold disaster recovery you have a slightly higher rpo and rto right it's not
that really low time frame what we have the ability to do is utilize cloudsnap so whether again you're utilizing a flasharray or cloud block store you would offload those snapshots to that s3 or azure blob target at a minimal interval of every four hours what this allows you to do is either pull those snapshots back down to your on-premises environment and utilize those or what you could do is on-demand stand up cloud block store pull the data you need out of that and fail over to the cloud then when the time comes you can either copy those snapshots back to that bucket or directly replicate back to your on-premises environment so it really provides you the flexibility to kind of pick and choose what solution you use and when really depending on what your timelines are another great way to kind of think about this is a feature we have in our aws based cloud block store and this is called hibernate and so what this allows you to do is we use s3 for our persistent data and what we'll do is we will kind of flush all the data to s3 terminate the ephemeral drives and shut down the controller instances so this will greatly reduce the resource consumption of our storage array and this allows you to kind of say okay well i've done a synchronization i'm going to shut it down right maybe power it on once a day a couple times a day once a week synchronize it then shut it back down again this is also a really great fit for dev test right you may not always need your test environment running you're not doing things on weekends so you have the ability to kind of terminate that array and really only use it when you need it the next really powerful use cases are kind of our snapshot style of clones right so think about dev test or we're going to talk about also hybrid analytics when you think about the cloud right snapshots and clones are very costly or non-existent especially in azure right in aws snapshots and clones are stored in s3 there's
kind of some write penalties it looks like they're instant but there's still data being moved on the back end when you need to revert that snapshot again it's going to be a pointer file it's going to look like it's instant but there are going to be performance penalties as you're reading from them on the other side whether you're looking at premium ssds or ultra ssds right with premium ssds you can do snapshots with ultra ssds there are no snapshots so what happens with that right the idea here with cloud block store is all of our snapshots are metadata based so you can really create as many snapshots as you want at no cost the only data that you're actually charged for is the unique data that would consume capacity and so that's very efficient and so what this allows you to do is either have data sets that you replicate from on premises to the cloud or data sets you already have in the cloud create one two twenty a hundred snapshots we don't really care we're going to dedupe and compress all that data they're going to be free they're going to be instant you can kind of build your retention policies around them so it really makes this environment a very great use case for this well what happens if you're interested in learning more on how to get your hands on cloud block store well the great thing is it's in the marketplace so whether you're in the azure or the aws marketplace we have two listings one will be for a subscription one will be for a product deployment the subscription will provide you the license key you can also get that through our pure as a service what the deployment allows you to do is pre-fill the form and it's going to go out and either deploy an azure resource manager template or an aws cloudformation template for our solution you don't have to manage each individual component and then this will automatically provision all the resources that comprise cbs and so
what happens if you don't want to do that the manual way well we have a lot of ways to do automation right you can automate with terraform we have a terraform provider that can go out and deploy your cloud block store for you it can also handle the decommission we've also created a solution which we call the quick launch and so what this allows you to do as it ties in with our terraform provider is quickly deploy an environment needed for cloud block store and all the needed prerequisites whether it's in aws or azure so a really great way to build a demo environment do a poc or just get your hands on it and if you really want to learn more right the idea here is that if you have any kind of cloud commits well you can go through the azure marketplace go through the aws marketplace get that license key if you want long-term if you want an ideal method for kind of looking at a unified model or more of a flexible option definitely take a look at our pure as a service this is ideal for customers who are looking at that unified subscription model and if you're interested in learning more about pure cloud block store take a look and scan this qr code it'll link to our support site where we have a whole bunch of detailed information webinars walkthroughs and even more so you can learn a lot more about what the solution can entail and then don't forget on december 8th we will introduce a new level of our flasharray family so please join us at 8 a.m pacific time 11 a.m eastern time and register today at www.purestorage.com launch.html and if you want a quick launch go ahead and scan that qr code so thank you very much and let's stand by for some quick q a yeah great presentation uh we just brought up a poll question for everyone out there in the audience we encourage your participation on that poll we want to get your feedback i learned a lot about what you guys are doing there at pure lots of innovations and even a big new announcement sounds
like happening next week so lots of excitement are you ready for some questions david yeah let's go for it all right yeah we've got a lot of great questions coming in from the audience if you have a question here for pure now's the time to get it in i'll just start with this one here they're asking what kind of data reduction can you get from pure cloud block store compared to native amazon or azure disks so that's a great question so one of the things about the cloud is that there is no native data reduction and so depending on your solution right we have some proven data reduction whether you're using databases whether you're using standard virtual server infrastructure whether you're doing snapshots and clones on average we see about four to one for those general workloads but it definitely varies if you're a current pure customer today whatever data reduction you're seeing on your on-premises array will be the exact same thing that you see in the cloud we actually tend to see a higher overall data reduction in the cloud just because people are trying to make their storage more efficient there and kind of maintain multiple copies of the exact same data set yeah that's a great point that a lot of folks might not think about you know you store data in the cloud and there are no native data reduction or deduplication kind of features going on there so a great point great feature another question here they want to know how does pure cloud block store help with cross availability zone ha yeah so it depends right so we can think about this in a couple different scenarios depending on the cloud so if you think about aws right there may not be native replication you're pretty much taking a snapshot and storing that in an s3 bucket in another region whereas with our purity solution one of the things is that we are highly available within an availability zone but the other aspect of that is we can use our native replication to deploy
another cloud block store in another availability zone and either utilize our asynchronous replication or we can do our semi-synchronous replication which will be almost a near zero rpo or we can actually do our activecluster which is our synchronous replication and actually achieve a zero rpo zero rto solution across availability zones and that's available in both aws as well as azure very nice very nice a lot of functionality there to make sure the data is always available um next question does pure enable data mobility from on-prem to cloud yeah and so it's definitely a really great use case and so the idea here is that as customers are looking at moving to the cloud sometimes the goal is that they want to get there sooner rather than later and so they might just do a lift and shift migration compared to a full re-architecture and so in that case using our replication we can actually replicate existing data sets from an on-premises environment directly to cloud block store and then customers can either redeploy their images say it's a database server right they might already have a custom image up with sql installed and then gain access to their data and the great thing is it's in a native format there's no conversion required for that and if the customer ever needed to migrate that data back on premises again they could do that directly because of our in-line deduplication on our replication we're actually going to one reduce the amount of bandwidth needed to get all of that data into the cloud but also if you ever need to egress that data we're definitely going to reduce those costs as well nice nice cost reduction is always a huge benefit let's see another question here they're wanting to know how does pure help with cloud disaster recovery is that a good use case for this um yeah so one of our i would say our first customers and one of our public references is a customer doing dr right and so the idea here is that having the ability to have very low rpos
and very low rtos in the cloud is kind of something that's really key for those business critical applications and by being able to preserve that and have that consistent set of replication they were able to keep that very low rpo and rto in the cloud be up and running but then also whenever they needed to revert back down to their on-premises environment when that site became unavailable it's there i think this year a lot of customers are trying to get out of the data center business so they're trying to use the cloud as an extension of their data center and this helps and again like we said cost is key so when we do replication and we can reduce the amount of egress costs that's definitely going to be kind of a big thing and also make it more efficient and not have to move as much data so not only is it the rpo and rto to actually get to the cloud but it's the rpo and rto to fail back from the cloud as well okay okay smart and then what about dev test in the cloud there's a couple of questions about that here as well yeah so like we touched on earlier right one of the problems in the cloud is snapshots and duplicate data are very costly and so when we think about our snapshot technology that's something that pure does extremely well our snapshots are all metadata based which means that when you take a snapshot it's instant doesn't matter if it's one gig or one petabyte same thing applies to reverting those and so when you have multiple copies of those data sets one is we're able to make those tasks really efficient and we're also able to deduplicate that data so whether you have one two or 100 copies of that data it's going to consume the exact same amount of space which makes that very efficient so if you have any kind of analytical applications where you're doing data analysis against the exact same data set but just 80 times right think about that you might have 80 copies of that data and you have to pay for every copy of that data if
you're using native cloud whereas with pure you would get an 80 to 1 data reduction and only pay once for that data set so a very efficient way to actually utilize that data absolutely yeah you don't want to pay for a copy of your data like 80 times in the cloud so good info let's see another question here they're wanting to know what about cloud storage pain points i know you talked a lot about some of the different ones but what about some cloud storage pain points that this might address in general yeah so i think a lot of it is trying to figure out what is the best storage for which job and i think the way we do that is we make all those engineering decisions for you instead of you having to figure out what type of storage to use we provide an array and it has set performance and capacity a big pain point i hear from customers is that when you provision volumes right performance scale comes with capacity and so if you have a very small volume you might have very low performance and you tend to have to over provision those volumes to be able to get performance and so we make that very efficient because we don't have that issue we also thin provision all that data so we're pretty much deduping and thin provisioning all that data you're not using making your storage more efficient so i think those are kind of a lot of the pain points and also just the data mobility really the ones that we see the customers wanting to have addressed the most yeah absolutely absolutely let's see here's another good question lots of good questions coming in justin is asking can i perform fail back or fail over with array-based snapshots using cloud block store we do so we'll replicate the data up to the cloud you'll take that snapshot convert it to a volume mount it to your cloud-based vms when you want to reverse that replication you'll put that volume back
in a protection group replicate it on premises and then you would take that snapshot and overwrite your on-premises environment so that's definitely one of the use cases that we see customers using okay excellent there's a question here about kind of how the billing works for cloud block store would it be included say in my aws bill or do i get a separate bill for that how does it work so it's kind of a two-fold approach part of the cost is the underlying infrastructure so in aws that will be some ec2 instances and in azure some azure vms and ultra disks and so part of that cost is that the other cost is going to be billed from the license to utilize the software against that effective capacity so it's kind of a split approach it's also available through the marketplace as a license or through our pure as a service so kind of a two cost thing but it can be completely unified through the aws or azure marketplace if that's what you want okay nice jamie is asking if pure can help with their challenge around protecting cloud workloads and data in office 365.
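The two-part billing model described a moment ago (underlying cloud infrastructure plus a software license billed against effective capacity) can be sketched roughly like this — the dollar figures and rates below are hypothetical placeholders for illustration only, not actual Pure Storage or cloud-provider pricing:

```python
# Hypothetical cost-model sketch, assuming a notional infrastructure rate
# and license rate -- real Cloud Block Store pricing depends on region,
# instance types, and the license terms you negotiate.
def monthly_cost(infra_hourly_usd, hours, effective_tib, license_per_tib_usd):
    """Total = underlying cloud infrastructure (e.g. EC2 instances or
    Azure VMs plus ultra disks) + license billed on effective capacity."""
    infrastructure = infra_hourly_usd * hours
    license_fee = effective_tib * license_per_tib_usd
    return infrastructure + license_fee

# e.g. $4.50/hr of infrastructure, 730 hours/month,
# 100 TiB effective capacity at a notional $30/TiB/month license rate
total = monthly_cost(4.50, 730, 100, 30.0)
print(f"${total:,.2f}/month")  # -> $6,285.00/month
```

Note that both parts can flow through the AWS or Azure marketplace bill if purchased that way, as mentioned above, so "unified" here just means a single invoice, not a single line item.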
would this work for that so yeah this would work for the cloud workloads so for our solution since we are network based we require iscsi connectivity and we can help from a security perspective a couple things in the news right ransomware so we have a feature on our flash arrays called safemode all of our volumes and all of our snapshots are already immutable but safemode pretty much prevents the deletion of any data without multi-factor authentication through our support team and so we have that same solution in cloud block store so you can protect that data there as far as office 365 there are no direct integrations there okay got it and then another question can pure replicate between aws s3 and azure blob the organization is using a multi-cloud strategy so today right now when we do our cloudsnap offload it is only configurable for a single target so it would only be able to do s3 or blob at a single time we are hoping to enable more multi-target support so we should see that feature soon to be able to offload to two different types of storage okay makes sense and then another question here they're wanting to know i lost it for a second what about the encryption and security of the data as it's being stored in the cloud yeah i actually forgot to cover that i should have covered that with jamie so we have encryption of data at rest so in aws we can either use customer managed keys or aws managed keys and the same applies to azure it's all encrypted and we also have a second form of encryption at the purity level which means if someone were to somehow gain access to the underlying disk they won't be able to do anything with that data so all of our solutions are highly secure we're even available in some of the gov clouds out there nice nice and then what about management of you know say you have multiple pure flash arrays and then you have storage in
the cloud is there a single point of management yeah so the idea here right is pure1 is our cloud-based management portal to kind of see your whole fleet of arrays and so that provides you that single pane of glass to look at your on-premises and your cloud experience on the other hand it's the exact same purity software as our on-premises arrays and so it's the exact same user interfaces the exact same apis the exact same automation and so it's going to be the exact same tool set so if you are a newer company moving to the cloud and you don't understand all those cloud technologies right it's a really great way to still manage your cloud-based storage the exact same way as your on-premises storage it's kind of that cohesive experience nice nice and then there's another question here they're wanting to know does it matter what the application is that's running in the cloud you know in terms of compatibility with cloud block store you know like if it's a container-based application or a windows server or does it make a difference it really doesn't a lot of the biggest things will be like do the software vendors support certain types of storage so i know for example sap requires that it can only be test dev on our solution right and that's just in general because they don't validate iscsi whether it's databases right we have sql customers we have oracle customers we have customers doing virtual server infrastructure we have all types of data warehousing so we really don't have a compatibility issue we're block storage so as long as you have an application that can support block storage then we kind of work there so that's pretty much everything now we also work with containers right so say you're using aks eks you can tie in with our solution again we also work with portworx which is our kubernetes data management platform that we recently acquired so really a lot of use cases out there to be able to
utilize it the list of what we cannot do is very small so the list of what we can do is very large very nice and then i know you talked about you know potentially cost savings that could be gained by adopting pure cloud block store is there some way folks can estimate how much they might save yeah so if you go to our pure storage website we have a public tco tool we are doing some updates to it but the best way to kind of think about this is reach out to your local pure teams or your partners we do have a tco tool that can kind of provide a full end-to-end solution we just need some very basic information what cloud what region how much capacity and then we can provide you a tco analysis to kind of see what that would look like but it's definitely out there because of the features we have very nice and so you said if folks want to get started with this or they want to try it out this is available in the cloud app stores is that right yep the cloud marketplaces so for all of our solutions we have a no license trial which we can do for 45 days customers can go out there and get their hands on it play with it just responsible for the underlying infrastructure costs if you're looking for something a little bit more involved again you can reach out to your partner or your pure teams and we can do more of a hands-on deep dive poc and kind of help you a lot more and have a little bit more of a feature rich no license poc very nice all right well david it's been great having you on thanks so much all right thank you and thank you to pure storage of course for supporting the event and presenting here today check out the link it's there in the handouts tab that will take you to purestorage.com and specifically to the pure cloud block store page and there's a lot of great resources on that page that will answer many of the common questions as well as case studies from customers and research papers as well and then as david said you can go
to the aws and azure marketplaces and try out pure cloud block store for yourself for free even so thank you to everyone who asked a question and thank you to everyone who responded there to the poll it's now time for our next prize winner on the mega cast today we have a 500 dollar amazon gift card going to amanda husky from virginia congratulations amanda husky from virginia all right and with that it's time to keep the megacast rolling here with another awesome expert presentation i'm excited now to introduce you to will stowe who is a principal architect at netapp will it's great to have you on take it away hello and welcome to the webinar today my name is will stowe and i am a principal architect here in the office of the cto at netapp and today we're going to be talking about doing more with azure using the netapp cloud portfolio and we're going to start this off by taking a quick look at a data point from microsoft's earnings from their q4 2021 report showing the massive year-over-year growth 51 percent growth that they have seen in cloud services you know and with that this is important to note that azure as a platform is amazing it provides lots of value lots of services lots of features for customers the reality is that with that it doesn't solve every single problem perfectly for every customer microsoft acknowledges that they're well aware that they're going to need assistance in their ecosystem with a variety of solutions and that's where netapp really fits in providing customers additional agility for their workloads and solving challenges for them so let's cover that starting off with microsoft and netapp the relationship that we have had over the years we've worked with microsoft for a very long time going back to the early 2000s where we began working with them in partnership around data center file services providing nfs and smb shares for a wide variety of workloads inside of the data center moving on from there into the enterprise application space
so think of things like microsoft sql server exchange server sharepoint those kind of things progressing from the applications and building upon that momentum we moved into the private cloud space this really kind of predates azure before it really began to ramp up this is again customers who want the cloud experience but they're trying to take advantage of their capital expenditure inside the data center and then moving on from that rapidly into the azure space where we have worked with microsoft over the years with azure in terms of providing individual features and services we'll be talking about those today but just so you can see on the right hand side of the slide there this has been building into a crescendo where we are today lots of services that address many many needs inside the azure ecosystem for customers today to finish this piece off just a quick accolade the most recent partner of the year award we have around customer experience here in 2021 all that to say we are definitely not new to the microsoft ecosystem we've been working with them for many a year now and will continue to do so in the future let's talk about a few of the scenarios that you're going to see with customers who are looking to solve a variety of challenges both in data center and in cloud the way that they typically look at these problems in terms of how to solve them you know from some perspective you see customers that are looking at the lift and shift paradigm i want to move my workloads from my data center to cloud 100 percent this could be a real estate question maybe we are in a data center that no longer can facilitate our workload it could be a variety of reasons that would present this as an option for customers the next piece is the hybrid cloud which is i think predominantly what we see not only with azure but with really all the providers today where they're going to have workloads that span data center and cloud right some workloads
are going to be cloud first some are going to be data center or edge depending on the requirements depending on the customer's needs and then there's azure native right we have applications that were born in the cloud if you will and that's where they're going to run and stay in perpetuity let's cover that first scenario in terms of lift and shift what does that look like from a challenge perspective from the customer's point of view and how does netapp address some of those challenges you know start off with movement of data data movement is i think probably the most paramount challenge that customers run into when they start looking at we need to move some of our data sets or our entire data set our workload into the cloud this is where netapp provides a ton of value you know first and foremost for customers that are using netapp solutions clustered ontap all-flash fas and hybrid fas systems in the data center at the edge today and they're looking to move those workloads into the cloud this is where things like netapp snapmirror provide absolutely tons of value to the customer today we have the ability to spin up a virtual appliance inside of azure via the marketplace we call that cloud volumes ontap and it is a great landing zone for this type of scenario customers can simply go through and set up a snapmirror relationship from their data center across the networking of their choice to their azure networking their virtual network by way of say azure expressroute or azure vpn and within minutes they can set up a snapmirror relationship and start moving data in real time to the cloud i think this is a game changer in a lot of ways because it allows for customers to be able to do this in such a way that they can quickly and easily start replicating data into the cloud and let that process run even as they're working through the process in real time of what do we move there's this analysis paralysis that can sort of take hold the great thing about snapmirror
is it can run in the background while you're figuring that problem out so it's again very powerful it doesn't stop there though you know we have customers who may not be using our storage platform at their data center or at their edge and that's perfectly fine we have a solution we call cloud sync for everything else this would be for replicating file data or object data into azure either to cloud volumes ontap or to other solutions like object storage or azure netapp files etc but it is there as a way to move data intelligently and efficiently with a very easy to use and navigate user interface that we provide to do that process very simply so it's great for moving any other workloads that may fall outside of the netapp to netapp replication option this is what you would use for the bulk of everything else the thing to keep in mind here is that you know microsoft and azure have lots of solutions that they also provide to do data movement and replication and this is what i was getting at before the previous point about starting the replication of data while you're figuring out what your ultimate goal is of how you want to move your workloads there's usually going to be a few tools that are involved here really kind of the crux of the argument so we would be one tool of a few that are probably going to be a candidate for actually moving all of your data sets so this will include tools like azure migrate or azure site recovery or third-party virtual machine replication tools things like zerto or jetstream those kind of things you're going to consider if you have a vmware based solution on-prem today looking at something like using azure vmware solution inside of azure with hcx all these work together in tandem with each other very well so the question often comes up
you know why would i not use the default feature in azure meaning you know there's lots of native services in azure that i can use for my data storage today the thing to keep in mind though is that there's really not a single answer there are multiple answers right there's many ways to solve the problem to make an example for storage you could look at say object storage an azure storage account is one way to solve my storage needs or azure disks behind a virtual machine or azure netapp files that we actually provide as a native service with microsoft inside of the azure ecosystem ultimately what microsoft is driving for is they want customers to solve their challenges in the most efficient way possible that solves the problem right it might be that object storage is the best answer for you it might not be and that's okay using the native services using the marketplace all these work in tandem as a cohesive way to solve a variety of problems be it storage or networking or compute analytics you name it this is really i think the idea and the goal for microsoft is to ensure that they are solving the customers' challenges in the most efficient way possible and that's where netapp really has a lot of value because we are able to do that both from a native perspective with say azure netapp files to the marketplace with cloud volumes ontap things like that to really address the customers' needs based upon the requirements let's move on to the hybrid cloud component of this conversation again so we've moved from lift and shift to the next phase which would be the hybrid cloud which is you know the answer to where do i want my workload i want it to be as close to either the line of business or to the piece of manufacturing or whatever the case is i want my application to perform in the most efficient way possible given the set of requirements that it has and this is where
hybrid cloud really shines being able to take advantage of the best of both worlds again netapp addressing the call here with things like our cloud manager which is think of it as our gateway into our cloud solutions which is available on our website at cloud.netapp.com as an aside you can go out there and register a free account today and start having a look around for yourself inside of cloud manager you can find services like cloud volumes ontap which is our virtual appliance available inside of the marketplaces of all the providers azure included obviously but also things like cloud tiering which would be the ability to take your netapp engineered system inside of your data center or your cloud volumes ontap appliance inside of azure and be able to intelligently tier cold blocks of data from that appliance from that engineered kit of gear in a data center over to object storage to lower your tco for that workload i can drive my cool blocks of storage over to object storage to be able to free up additional capacity for other workloads inside of those systems other things like global file cache is a great example of this too where i can use netapp global file cache to keep my sort of gold standard or my pristine set of data inside the cloud as my central repo for that content but at the same time be able to replicate and cache that at the edge wherever it needs to be globally so that i have things like intelligent replication to get the data where it needs to be so i'm only moving the data that i really need to have at that site also to handle things like global file locking as another example of the value that's going to provide to you all these things again are working in tandem to address the customer need of i may not want to have everything in the cloud i may not want to have everything at the edge or at my data center so this is that nice middle ground for customers who again this is what we find the
majority of customers we see today in the field having both options in front of them to address needs as they come all right so let's take a look at the third example here and this would be the azure native or you know all in on azure this is for customers who are looking at we may be starting up a new business a new project a new line of business whatever the case might be and as far as how we are going to get started with our workloads we're gonna spin up an azure account we're gonna start deploying services day one inside the cloud and that's where everything's going to run you know we're seeing more and more of this as we go further into the maturity of cloud adoption as it offers more and more services especially for enterprise customers i think in the past you've had some customers who may aspire to be all native inside the cloud but there were blockers that were prohibiting them from making the jump from you know lift and shift or hybrid into the cloud in general and also for you know net new startups who just are going to have their workload inside the cloud as well 100 percent so you know from this perspective a few things to keep in mind so with the move into native cloud you're going to have some table stakes things that you have as requirements for running your business your applications one is going to be central data storage and in many cases again getting back to the question before about why would i not use a default feature in azure in many cases when customers think of cloud storage they usually default to object storage and in many cases you can view that as unlimited storage in a lot of respects however many many applications do not have the ability to read and write to and from object storage today they require a file system a file share if you will or some device to be able to communicate with as
part of their applications this is where azure netapp files really fits that bill for a native azure solution that offers nas protocols and when i'm talking about nas i'm referring to nfs version 3 nfs version 4.1 and smb protocols these are going to be for your linux slash open source applications virtual machines containers docker images etc that can use nas for shared storage for their applications databases can run on this you name it pretty much anything inside of linux can consume nfs right out of the box also for windows based applications i'm running windows server i'm running sql server i'm running other applications say that require windows based services and access to smb shares to share data across you know multiple systems locations user teams that kind of thing anf or azure netapp files fits this bill perfectly to be able to spin up hundreds of terabytes of storage within just a few clicks of a button or by way of an api terraform ansible you name it right depending on how you want to consume that there is a way to do it either through the azure portal or by way of a programmatic solution take your pick and i guess to close that piece out in terms of storage anf is a fantastic solution that you can use for those cloud native applications that need to run inside the cloud anf is really there to fit that bill the next piece is for me going to be compute so you know you ultimately are going to have some virtual machines running inside of your application space some of those could be running applications natively inside the vm some of those could be virtual machines running containerized based applications either through a simple container engine like docker runtime as an example it could be through kubernetes this is where our spot portfolio really shines being able to provide value by taking advantage of running against azure spot vm instances with your compute to drive down the overall tco of those
applications things like elastigroup as an example allows you to run you know think of your scale out applications where i want to be able to deploy one or more copies of an application on compute elastigroup can do that with spot instances in a reliable and predictive manner so that you can remove that element of failure of losing that spot instance which can be removed you know taken from you with very short notice elastigroup does a great job of predictive analysis to be able to run those applications in an intelligent way same thing goes for ocean in this case this would be for your kubernetes-based workloads around azure kubernetes service as you're running those applications inside of aks there's still a compute cost because there are worker nodes that are running those applications inside of kubernetes inside the managed service again ocean which is an element of spot can be used to drive down those costs using spot instances to do that piece very quickly deployed and easily consumed the last piece i'll talk about here is astra this is our kubernetes based data protection and data mobility solution for running containerized stateful applications inside of aks and being able to protect those workloads very quickly and easily through a solution that requires no software install it's all saas all right so moving on from here just having some fun with this we would say that 110 percent of cloud customers want to get the maximum return on their investment in their technology this is where netapp really comes in and does a great job of adding tons of value allowing you to save time and money on your cloud consumption one last thing here to kind of cover before we wrap things up some takeaways we are big fans of microsoft we've been working with them for 20 plus years now working with microsoft our friends in redmond that
will continue we are here to make azure better azure is an awesome portfolio of solutions and services we are plugging in where we can to add some really cool technology to solve some interesting challenges that we see customers experiencing and make their overall journey into azure a more palatable one and i think to close this out you know we're just getting warmed up we are looking to provide additional features and services and technology for ongoing challenges we see customers having as they're making their way into the cloud or the ones that have been there for a few years solving those problems in creative ways and with that i'll close this out last slide here netapp unlocks the best of cloud my name is will stowe it's been great to be able to speak with you today i want to thank you for your time and have a great day great presentation will thanks so much we just brought up a poll question for everyone out there that says what additional information would you like about the netapp solution and we'll leave that up while we take your questions lots of great questions have been coming in during will's presentation and will are you there are you ready for some q a i'm here awesome all right so let's see first question i wanted to ask you will this one comes in from rob who's wanting to know how is netapp addressing immutability regarding its backups so a few different ways i guess the net of it is that our backups are effectively read only right so they're not able to be changed you can pull them back and restore them but the actual data by way of a recovery point inside of a snapshot is read only by design so across the portfolio from cloud volumes ontap to azure netapp files to our gear you can buy in the data center or at the edge they all work in that same fashion good good very good that's important these days to protect that backup data another question here casey's asking i heard that azure uses netapp storage for its
regular storage accounts what's the difference with azure netapp files that's a really good question so i really can't get into the details of how microsoft architects their native storage solutions i can tell you that it's different than azure netapp files in the fact that with azure netapp files we are jointly engineering that solution with microsoft so it does have physical kits of netapp engineered systems inside the data center all flash systems inside the data center so it is alike in the fact that all those solutions are inside of that actual region however azure netapp files is built a little bit different from things like azure files or azure storage accounts which is a slightly different type of architecture got it got it okay yeah thank you for clarifying that let's see here's another question kind of a long question i'll try to paraphrase they say they have netapp storage replication going between two data centers can i replicate netapp to azure then failover vms to azure for a temporary landing space while we're migrating workloads to the new data center do i pay netapp on demand for a short period of time how is replicated data deleted from the cloud a lot going on there but any thoughts on that one yeah yeah right on i think the best way to answer that question would be to look at cloud volumes ontap and azure site recovery putting both of those two solutions together this gets back to that lift and shift model of using kind of a one-two punch the netapp solution to replicate the netapp storage data and also using azure site recovery to replicate the virtual machines themselves over to azure so that when you want to test you know your dr plan to make sure things work as expected you can do that in the most cost efficient way possible that gives you all the tools you need the best of both worlds if you will to get that done excellent excellent and then another question here they're asking does
your solution have the capability for customers to choose where their data is stored, specifically not to store the data in certain places? Would they have that option? Yeah, that's a great question as well. With Azure NetApp Files and Cloud Volumes ONTAP, you have the ability to pick the region where you want to deploy the actual service. So if they want to connect to, say, US South Central, I would deploy the service in that region, and that way I can prevent any type of data from moving past any geopolitical boundary, or whatever the factor might be for keeping that data in a certain locale. That's easily done in the first few clicks when you deploy the service. Okay, good; that's important to know. You've got to know where your data is. This person is asking, is NetApp only for the Azure cloud? No, we're actually in the three major hyperscalers. We have storage, compute, Kubernetes solutions, monitoring, et cetera for all the hyperscalers: for AWS, for Azure, and for GCP we have solutions across the board. Very nice, very nice. And what about this question: they want to know, what have you seen as the greatest barrier to migrating highly customized legacy applications to the cloud, and how do you overcome it? Time and again, the challenge seems to be that customers have applications that are, using air quotes, legacy; they typically have performance characteristics or capacity requirements that exceed what's available in the traditional solutions from the cloud providers today, be it object storage or virtual machines or whatever. Azure NetApp Files specifically is unique for this piece in that we're using an engineered system that marries nicely into the Azure experience.
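The region-pinning option Will described, keeping data inside an approved locale, amounts to an allow-list check at deployment time. A minimal sketch with a made-up policy; the region names and the function are illustrative, not a NetApp feature.

```python
# Hypothetical residency policy: deployments may only land in approved regions.
ALLOWED_REGIONS = {"southcentralus", "eastus2"}

def deployment_region(requested: str) -> str:
    """Normalize a region name and reject it if it violates the policy."""
    region = requested.lower().replace(" ", "")
    if region not in ALLOWED_REGIONS:
        raise ValueError(f"region {requested!r} violates data-residency policy")
    return region

print(deployment_region("South Central US"))  # prints: southcentralus
```

Anything outside the list, say "West Europe", would raise before a deployment ever happens, which is the "first few clicks" decision Will mentions, made enforceable.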
So you can have that enterprise storage experience, but in the cloud, and consume it in that fashion; we've found time and again that customers have been able to leverage Azure NetApp Files as a way to move those big, difficult applications into the cloud successfully. Got it, okay, nice. That could help a lot of companies out there; there are so many highly customized legacy apps that need that cloud migration. Next question: can the storage be encrypted at rest? They say their institution is very concerned about storing sensitive data in the cloud. Absolutely. Azure NetApp Files encrypts data at rest by default; it's nothing you have to enable, it's just turned on. That's going to use FIPS 140-2, and the keys today are managed by Microsoft, so everything is handled by the service itself. You don't have to think about it; you just store your data there and it's all encrypted at rest. Very nice, very nice. So how should folks get started with this, Will? With Azure NetApp Files, you can simply go into the Azure portal today, search for NetApp, and you will find Azure NetApp Files there. For our other cloud solutions, go to cloud.netapp.com; you'll find everything you need there, all the breadcrumbs to get you to the right locations from that one place. Excellent. All right, well, it's been great having you on the megacast today; thank you so much. Thank you. And it looks like there are 20 more questions still for you, Will, in the electronic queue, so maybe you can get back to some of those folks. For more information on NetApp, of course, check out the handouts tab as well; there's a link there that will take you over to netapp.com, specifically to the Azure NetApp Files landing page, where they've got an ebook, a getting-started link, the top five questions to ask, and lots of great information on the Azure NetApp Files solution. So make sure that you
check that out. All right, if you haven't answered the poll question on the screen, it's going to be up for just a few more moments, so we encourage you to do that, and I'll leave it up while I announce our next prize winner. This one is for a $500 Amazon gift card, and it's going out to James Lodenkoi from California; congratulations, James Lodenkoi from California! All right, and with that, I'm excited now to bring in our next presenter on today's cloud megacast. I'd like to introduce you to, he goes by Suds, vice president of product management at Plyops. Suds, are you there? Hey, yes, I'm here; hello, everyone. Thank you so much for being on; take it away. Thank you, thank you. Welcome, everyone, to the megacast webinar on your cloud journey with Plyops. My name is Sudhanshu Jain; I'm VP of product management and strategic alliances at Plyops. A little background on me before we get to the company: I recently joined Plyops from VMware, where I spent 10 years driving product management for the cloud infrastructure team, basically bringing the VMware core platform up in on-prem hosted environments, managed environments, and hyperscaler cloud environments with Azure, AWS, GCP, and Alibaba Cloud; how the VMware core platform comes up in different environments was my focus. Now I'm excited to be here with Plyops, and that's what I'm going to talk about today. I have a very high-level agenda: a company introduction, starting with some of the challenges we see from the Plyops perspective when we talk to customers, then an introduction to the Plyops extreme data processor, some of the customer use cases and journeys we've been engaged in with customers, and a wrap-up with key takeaways. Plyops is a pretty young company; it's been in existence for less than three or four years now. Our mission is focused on accelerating performance and dramatically
reducing the overall infrastructure cost for various data-intensive databases, be they relational, NoSQL, or in-memory database applications; now we're also going deep into AI/ML and deep learning data needs and expanding into 5G, IoT, and other use cases. These are all data-intensive applications, and we try to address both the performance aspect and the TCO aspect of those applications. It's a very powerful and experienced team, coming from strong backgrounds in databases, storage, semiconductors, and cloud at different companies, with a lot of strategic investment from Intel Capital, NVIDIA, SoftBank, Western Digital, Xilinx, and a host of other investors as well. As I said, we're a young company, but from 2019 through 2021 we've received a whole slew of recognition from the industry, whether at Flash Memory Summit, in CRN, or from Gartner; across the board we've been getting a lot of recognition, and this is just the start for us. Okay, let me step back a little on how we see customer challenges, especially because the Plyops focus is around data. As we engage with customers, we see explosive growth of data across the board, whether at the edge or in the cloud; even when you talk about networks or CDNs, there's a lot of data growth on that front as well. As the data grows, more and more of it needs to be processed in real time, so there's a desire and a need to process data faster and faster, and you see a lot of innovation happening on different fronts. But that's also causing a lot of imbalances, so to speak, between CPU, storage, and network. For the last
10 to 20 years, the CPU was the smart kid on the block: it did the processing and was able to keep up with customer needs. But increasingly you see a shortage of CPU cycles; that's why the GPU is becoming more mainstream in certain use cases, and beyond that, DPUs and other purpose-built processors are coming into the frame to address certain application needs, adding value in storage and in data and network processing as well. Beyond that, what we see is that as data grows, applications are not optimized to handle this kind of data load, and that results in what I call infrastructure sprawl: the infrastructure is growing a lot faster than it needs to. Today, most of the time when you talk to a customer, they just throw more and more two-socket servers at the problem, but that's not the most optimized version of the infrastructure you'd like to see, whether from a power perspective, a real-estate perspective, or a TCO perspective; across the board it is non-optimized. That's what I see when I go and talk to customers. And as you operate this infrastructure in a non-optimized way, it impacts your application reliability, the availability and stability of the infrastructure, and the serviceability of the infrastructure as well. Then, of course, as we discussed earlier, the value you get per dollar you spend goes down too. Those are some of the challenges, but on top of that there's a stronger and stronger need and desire from corporations to become more and more carbon neutral, and the way they're handling it today is not the right way, because they are throwing more and more power at the problem while they want to reduce the carbon
footprint and be carbon neutral; I don't think most of the corporate customers we're talking to have a very strong strategy there yet. So what is the Plyops value proposition trying to address? Some of the value propositions we are creating are around workload acceleration, data efficiency, data availability, and overall TCO. As a goal, we are solving what I call the hard problem of how CPU, data, and storage need to live in harmony, both from a processing perspective and from the handling, transport, and storage of the data. We want to make sure we create lower latency with intelligent hardware offload; that is where we are putting a lot of focus as far as the Plyops value proposition is concerned, and when I say hardware offload, I mean hardware offload on-prem as well as in the cloud, and I'll talk a bit about that too. On data efficiency, one of the main elements of the Plyops value proposition is how we handle the data more efficiently: not just accelerating it from a processing perspective, but handling it with better storage capacity, better compaction, better encryption, better media efficiency, and better virtualization of the data as it moves in and out of the storage tier. That's where Plyops is creating a lot of value as well. And data availability is a very strong value of Plyops too: we are able to provide drive failure protection with virtual hot capacity. That means no compromise on your cost or, for that matter, on capacity, but you get hardware-based full drive protection with no additional cost and no performance penalty, both when you're writing the data and when you're recovering it; in both scenarios the performance penalty is very minimal when you're
recovering, and none when you're handling the data, plus faster rebuilds with minimal performance drop, as I mentioned earlier. On overall TCO we are putting a lot of investment as well: higher workload consolidation and a significant reduction in infrastructure cost through better consolidation and better processing of the data. We've done a lot of work around that value proposition, and I'm more than happy to dive deep with customers who are interested in following up on how we articulate the value; I'm not going to go into all the details of these value propositions here, because that might take more than two hours, but I'm happy to deep-dive on them. Okay, with that said, let me introduce what I call the Plyops XDP data virtualization engine. I'll go a little bit bottom-up here. We have our hardware, which as I mentioned earlier we call the extreme data processor, XDP, running on-prem as a PCIe card that comes from Plyops; but equally important, we can also make it available in hyperscaler cloud environments, for example on NP-series or F1 FPGA instances, and I'm more than happy to follow up if anybody is interested in the cloud environment. It works across the board with your storage tier: it can work with local SSDs, whether in an instance or on-prem with your NVMe drives or any other storage, as well as with NVMe-over-fabrics JBOFs, which are an emerging category of storage, and in the cloud environment with whatever storage you're using there, we can make it work. So we work with your existing storage and make it better for your data processing, data storage, data access, and data transport, and we make that happen with what we call the XDP data virtualization engine. That's where our
secret sauce is: the hardware plus the data virtualization engine, which offers you a common platform to access it for different application needs. For example, you can access native block, you can access native KV, or RocksDB if you are using a RocksDB-based application, and in the future we may expand to other databases or file systems as well. We are trying to make sure we span different environments, whether storage, RDBMS, NoSQL, or analytical; it doesn't matter, because with this common data virtualization engine based on XDP we can offer you value for different use cases. We have done a lot of benchmarking internally as well, and I'm more than happy to go deeper; this is available today in commercial form, and if you are interested, please feel free to reach out to Plyops. The heart of this is our innovation in the hardware, which runs either in the cloud environment on F1 or NP-series instances, or in your environment with our PCIe card. Today it gives you an order of magnitude better performance and better reliability: basically, we can give you drive fail protection with 2x the performance of RAID 0, which is unheard of in the storage industry today, but we are able to provide it and make it available to customers in production environments. And of course we provide a lot of compaction and utilization of the data, so it gives you an order of magnitude capacity enhancement as well, and we make it available on very low-cost media drives; in fact, we can make QLC better than the enterprise-grade TLC media you're using today. That's also a very important dimension where we drive cost as well as efficiency. So that's where the value proposition of the XDP hardware is, and as I said earlier, we've been working with many applications and use cases, and we're very excited to see
order-of-magnitude better benefits across different use cases, whether you're using KV or block interfaces: almost 75 percent TCO savings, for example running Redis over our block interface, and savings in terms of scaling as well, so we can give you a lot more benefit both on-prem and in hosted or cloud environments. Sorry, I think I double-clicked; there's a lag on my end. Anyway, we have a lot of use cases and value propositions, but I'll just highlight a few. For example, if you're using an in-memory database like Redis, with a Plyops-powered environment you can do significant consolidation and bring down the TCO as well as improve your performance; that is what we have demonstrated with Redis, and we're in active trials with a very large customer on this kind of use case as well. We're also trying to make this happen at the edge, whether it's the far edge, what you might call the thick edge, or core data centers: if you want to run your analytics workloads and use SAP or an RDBMS server or a NoSQL application, we will accelerate them and make them available with a better TCO, a better carbon footprint, and a common platform for varied infrastructure needs. We are also able to provide better predictability and better uniformity in terms of access and latencies, so to speak, and to consolidate your overall footprint, so there's a very strong value proposition when you are trying to run, say, video analytics or any other such application at the far edge or in a thick-edge environment. This is another use case, software-defined storage, where the storage cluster provides access to multiple compute workloads. But the way they do it today, I
think most of the software-defined storage vendors address this with erasure coding, where they do drive-block grouping across the servers. That works, but it carries inherent inefficiency with erasure coding, and any time a drive fails, it ends up reducing your cluster performance because you end up rebuilding over the network. What our customer is able to do is consolidate drive groups within the server before grouping across the cluster; with that, you basically isolate any drive failure, and its performance variability, within the system before it reaches the cluster. The benefit is that you have added drive failure protection, meaning if you're running, say, erasure coding with one or two parity, you get additional failure tolerance, and any time a drive fails, you completely isolate the cluster from the performance degradation. In fact, many of the software-defined storage vendors today recommend replication instead of erasure coding if you want higher performance, and we are able to deliver that at a much better value for the customer. So those are the varied use cases for Plyops in storage, databases, and analytics. In addition, we are also addressing one of the biggest concerns of large customers: how do I reduce my carbon footprint? We are able to do that significantly for a customer in terms of server, rack, and data center consolidation, providing better wear and tear on your storage as well as on your network, and managing the data growth in data centers.
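The capacity side of the replication-versus-erasure-coding trade-off mentioned above is easy to make concrete. The arithmetic below is generic, not specific to Plyops or any vendor:

```python
def raw_capacity_factor(data_units: int, redundancy_units: int) -> float:
    """How much raw capacity is consumed per unit of usable data."""
    return (data_units + redundancy_units) / data_units

# 3-way replication: one copy of the data plus two extra copies
print(raw_capacity_factor(1, 2))  # prints: 3.0
# 4+2 erasure coding: six drives hold four drives' worth of data
print(raw_capacity_factor(4, 2))  # prints: 1.5
```

Replication buys its rebuild simplicity and read performance with roughly double the raw capacity of a 4+2 erasure-coded layout, which is why vendors that recommend replication for performance pay for it in infrastructure cost.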
We do that very efficiently, with better data storage, data compaction, and data access, and that helps a lot in terms of your carbon footprint. And not just that: we also relieve your CPU bottleneck so you can consolidate further. So that's where we are in terms of the overall green initiative and the value proposition of what we are able to offer using the Plyops XDP data virtualization engine. I'm a new person in the company, but I'm very excited to see what the company has done and where we're going. To sum up the needs around customer data: customers are in a mode where there's a whole lot of growth in the data, and the data is trapped; they need more and more acceleration, and Moore's law is not helping anymore on that front. The good part is that customers already accept that heterogeneous computing, CPUs alongside other smart devices like GPUs and DPUs, is here to stay, and we are coming in to provide an order-of-magnitude advantage for customers: increasing performance; increasing data protection at full speed with no performance penalty, no capacity penalty, and no or minimal rebuild penalty; seamless integration, making it very easy for customers to consume, whether through native block, native KV, or RocksDB today, or other interfaces in the future; and making efficient media accessible to broader and broader use cases, which is something we focus on a lot. It's a common platform for varied applications, as I mentioned, and we're very excited to see a lot of traction with customers. I'm happy to have deeper dives with customers who are interested in following up; feel free to reach out to me, my email address is on the first slide, or to our marketing team, or you can tag us on social media
anywhere you want; we are available on LinkedIn and Twitter and easy to reach. That's pretty much it on my end. Thank you very much; from my side, I'm happy to take questions at this time. Absolutely, yeah, there's time for questions. Great presentation, Suds, we appreciate that. We do have a poll question I want to bring up for everyone out there in the audience that says, what additional information would you like about the Plyops solution? Obviously there's a lot of interest in what you are doing at Plyops; I see almost 20 questions here to cover. We won't have time for all of them, but lots of great questions have come in, so I'll go ahead and start with this one. They're asking, as a DBA or developer, to use XDP for any of my databases or NoSQL applications, do I need to make changes in the software to work with Plyops XDP? Very good question. As I mentioned on slide four or five, where the architecture is shown, we make XDP available as a block interface, so it sits at a very low tier in the stack. That means all your applications, and even most of your storage stack, everything else, do not need to change; you can integrate any application that uses block. If there's the capability to go to native KV, you can use the Plyops-based model as well, and if you're using RocksDB, you can go with the RocksDB-based implementation too. From our perspective, we are trying to make it very frictionless for customers to consume their applications and their infrastructure on top of Plyops, whether that's block, KV, or other interfaces. Excellent, yeah, I like that; sounds easy. Another question here: they say, as an architect, how do I utilize XDP to increase application protection from SSD failures? Does it mean compromising on performance, or what do they need to know there,
Suds? Yeah, very good question. The way it works is that we take care of the drive fail protection; it's built in. As long as you are using XDP, the Plyops solution, you don't have to worry about it. You can hot-plug these drives, and we present the capacity to you in block form, KV form, or any other interface, and that capacity is yours to consume. Within that, we manage how we do the drive fail protection: if any drive fails and needs rebuilding, we take care of the rebuild as well, and if you add a new drive or hot-replace one, we recognize that and provide the added capacity on top of it. We do all that management within the Plyops hardware and software together and offer you block, so you just consume the capacity and don't worry about how it's being managed. Excellent, nice. And another question here: can I deploy XDP in the cloud? Yes, as in my slide: if you are interested in cloud, we'd love to work with you, because the XDP engine is available in FPGA form, and we can make it available on F1 or NP-series instances for your use case. I'd love to follow up if you're interested in XDP in the cloud. Excellent. All right, I'm afraid that's all the time we have in our live Q&A slot, but there are a ton more questions for you there in the electronic queue, so maybe you can get back to some of those folks; I'm sure they'd appreciate it. Great presentation, thank you for being on the megacast today. Thank you, thank you for having us. For more information on Plyops, check out the handouts tab; there's a resource there you can download on how to scale your data center storage to meet explosive new demand with Plyops. And if you haven't answered the poll, now is the time to do it, because we are moving on to our prize-drawing time.
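Stepping back to the drive-fail-protection answer: the simplest form of that idea is single-parity XOR, where one failed drive is rebuilt from the survivors. Plyops' actual scheme is proprietary; this is only the generic textbook version:

```python
from functools import reduce

def parity(drives):
    # XOR all drives byte-for-byte to produce one parity drive
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drives))

def rebuild(surviving, parity_drive):
    # XOR of the survivors with the parity reconstructs the single failed drive
    return parity(surviving + [parity_drive])

drives = [b"\x01\x02", b"\x10\x20", b"\xaa\xbb"]
p = parity(drives)          # computed while all drives were healthy
lost = drives.pop(1)        # simulate one SSD failing
print(rebuild(drives, p) == lost)  # prints: True
```

This also shows why keeping the rebuild local to a server matters: the reconstruction reads every surviving drive, so doing it over the network drags cluster performance down, which is the inefficiency Suds described.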
We have another $500 Amazon gift card and an iPhone 13 in your choice of color. The $500 Amazon gift card is going out to Justin Gunderson from Wisconsin, congratulations, and grand prize number two, an iPhone 13 in your choice of color, is going to Dan Merwin from Colorado, congratulations! There are still a ton more prizes to give out on the megacast today, so make sure you stay tuned for those, as well as some really awesome presentations still coming up: Turbonomic, Ping Identity, FireMon, OutSystems, Red Hat, Sophos, and Faction. So with that, let's keep the megacast moving. I'm excited now to introduce you to Dina Henderson, junior product marketing manager, and Mary Nachimov, cloud product manager at Turbonomic. Dina and Mary, it's great to have you; take it away. Thank you, everybody, for joining us today. Our discussion will be on developing your 2022 cloud plan. On today's call we have myself, Dina, and Mary. I am a product marketing manager here at Turbonomic; I manage and often moderate our webinars like today's, and similarly our blogs, overseeing our content creation there. Generally, our team is focused on educating the market on the new approaches and mindsets that are required to continuously assure application performance and, of course, succeed in this very complex world of cloud and containers. Mary, would you like to introduce yourself? Yeah, for sure. Thanks for having me today; it's great to be here. I'm a cloud product manager; I do many things around cloud optimization and work with a lot of Fortune 500 companies to optimize their cloud and help them really succeed in that journey. I'm also a huge fan of user experience, so if you're interested in that, we can talk about that later as well. Thanks, Mary. Just a quick disclaimer before we hop into the discussion we have planned today: for this presentation we are focusing on what is generally available today, and we'll be clear about what's available today versus what's on the roadmap, if that comes up. But
now that we've gotten that out of the way, on to our discussion. To kick things off, we know that cloud adoption is rapidly increasing, but success is not easy. By the end of 2021, 67 percent of all enterprise infrastructure will be cloud-based; 80 percent of companies report receiving bills two to three times larger than what they budgeted for; and the number one priority for organizations in 2021 is optimizing existing cloud resources for performance and cost. Mary, is there anything you would like to add on why cloud success is not easy? Yeah, sure. As you can see, most companies will have to move to the cloud; we see some stats here, but it's really essential to keep up with the competition and grow your technologies further into the future, taking all the benefits that the clouds are offering: big data, flexibility, agility, scalability. And it's really hard, because many times people come from the on-prem world or they're new to the cloud, and the cloud is a much more complex infrastructure to manage than anything we ever saw before. But today we're going to show you how we can break it down into a few easy steps, to really take ownership of the cloud and own the cloud, instead of the cloud owning you. Thanks, Mary. So now, Mary, can you talk us through what Turbonomic application resource management, or ARM, is? Yeah, of course. First of all, any time we talk about technology, performance is the most important metric, because once performance is not intact, we're going to lose our users, so everything we talk about today takes performance into consideration first. However, achieving performance can sometimes be really expensive, and the goal of any company in the market today is to achieve performance at the lowest cost; that's what we're here to show you. The platform we're working with has three steps, as you can see here on the slide. First, we show you what we discover; that's
the observability part, where we show you the full stack of the application. Second, our platform takes all of that into the algorithm and gives you actionable insights that tell you exactly, in immediate steps, what you can do in your environment to keep performance up while keeping cost as low as possible. And lastly, as you grow in your cloud journey and you're ready to let the software manage the infrastructure for you, which can be really revolutionary for many people, you just click and, slowly but surely, automate, letting the software take control while you simply view the results: how the cost is going down, how the performance is going up, et cetera. On the right side you can see the full scope of what we're able to do: we're working with the compute tiers, storage tiers, databases, great optimizations on RIs and other discount plans like savings plans, and of course Kubernetes. So Mary just explained what application resource management is and that our three main focuses are observability, actionable insights, and automation, all while achieving that continuous performance at the lowest cost. Now that our listeners know what ARM is, Mary, can you explain what our approach to cloud optimization is? Yeah, sure, I would love to explain the four steps you can take for cloud optimization. The cloud can be really overwhelming in the beginning with all the services they provide, and when you start taking advantage of those services, many times you will get bill shock at the end of the month. We're here to break this down for you and make it very simple to start using the cloud while keeping your bill at the budget you want. So let's take a look; we have four items. Suspend: first of all, you need to suspend everything you're not using. For example, over the weekend you can put in a schedule where you suspend all your
virtual machines and other services, because nobody's using them over the weekend; that's just waste. Unlike the on-prem world, where things used to run 24/7, in the cloud world, where you're basically charged by the minute, you should suspend things that you don't use; you can do it over weekends or overnight. Secondly, you should delete any items that you don't use in the cloud. When you delete a virtual machine, many times the volume attached to that machine is not deleted automatically; it remains in your system, and you keep paying for that volume until you delete it. In addition, you should also delete abandoned VMs, which we see a lot in customer environments; anything that's not used by you or your users, you should delete. That's another huge savings mechanism. Scaling: that one is more complicated, because you can't just scale virtual machines down to the smallest size; you really should think about what runs on those virtual machines. The performance side of that scaling is done really neatly by our software, where we look into all the metrics and scale you down to the perfect size while taking performance into account, and also your savings plans or any other savings mechanism into account. That's really neat, and I'll show it to you in a few minutes in the demo. And lastly, after you suspend at off times, delete all the things you're not using, and scale everything to the perfect size, you can start to buy reservations. Since your environment at that point is already minimized to what you really need, buying reservations only then will give you a bigger discount on top of that, and that's how you really win the cloud game.
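Mary's first two steps, suspend off-hours and delete orphans, can be sketched as a policy pass over a resource inventory. The inventory fields and the schedule below are illustrative, not Turbonomic's data model:

```python
from datetime import datetime

def recommend_actions(resources, now):
    """Toy optimizer: suspend non-critical VMs on weekends, delete orphaned volumes."""
    actions = []
    weekend = now.weekday() >= 5  # Saturday or Sunday
    for r in resources:
        if r["kind"] == "vm" and weekend and not r["always_on"]:
            actions.append(("suspend", r["name"]))
        if r["kind"] == "volume" and r["attached_to"] is None:
            # the classic leak: a volume left behind by a deleted VM
            actions.append(("delete", r["name"]))
    return actions

inventory = [
    {"kind": "vm", "name": "dev-box", "always_on": False},
    {"kind": "vm", "name": "prod-web", "always_on": True},
    {"kind": "volume", "name": "vol-1", "attached_to": "prod-web"},
    {"kind": "volume", "name": "vol-orphan", "attached_to": None},
]
print(recommend_actions(inventory, datetime(2022, 1, 8)))  # a Saturday
# prints: [('suspend', 'dev-box'), ('delete', 'vol-orphan')]
```

Run on a weekday, the same pass would still flag the orphaned volume but leave all the VMs running.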
eyes we should really uh because i work with a lot of customers and customers many times when their bill grows they are really quick to buy reservations because um you know they need to just decrease the cost as soon as possible but once they buy your reservations they're locked into that amount of capacity for the next one year or three years depending on on the plan that they purchased however we can see here that any environment has the predicted capacity and the unpredicted capacity and what we should really aim for is first of all we need to shrink the capacity and that's where the scaling suspension and and deletion is coming along so once we shrink the predicted capacity only then we need to buy the reservations as the last step because now our environment is much smaller and we will be much more efficient when we bind the reservations you can see here that we also don't need to buy reservations for our full environment only only for the predicted amount that we're going to use and that also is neatly done with our software we have very similar charts to this one just to show you exactly what you need to buy and the cost and savings that are gonna produce okay thank you mary and now my favorite part um mary's going to share out her screen and give us a demo of what we were just talking about so i'm sharing the cloud homepage of turbonomic as you can see you have the menu over here here we have what we call supply chain this is internally what we discover the full stack from the application really to the region of of any cloud that you have uh we're multi-cloud we support azure and aws currently and more to come after you see and recognize your environment this is one of the cool things that many times nobody else can show you the full stack of your application but we we can um then we go and we see our environment so i'm gonna start with here we have all my accounts in the cloud so if i click on show all i can examine all the accounts that i have here you 
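the ordering described here — shrink the environment first, then reserve only the always-on baseline — can be sketched as a toy cost model (all workloads and rates below are made up for illustration, and this is not turbonomic's actual algorithm):

```python
def plan_reservations(hourly_vm_counts, on_demand_rate, ri_rate):
    """Split capacity into a reserved baseline and an on-demand burst layer.

    The baseline (the minimum number of VMs that is always running) is the
    'predicted' capacity worth covering with reservations; everything above
    it stays on demand.
    """
    baseline = min(hourly_vm_counts)            # always-on floor -> buy RIs here
    hours = len(hourly_vm_counts)
    burst_vm_hours = sum(n - baseline for n in hourly_vm_counts)
    cost = baseline * hours * ri_rate + burst_vm_hours * on_demand_rate
    all_on_demand = sum(hourly_vm_counts) * on_demand_rate
    return baseline, cost, all_on_demand - cost

# e.g. a day where 4 VMs always run and up to 10 run during business hours
counts = [4] * 8 + [10] * 10 + [4] * 6
baseline, cost, savings = plan_reservations(counts, on_demand_rate=0.10, ri_rate=0.06)
```

note that if you rightsize and delete first the baseline itself shrinks, which is exactly why buying reservations should come last; a real planner would also use a percentile rather than the strict minimum to tolerate outliers.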
can see i have here a few clouds shown to me in a really neat way if i expand an azure cloud i can see the resource groups that i have in this cloud so that's really cool as well i can just examine my environment the cost i have the actions i have the savings potential everything is really organized here now if i just want to take a look and see what turbonomic is offering me and take it back to the scaling deleting and buying reservations these two widgets really tell me the whole story so first of all necessary investments this is everything i should do in order to keep my performance and compliance in the right place so although i'm going to invest money i should really do it because if i don't i'm going to hurt performance and i'm going to lose customers so we always advise customers to start here and take all these actions later on when i want to start to save and be more efficient in the cloud i'm going to click here and see all the options i have so for the scaling category i can see i have here scaling virtual machines scaling volumes database servers and databases let's take a look and examine the virtual machines a little further i can see here that only for the virtual machines i can save around 4k a month that's just for the virtual machines if i want a further breakdown of how to achieve that each action over here will have a breakdown of its own let's take a look at one action i have the virtual machine name and i can also click in and see more details about it i know which account it belongs to i have the current instance type that it's on in the cloud i have zero ri coverage and here where the green bar starts this is the instance i should be on so i should be on this instance still with zero ri coverage and my cost is going to be lower as you can see here almost a 50 percent discount this is what i'm gonna save per month if i click on details it will give me even more data about this virtual machine so i can
see that this is a really classic use case where we don't have big usage on the virtual machine and that's why we're trying to size this down you might think that this is usually not what happens but actually many times in a customer environment we see a lot of virtual machines with almost zero usage and that's why we're resizing this down one of the really cool things here is that i can see the tags so tags are used to organize cloud environments by application or environment type like development or production so you can always use this to identify your virtual machine and also set some policies and rules around the actions based on tags so that's really cool here it's a pretty classic use case where we're just sizing down a virtual machine that's not being used a lot and i can flip through all my virtual machine actions here here is a virtual machine with more activity and we're sizing this up with 50 percent ri coverage we're using all the ris in the system and we're optimizing them that's why we decided that this one should actually have ri coverage from here after i examine all these details i can share it with my application owner to gain more trust and build a relationship with you know the application team i can also go ahead and execute this action and that will take place on the spot the virtual machine will be resized and will gain more ri coverage after i do this a few times manually we always recommend for users to go ahead and automate the policy for example based on the tag you can take your development environment and just automate all these actions on a daily basis or a weekly basis it will be automated for you and at the end of the day you can just examine the results with a report so that was the virtual machines we have a similar user interface for volumes so here you can see again the name the account here we can tell you that it's reversible as well some actions are non-disruptive and
reversible these are the ones that are easiest to take and it's a performance action too woohoo you see that's great so these ones are the easiest to take because even if there is a reason that you need to go back to the bigger size you can always go back and it's also not disruptive so these are some of the actions that users will take first so that's a great use case as well we have the database servers as you can see once you learn how one table works all of these are very similar in their concept so it's really easy to use as soon as you invest two minutes to get how this is built and i'll continue to volumes these are the unattached volumes here we can see that this one for example is already 82 days unattached and we're paying 10 dollars a month for it now it might not seem like a lot but it really accumulates in this environment we delete things pretty regularly but when you have a lot of unattached volumes you're just throwing money out the window it's not really good so this also can just be taken care of i can take all these actions at once with this button over here without going into the details and i can automate that you can see we're multi-cloud as well so we have aws here also azure has unattached volumes and gcp support is still in private preview my favorite part buying reservations so as i said first you need to scale then you need to delete you also need to suspend on a schedule and lastly you should buy reservations here we also have a breakdown of the full environment let's take a look at azure for example we have a gov cloud account over here so it's recommended to me to buy all these ris for the govcloud subscription we see here the instance types the quantity the term some of the things are configurable for example the term you can buy for three years not only for one year that's flexible you can buy for one account or for the whole environment so you really have all the options to fit
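the unattached-volume cleanup shown in the demo boils down to a simple filter over an inventory; here is a minimal sketch where the volume records and the 30-day safety window are assumptions for illustration, not actual turbonomic output:

```python
from datetime import date

# hypothetical inventory rows, e.g. exported from a cloud billing report
volumes = [
    {"name": "vol-a", "attached_to": "vm-1", "monthly_cost": 25.0,
     "detached_since": None},
    {"name": "vol-b", "attached_to": None, "monthly_cost": 10.0,
     "detached_since": date(2021, 9, 1)},     # 82 days ago as of 2021-11-22
    {"name": "vol-c", "attached_to": None, "monthly_cost": 4.0,
     "detached_since": date(2021, 11, 10)},   # too recent to touch
]

def unattached_waste(volumes, today, min_days_unattached=30):
    """Flag volumes detached long enough to be safe deletion candidates."""
    stale = [v for v in volumes
             if v["attached_to"] is None
             and (today - v["detached_since"]).days >= min_days_unattached]
    return stale, sum(v["monthly_cost"] for v in stale)

stale, monthly = unattached_waste(volumes, today=date(2021, 11, 22))
```

the grace period is the point: a volume detached yesterday may just be mid-migration, while one detached for 82 days is almost certainly abandoned money.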
your use case once i click over here i can see the chart you see how in this environment this d4s v4 standard vm is really running its uptime over a typical week it's running non-stop so when i see a use case like this i would normally recommend suspension over the weekend but i can see that from monday until sunday there's no suspension over here it's really rare that a virtual machine should be up at all times however if that is the use case and it's hard to suspend or delete things you can always just go and purchase what's recommended and we have some details over here let's say if you purchase this you will utilize it almost 100 percent of the time this is really minor probably there was a restart over here that's why it was not a hundred you have your total cost for this ri and your total savings over on demand so if you won't buy this ri you will pay 40 percent more that's a lot i would not want to do that and the estimated savings overall will be almost 900 so that would be only for one ri and we have a list of ris this will update based on your environment all the environments are growing over time so you'll have more and more ri purchases over time so here you have the ri list and over time your environment will grow and you'll purchase more and more ris and that's it this is how you optimize the cloud with a few small steps it's really all summarized for you in these two widgets this one is important for performance this one is important for efficiency and i'm glad that i was here today to present thank you so much and dina back to you all right and thank you mary for that wonderful demo i hope everybody got as much out of it as i feel like i did i felt like that was very insightful and i just want to if it'll there we go i just want to say thank you to everybody for joining us today again i hope you learned a lot and if you would like to learn more please feel free to
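the uptime reasoning here — an ri only pays off when the workload runs enough of the term — reduces to a one-line breakeven check; the rates below are invented for illustration, not real cloud pricing:

```python
def ri_breakeven(on_demand_hourly, ri_effective_hourly):
    """Fraction of the term a VM must be up for the RI to beat on-demand.

    An RI is paid for every hour of the term whether the VM runs or not,
    so it only wins when uptime * on_demand_rate exceeds the flat RI rate.
    """
    return ri_effective_hourly / on_demand_hourly

# e.g. RI effective rate ~71% of on-demand -> worth it above ~71% uptime
threshold = ri_breakeven(on_demand_hourly=0.14, ri_effective_hourly=0.10)
always_on_savings = 1 - threshold   # what a true 24/7 workload saves
```

this is why the uptime chart matters: for the non-stop vm in the demo the reservation is an easy win, while a vm that could be suspended on weekends might fall below the breakeven fraction.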
reach out to mary or me as well as try out and play with turbonomic at turbonomic.com mary is there anything you'd like to add before we answer some questions cloud is great you know as a technology you can unlock a lot of things let's be innovative i think it's really exciting times the cloud really changes a lot in our daily lives so i'll be happy to answer any questions and you know chat more about the cloud all right amazing let's hop into those questions great presentation dina and mary i just brought up a poll question for everyone out there that says what additional information would you like about the turbonomic solution and we appreciate your feedback on that mary and dina are you ready for some questions from the audience yes we are awesome all right excellent yeah we got a lot of good questions coming in already i can see one of the first ones i saw here that came in they were asking they say you know my scaling process is difficult but what can this do to help me cut costs quickly yeah so we've met customers that have a long approval process for scaling virtual machines and what i showed in the demo looks really easy from the software side but sometimes a lot of people are involved especially in big enterprises so for this use case i would recommend to start purchasing ris there were some questions about ris which ri should you buy usually people buy three-year ris and that's what i would recommend for this case just to start cutting the cost later on you can establish the processes for approving scaling decisions and make your environment the right size over time so just the short answer you can start buying ris and then scale over time and adjust the ris on the back end side of things excellent yeah it sounds like that could help a lot of companies to save some money you know rather fast next question are you taking suspension schedules into account oh great question yes we do take suspension schedules
into account and not only that we will also take all your uptime data into account if you have any policies or anything like you know you want to use certain instance families and not others all of that will be taken into account and the ri recommendations will give you only ris that will save you money given your schedules uptime etc okay excellent let's see another question here they want to know i'm sure you've heard this one before what makes turbonomic different from other vendors who do cloud cost optimization yeah great question so what is different in turbonomic is that we take performance into account first of all you can always size everything down and that will decrease your cost but then you're gonna lose all your customers because your quality of service will go down tremendously so every action in our software will take performance into account and we'll never decrease that in fact some of our actions are going to actually ask you to invest more money because we see that you have some congestion in your cpu or memory we will alert you and tell you you have a performance issue here increase that virtual machine size so that's the biggest differentiator of turbonomic from other companies i would say that this is very essential and key for growing over time and generally servicing your customers at a high level absolutely another question here when it comes to you know getting started with turbonomic what's the best way to do that is this something they would install is this a saas based service like what's the model and what's the best way to try it out yeah great question so we have multiple offerings but for our cloud customers we have a saas solution once they contact our team we can work with them and actually we already have a tryout website as well where you can just log in and try it out yourself and it's as easy a solution as it comes very nice yeah sounds super
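the "performance first" sizing just described can be approximated with a tiny heuristic: discard any size whose capacity the observed peak would exceed, then take the cheapest of what remains so a congested machine gets sized up rather than down (the catalog and numbers are made up, and a real engine like the one in the demo uses many more metrics):

```python
# toy catalog of instance sizes (name, vcpus, hourly price) -- illustrative
# numbers only, not real cloud pricing
CATALOG = [("large", 4, 0.20), ("medium", 2, 0.10), ("small", 1, 0.05)]

def rightsize(current_vcpus, cpu_samples, headroom=0.8):
    """Pick the cheapest size whose capacity keeps peak CPU below `headroom`.

    `cpu_samples` are utilization fractions (0..1) of the *current* size,
    so absolute demand in vcpus is sample * current_vcpus.
    """
    peak_demand = max(cpu_samples) * current_vcpus
    fitting = [(name, vcpus, price) for name, vcpus, price in CATALOG
               if peak_demand <= vcpus * headroom]
    # cheapest size that still fits -> performance is taken into account
    return min(fitting, key=lambda t: t[2])

# a nearly idle 4-vcpu machine: peak 10% CPU -> only 0.4 vcpus of real demand
name, vcpus, price = rightsize(4, [0.02, 0.05, 0.10, 0.04])

# while an overloaded 1-vcpu machine gets sized *up*, not down
up_name, _, _ = rightsize(1, [0.90, 0.95])
```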
easy to get started it can help a lot of companies out there to you know scale and make their it organization more agile and also save some money potentially at the same time so i'm afraid we're running out of time here in our live q&a slot i see there are at least 15 other questions i'm afraid we didn't have time to answer but maybe you can get back to those folks electronically thank you so much dina and mary great presentation thank you thank you and thank you to turbonomic of course for being on the event as well check out the handouts tab right there and you'll find a link to download the application resource management for cloud applications solution overview it looks like a really well done resource very visually appealing lots of stats here lots of good stuff so make sure you check that out it won't be as readily available as it is right now once the event ends so again thank you for all the great questions as well i'm routing these over to dina and mary and i'll leave up the poll question and we're going to announce our prize winners and to do that i'm going to hand it off to my fellow moderator mr scott becker scott are you ready to announce the prize winner i sure am david awesome well it looks like we've got a 500 dollar amazon gift card who's the big winner of that one scott the big winner is belinda zhao from louisiana so belinda congratulations and we'll be getting in touch with the details of that one so congratulations and i'd say let's move on to our next presentation so we're gonna be hearing from ping identity and aubry turner who's an executive advisor at ping identity so let's have aubry take it away hello there my name is aubry turner i am an executive advisor at ping identity some career highlights include i started my career at deloitte and then went to an organization that eventually became optiv and now i'm at ping identity so today as you can see the topic of this presentation is advising cloud
strategies and solutions and the subtitle is cloud identity and access management and what i'm really going to talk about what i'm really hoping you take away from this presentation is identity's role identity's pivotal role in your journey to cloud and there are really a couple overlapping journeys that are relevant so first we're moving services and apps and systems to the cloud we're also and this has been occurring for some time moving identity to the cloud and so we're going to talk about both of those things and their interrelationships and what some of those strategies look like what that journey to cloud and cloud identity looks like and again that's what i'm hoping that you will come away with but first we put customers first and this is one of the things about being at ping that i'm most proud of what you're looking at is our net promoter score that's on the left-hand side of your screen and that's really our customers honoring us with their level of satisfaction so we are again honored to have earned a world-class nps score of 65.
some of our customers include apple and netflix that's the neighborhood that we are in and to my recent knowledge i don't know if we have any competitors that have come close to matching our nps score so again this is driven by our unwavering commitment to our customers' success from an identity perspective and then on the right there is the 2021 gartner access management magic quadrant really just highlighting our recognition as a leader in access management so just a couple of bona fides that i wanted to share with you and the other thing that i'll say before i get into the heart of the matter is we're proven in the enterprise we've got some of the largest companies as customers 60 of the fortune 100 and well over 2 billion identities secured so that's pretty exciting and pretty awesome stuff and then what we're going to be talking about as it relates to this journey to cloud and journey to cloud identity is this cloud platform that we have and the experience orchestration that we are building to again help you deploy these solutions faster than ever before help you to integrate with a platform that is scalable and powerful and enterprise proven again you saw some of the data on the previous slide but really what this looks like from a journey perspective is the ability to again verify that user verify that identity identity proofing registering them being able to authenticate them strong adaptive authentication plus single sign-on and then being able to authorize that user what they can do once they have access via web and api and doing that authorization dynamically based on purpose and context and managing what we call the consent flow authorization being able to dynamically do that and last but not least safeguarding the user so monitoring them for
elements of risk certainly api intelligence and security so this is what the ping intelligent identity platform looks like at a high level how we're looking to build an orchestration solution that is built across these experiences in a platform that can deliver extraordinary experiences to users while at the same time being secure so cloud identity why does it matter some of this is likely going to be very obvious but nonetheless let me point out a few things certainly operational efficiency we've all heard you know since we've started talking about cloud over the last decade plus about the efficiency that the cloud can offer that plus most digital transformation has been naturally occurring in cloud as we look to build these new capabilities the fastest way that our teams have found to do that is naturally in cloud all right and so whether you know apis consumption of saas apps iot as well as other things and now in the last 18 months remote work all of that part of digital transformation strategies the cloud has been attractive from an efficiency perspective so that plus end user experience again all these things are factors in terms of why the cloud matters and then second but not least is market dynamics so the trends toward multi-cloud right and by the way in my opinion multi-cloud is not necessarily anything new probably the aspect of it that's relevant is resiliency certainly we saw some organizations have some unexpected resiliency issues so they've looked to spread their workloads across multiple environments but if you've been consuming saas apps chances are those saas apps are running in various clouds and then you've been in multi-cloud for some period of time and then this is one of the things i think our customers really value identity independence right and i think i could
say this is a ping tenet we try to limit vendor lock-in we don't really care what cloud you live in but you know we push for standards so that whichever cloud or clouds you're choosing to live in we can adapt and we can continue to help support your identity independence and that neutrality so that we can manage and orchestrate these experiences across whichever cloud or clouds you choose to be in but these are the reasons and the factors why cloud identity matters and why it's so relevant today so we are all on life's journey and the journey to cloud is no different it mirrors it in some ways and it's just another journey that we are all on and so another aubry with a far higher social media profile than mine once said sometimes it's the journey that teaches you a lot about your destination and i really like that quote because it illustrates again what you're seeing here in terms of the steps in the journey to cloud and to cloud identity these steps and i will get into them shortly they'll tell you a lot about your destination as you take them and again you'll see things related to speed agility and efficiency these are some of the underlying factors in that journey to cloud and journey to identity in the cloud so if anybody knows the other aubry that said that quote probably wouldn't be too hard to find out if you know how to use google shoot me a line on linkedin or something let's see if you're all paying attention anyway i'll continue so in that journey to cloud unifying administration so these use cases and i touched on this in the experience orchestration slide the ping platform we're building a platform we have a platform really that's covering use cases for customer b2c b2b and partner we're seeing more overlapping use cases so our platform
continues to evolve with those overlapping use cases in mind and then certainly you know you're looking to administer various environments development testing prod as well as standard saas applications and then legacy and non-standard apps some of our favorites that we have to figure out how we're going to manage as we move apps and identities to the cloud the value of this step in that journey is to accelerate common tasks so that we make those common tasks repeatable we unify that administration and then centralize access to all of your environments if they're decentralized and disparate it makes it far less efficient and agile to manage and administer those environments and then single sign-on to all these management consoles and portals etc is certainly part of it but this is your first stop to identity as the control plane and you're going to hear me refer to that before my time here is done identity is the control plane and this is part of that journey to cloud journey to identity in the cloud adopting identity as the control plane is a key element of that and you've probably got some zero trust running concurrently or driving some of this as part of the security framework where you're again balancing risk and efficiency and user experience as part of this migration or move to cloud and consumption of more cloud and yes hybrid is still a reality so for these legacy and non-standard apps as you go through your decision criteria in terms of whether you're going to move them or whatever you're going to do with them we're still seeing a tremendous amount of hybrid so being able to bridge cloud to on-prem or legacy and non-standard apps that last mile integration is still a huge part of this it's still a key part of the journey to cloud so again this idea of identity as a control plane establishing a cloud authentication authority so we're
going to benefit from this centralized orchestration of user authentication flows and that's where the ping intelligent identity platform sits again regardless of the user type customers employees partners we're supporting various user types and then this authentication authority the value here is that we can define these integration patterns and we can have one source of identity data this one source will allow us to be agile in terms of self-service onboarding for any application or system or resource across your enterprise again whether those applications are saas apps or you're migrating them to the cloud or they're going to reside in a private cloud wherever they are establishing a cloud authentication authority will have tremendous business value in terms of efficiency agility and speed and so consolidating these disparate identity systems processes and personnel can only lead to again that improved operational efficiency and again whatever that operating model is that you are aligning with so once you've done that right we want to sprinkle in some multi-factor authentication and adaptive multi-factor authentication as part of that so we've got all these use cases all of these authentication flows you know web mobile vpn ssh these different authentication points that again form these authentication use cases when i say globalize your cloud authentication authority what we're talking about is adding additional features to it that strengthen it and again not only strengthen the security but also enhance the end user experience for example with passwordless so that's another way that we can effectively marry security and the end user experience right and have a pretty good balance between security and end user experience so again we're talking about speed we can enable users to log in
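a cloud authentication authority typically fronts apps over standard protocols such as openid connect so that every app redirects to the one authority and reuses its session which is the single sign-on piece; the sketch below builds the authorization-code request an app would redirect the browser to (the issuer, path, and client values are placeholders, and the authorize path in particular varies by provider):

```python
from urllib.parse import urlencode

def authorize_url(issuer, client_id, redirect_uri, scopes, state):
    """Build an OpenID Connect authorization-code request.

    The relying party redirects the browser here; the central authority
    authenticates the user once (with MFA, risk checks, etc.) and every
    integrated app reuses that session.
    """
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,     # CSRF protection, echoed back on the redirect
    }
    return f"{issuer}/as/authorize?{urlencode(params)}"

url = authorize_url("https://auth.example.com", "app-123",
                    "https://app.example.com/cb", ["openid", "profile"], "xyz")
```

because the protocol is a standard, the same request shape works against any compliant authority regardless of which cloud the app itself runs in, which is the identity-independence point made above.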
faster across numerous authentication methods right and kind of meet the user on their terms again in this centralized global authentication model launch apps in any cloud consistently and securely so that's huge and i talked about those adapters or the ability to adapt to any cloud it's almost like you know i just went to europe and when you travel to europe you've got different outlets and volts and amps and things like that but you can bring an adapter or in some cases a converter and you can still you know power up and be able to do that easily and securely and this is kind of the same idea here in terms of being adaptable to different clouds and again bridging different clouds and being able to do that securely improved efficiency so part of this is again reducing help desk calls and call center costs through end-user self-service so again multi-factor authentication is a huge piece of this let's then add risk right so we've got single sign-on mfa let's add risk to that and that's the intelligence piece of this by the way frictionless security is also something that i've talked about and this is how we get to frictionless user experiences so you know anonymous networks whether the device is jailbroken the os version ip reputation and impossible travel sort of all of those device posture things we can aggregate that risk we can create a score that then makes a decision as to whether that user gets access i talked about frictionless end user experiences so we increase speed we have these risk thresholds and policies that are dynamic so that can help us build these frictionless experiences through machine learning and artificial intelligence we can have security for again any apps deployed in any cloud kind of
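the signal aggregation described here — anonymous network, jailbroken device, ip reputation, impossible travel — can be sketched as a weighted score with thresholds; the weights and cutoffs below are invented for illustration, since a real engine derives them from learned models:

```python
# illustrative signal weights -- a real risk engine learns these
WEIGHTS = {
    "anonymous_network": 40,
    "jailbroken_device": 30,
    "bad_ip_reputation": 20,
    "impossible_travel": 50,
}

def risk_decision(signals, mfa_threshold=30, block_threshold=70):
    """Aggregate posture signals into a score, then pick a friction level."""
    score = sum(WEIGHTS[s] for s in signals if s in WEIGHTS)
    if score >= block_threshold:
        return score, "block"
    if score >= mfa_threshold:
        return score, "step-up-mfa"
    return score, "allow"      # frictionless path for low-risk logins
```

the dynamic-threshold idea maps onto the two cutoffs: a clean login sails through with no extra prompts, a moderately risky one gets stepped up to mfa, and only the worst combinations are blocked outright.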
you know that analogy of again traveling to another country and still being able to power up your devices with an adapter or converter and then improved efficiency so reducing manual effort your administrators can focus on other higher level more strategic tasks so we can use machine learning to detect abnormal behavior and then take action on that and again this is not just ping right we can also consume and aggregate risk from other sources that you may have in your environment to again build a complete risk picture detect abnormal behavior and then make a decision from there as we continue then you know we're talking about verification affirmation and this could be document validation such as a driver's license i mentioned i went to europe recently international passports or some other valid type of document for identification or it could be data driven there could be other elements other factors that we use just based on the level of assurance that we need in order to verify the identity and one of the things that we're certainly seeing as part of these cloud journeys is verification being more tightly integrated with registration and authentication so this is again part of that journey to cloud with trust being a big underlying factor right that's what this is all about so the capabilities again for know your customer or if you're in financial services anti-money laundering streamlining all of those things from account creation and onboarding as well as authentication i talked about frictionless things like passwordless so all of those things are all sort of chained together and then being able to standardize the onboarding for all new apps and services i talked about standards and so we want to be standards driven you know
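detecting abnormal behavior can be illustrated with the simplest possible baseline model — flag a login whose hour-of-day sits far outside the user's history (a z-score check; production systems use far richer features and learned models than this sketch):

```python
from statistics import mean, stdev

def is_abnormal(history_hours, login_hour, z_cutoff=3.0):
    """Flag a login whose hour-of-day deviates sharply from the user's
    historical pattern -- a toy stand-in for behavioral anomaly detection."""
    mu = mean(history_hours)
    sigma = stdev(history_hours)
    if sigma == 0:
        return login_hour != mu
    return abs(login_hour - mu) / sigma > z_cutoff

history = [9, 9, 10, 8, 9, 10, 8, 9]    # a user who always logs in around 9am
night_login = is_abnormal(history, 3)   # 3am -> well outside the pattern
usual_login = is_abnormal(history, 10)  # 10am -> within the pattern
```

an anomalous result would not necessarily block the user; as described above it would feed into the aggregated risk picture and might simply trigger a step-up challenge.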
standards friendly and improving efficiency so you know self-service account creation as we build out this cloud authentication authority with single sign-on and there's a term that i've used it's sso everything and mfa everywhere so those are concepts that we can use and certainly we see them here and we don't want to just do single sign-on and mfa that's kind of like peaking in high school we want to add risk we want to add verification we want to build out these capabilities as part of this global authentication authority so one step here that we can also take as part of this journey is consolidating legacy iam in the cloud and so for legacy systems we can again accelerate the time to market for new apps with a single platform for authentication and actually authorization as well and the idea here is to replace these custom and proprietary systems with standards-based ones i touched on standards and this is all part of this identity as the control plane for cloud you know limiting and reducing on-premises infrastructure again we can certainly bridge hybrid that's the name of the game and so let me complete that thought around improving efficiency reducing on-premises costs admin costs operational costs the infrastructure that you have on-prem so consolidating legacy identity in the cloud onto a single platform that's part of this journey as well and your steps may mirror these or you may run some of these things concurrently right this isn't necessarily serialized you certainly can run pieces of this in parallel but that's roughly the journey to cloud the journey to identity in the cloud adopting identity as the control plane so let's talk about where you are in the journey right this is a current state of identity checklist i know this is very busy there's quite a lot on here to consume but let's talk
about where you are in the journey, what you have done. we all know the gratification that comes with checking off items on a to-do list, and ping can certainly help with that. so rather than me reading all of these to you, i encourage further conversation around your use cases. let's imagine the future, let's talk about your journey to cloud, your journey to identity in the cloud, and let's see where you are. so with that i want to thank you all for listening. sometimes in these virtual sessions it's challenging to get direct and immediate feedback, but if you have any questions and want to continue the conversation, like i said, i certainly encourage that. thanks again and be well, take care. all right, great, so i'll just put up the quick poll here about what additional information you'd like about the ping identity solution, and we'll leave that up during the q&a portion. but aubrey, really nice presentation, are you ready for some questions? yeah, definitely, scott, let's go. okay, first one here: how do you maintain visibility and security across multiple clouds, particularly when environments sprawl and are hybrid? yeah, so much of what i shared and discussed during the presentation comes into play here, and this is all about making sure we can move at the speed of business. we know that a lot of the digital transformation has been occurring in cloud, right, as organizations look to unlock different revenue sources or test out new business models, they've turned to cloud as the venue to support that and do it in an accelerated manner. and really the key here is identity as a control plane, as i mentioned, knowing who, what, when are accessing these various clouds and these systems and services, as well as data, that are
running across these different environments, right, and it could be in the cloud, it could certainly still be things that are in data centers. so that hybrid model, we continue to see quite a bit of that. even with all this accelerated move to cloud, the reality is a lot of mission-critical systems still live in our data centers, and we see business continue to extract value from that hybrid model. so with de-perimeterization, if you will, hopefully we've all heard that term, the castle-and-moat type security that we had is long gone, and we're not moving back to something like that, at least in the foreseeable future. so really what we have to lean on as part of our security model going forward is identity. that's going to allow us visibility into who's accessing these systems and what they're doing, and even before we allow them, make a decision based on conditions and purpose and context, be dynamic in terms of granting access, right. so we've authenticated you, now are you authorized to perform whatever activity it is you're trying to do? and candidly, network-based controls alone aren't really sufficient anymore to do that, so we've really got to leverage identity, and it can help us accelerate a lot of these things that we're trying to do in the cloud. so that's one of the key things in terms of gaining visibility across these various environments, and to sum it all up, scott, i'd say again, look at identity as the control plane. okay, great. and by the way, you made a reference to another aubrey for a quote, and derek did a nice drake quote, is that correct? that is correct, derek. okay, and i don't know if derek was quick with the google fingers or if he's a drake fan, but unfortunately we don't have a prize for that one. you mentioned identity, and casey had a question for you: any comment on the industry phrase identity is the new perimeter? yeah, i've got a good friend and colleague who has some varying perspectives on that phrase, along the lines of it being somewhat overused. and it's not a new phrase, by the way, it's been around for years now, and time flies so it's hard for me to really pinpoint it, but it's at least half a decade or more old, right, it's not a new phrase. and despite its potential overuse, the reality is that it is accurate in many ways. so if you think about digital transformation, especially in the last 18 months, and that's another overused term by the way, digital transformation, and i'll add another one to that, zero trust. we're doing a lot more with apis, iot. basically at some point every user is essentially going to be considered an external user if you're following zero trust, or an insider in some cases, depending on what angle you want to take on that, but you've got to apply risk to that. and then again, where we are developing systems and where we're building and creating new business models or new business processes and new applications, that's all occurring quote unquote outside the four walls of the companies that we have historically lived in or worked in. and so identity is the new perimeter, that's essentially where it came from. it's the most portable way in this new world of controlling access and maintaining security. so the perimeter is gone, i talked about de-perimeterization, well what do you have left, what means do you
have to take to apis that are running outside, public apis, apps that are running in public cloud, users that are business partners, companies are global, right, so you have business partners that are maybe supporting you across the globe, users that are now remote and working from home. this is where that term really lends itself to a brief description of this new security paradigm, identity is the new perimeter. and i've actually seen some folks take it to the next level: identity is the only perimeter. so it's really just a reflection of the world that we live in, the state of how we're doing business, how quickly business is moving, how we need to keep up with it, and how identity is this portable way, almost like a passport, for us to identify ourselves and prove we are who we claim to be as part of the authentication and authorization process. identity, and the context around identity, is the way that we're going to be able to do that in a frictionless manner with applications, data, and services running everywhere. so connecting any user on any device to any service running anywhere, the way that we're going to do that in a frictionless way is through identity, and that again is where that term comes from, however you feel about it and whether you feel it's jumped the shark or is overused or not, that's really the essence of it. solid answer. i always appreciate a jump-the-shark reference in technology terms. hey, i hate to wrap it up here, we had a ton of audience questions, i hope you can stick around a bit online and maybe answer some of those that have come in, but aubrey, thanks for a great presentation and a really good introduction to ping identity and a fascinating topic. yeah, thanks for everybody's time, and thank you derek for letting me know that you are
paying attention. so thanks, and i look forward to answering any additional questions that may be out there. i really appreciate the time, as always, stay well, take care. okay, you too, thanks. okay, so we're going to move on to our next amazon gift card prize drawing, another 500 dollar amazon gift card, and i'm about to announce the winner of this one. it is brett smith from kansas, so brett smith, congratulations, you've won this 500 dollar amazon gift card just in time for the holidays, and we'll be getting in touch with you about getting that card to you. so let's move on to our next presentation. we're going to hear about firemon, and we're going to hear from tim woods, who's vp of technology alliances with firemon. tim, welcome to the megacast. happy to be here, thank you very much. yeah, take it away. very good. well, first of all, yes, i do want to thank our listening audience for being here today, and i'm sure you've heard a lot of information today, so hopefully what i'm going to present adds on to that. if you're not familiar with firemon, hopefully you are, but if you're not, that's okay too. firemon is the leader in the nspm arena, nspm, network security policy management, and we help companies every day overcome some of the many barriers to enterprise agility by extending visibility, establishing continuous compliance, change detection, change analysis, and last but certainly not least, making sure that risk is not increasing within the respective environments as well. so let's just jump right on in here. i'm sure today you've heard about storage, i know you were just talking about identity access management, and you've probably heard some things about performance management. there are so many things to take into consideration when we're talking about cloud, especially when we're looking at our plans for next year and beyond. today i'm going to talk about network security policy management and then also cloud security
operations. in the very short time that i have, i've included a few extra slides here, so i'll blow through some of these, but i wanted to keep the slides in there for your reference later if you need them. so let's start first with network security policy management, and let me get rid of this pop-up that just came up, always at the most inopportune time. one of the issues that we see, this is the world that we live in, and probably one of our biggest adversaries, and i want to talk about that in a little more detail, is the growing complexity that we find within the environment today. aubrey was just talking about the perimeter blurring or the perimeter going away, and so the surface is definitely expanding, and complexity is expanding as well. what we find and what we hear from our clients and our users in engagements today is that they're struggling to get their arms around the very policies that are helping to control some of the security controls within their environment. what happens is, when you fail to manage your policies adequately over time, these policies can bloat, unnecessary complexity creeps into the equation, and the very thing that's meant to protect us actually becomes a threat vector in and of itself, and that's what we want to get our arms around. i've heard all kinds of stories about bad actors and nefarious individuals trying to get into the network, and all of that is very true, right, you have to understand clearly what is it that i have that someone else would want and to what length they would go to get it, and then of course you adjust your security posture accordingly to the level of threat. but i think also one of our biggest adversaries again is unnecessary complexity that creeps in over time. i originally created this
particular slide to show the sheer volume of rules increasing within the environment, the sheer volume of enforcement-point security rules going into the environment, but you could apply this to many things. you could apply it to applications being deployed within hybrid infrastructures today outside the purview of it security, and you could apply it to some other factors as well. but here's the thing that we know for sure: if you don't challenge complexity, if challenging complexity is not a part of your 2022 cloud plan, unchallenged complexity manifests itself in a couple of ways. number one, risk is guaranteed to increase, the probability of human error creeping into the equation is going to increase, and the probability of misconfiguration is going to increase, and so we have to challenge this. there's always going to be a certain amount of inherent complexity in any good security architecture implementation, but what i take exception with is the unnecessary complexity that creeps into the equation over time. so what are some of our current challenges? of course, increased complexity at scale, lack of automation, and the one to the right here, limited budgets and staffing shortages, which goes hand in hand with automation. take that slide for example that i just put up, the growth in rules: if i don't have the people, the resources, to manage that, then i need to turn to automation. if i'm not adding more people, i need to make the people that i have more efficient. i've tried to create a little picture of what i'm describing so that i can draw you into the value proposition that firemon provides, but kind of what we see today, this is just a micro picture of what our hybrid infrastructures look like today. you start at the
bottom and you have remote locations, the headquarters, the data center, our road warriors. you're connecting into the environment via an sd-wan or some other connection, predominantly today it's sd-wan, maybe you're leveraging sase today or you're exploring sase options as well, and then up into your cloud provisioning. but all along this path from ground to cloud there are policies, a lot of policies, and those policies are growing every day. how you get your arms around managing those policies is the value that firemon provides. we homogenize or normalize all those policies and bring them back into our platform under a single lens, so to speak, and then we allow you to do behavioral analysis of the policy, compliance posturing, making sure your compliance posture is where it's supposed to be and doesn't drift, and if it does drift, i can provide you actionable data in order to perform compliance remediation, making sure that risk awareness is where it needs to be. at the end of the day, as a responsibility to the business, we have to manage risk to a level that's acceptable. and of course proactive change assessment, meaning before a new rule or new access is introduced into my environment, am i able to sandbox that proposed rule and run my compliance assessments against it to make sure that it's not going to cause an impact to my business continuity posture, my security posture, compliance posture, et cetera. and then in the bottom there you see where we talk about api driven. i think for any good security application today you have to recognize that you are not the only part of the puzzle, and subscribing to a robust api so that i can exchange and enrich information with other systems within the environment, in order to grow the total value of the combined security solutions, i think today is paramount and will
become even more paramount in the future. one thing i want to touch on here: there are a lot of issues when we talk about some of the top challenges that we're faced with today, not just in the cloud but in the hybrid infrastructure itself, and i'm going to touch on visibility as the first one, given the time that we have. there are a lot of impact areas, as i'll call them, that result from a lack of visibility within the environment. we hear this time and time again: i don't have the visibility i need to manage security at the level that i believe i should be managing it. it's not that we don't know what to do, tim, it's having the resources and the tools to do it. but what are some of those impact areas? i think first and foremost it starts with change, i talk about change in the upper left there. as we embark on our digital transformation efforts, an overused term, or our cloud-first strategies, detecting change across a larger surface becomes more of a challenge in and of itself. but it's so critically important to understand, when a change happens within our environment, how to assess that change, in other words, answer some very basic questions around it. did i expect the change, probably first and foremost. was it an authorized change? does it have a valid business justification behind it? did it happen at the right time of day? and is there documentation associated with that change? some very simple things that become very important in our day-to-day operations. but not being able to detect change as it happens, especially as it relates to the access being granted within the environment, can be catastrophic. and then of course deployments, you can't manage what you can't see, so things that are being deployed within the
environment outside the purview of it security, how do you control something that you don't know about? policy behavior, we talked about complexity. maintaining compliance hygiene is probably at the top of the list, keeping those policies clean and making sure that they don't bloat and become uncontrollable over time is very important. so all of these things culminate, come together, and form a bigger problem if they're not addressed, and this is all one way of challenging complexity within the environment. i wanted to touch on, we did a webinar here recently where we talked about mapping zero trust to security policy management, so i stole a couple of slides out of that that i just wanted to touch on. it's so important to zero trust, in fact one of the first things, if you are trying to establish a zero trust architecture: zta requires you to understand what is in your environment. so even though that perimeter is expanding, even though the total breadth of that perimeter is not what it might have been, we still have to know what are the things inside it that we have to protect and what are the security controls that we have to apply to those things. and also policy, if you haven't already recognized this from your zero trust architecture effort, you probably will: if you're trying to establish zones of control, segmentation and or micro-segmentation, the policies that are controlling that segmentation, that are controlling those zones of control, you need to be able to not only maintain those, but as change happens you need to be able to make sure that there is no drift that moves you away from your security intent as it relates to your zero trust posture as well. this is more, i just
wanted to bring it up because in the cloud there is always a shared responsibility model. most of you, i'm quite sure, clearly understand that, but i would just touch on this as a component of your 2022 cloud plan: make sure that the people in your organizations understand what those lines of demarcation look like and where the delineation is between what the cloud providers have responsibility for and what it is that you have to take responsibility for. so i want to touch very quickly on the firemon value proposition. these are what we believe are the key essential tenets of security policy management if you want to ensure consistent security controls across that hybrid expanse. first, as i said earlier and i'll say it again before the end of the broadcast, being able to see everything. visibility, visibility, visibility is so critically important. you can't secure what you can't see, and you can't manage the unknown, it's very hard to do, and so we have to have the necessary visibility across our environment in order to manage it, in order to apply the appropriate security controls. scale: it doesn't matter how good the technology is, you can have the best technology on the planet, but if it doesn't scale to the size of the environment then you're not going to recognize the return on the investment that companies make. adapting to change: when change happens you need to be able to, like i said earlier, answer those very basic questions and understand what the impact of the change is and how i react to that change if necessary. broad breadth of support is very important, and of course the importance of the api is growing every day too. the ability to exchange information with those other security platforms that you have within your environment is going to continue to grow in importance over the years. so if
you weren't familiar, now we're going to switch gears right quick and go into security operations. there's a lot more i could say about security policy management, but hopefully that's piqued your interest enough that you'd reach out to us and we can talk more about your unique environment. barely three months ago we acquired a company by the name of disrupt ops, and disrupt ops is a security operations platform. the cool thing, the absolutely cool thing, about disrupt ops is that the visionary behind firemon was actually the ceo at disrupt ops and has now come back to firemon and is the current ceo of firemon today, so very exciting for us, but also very exciting for where that's going to take us in security operations. the main question that they ask whenever you get into the operational aspect of managing your cloud security is: how do you currently detect and respond to security events in your cloud environment? there are a lot of security challenges that you're faced with as you start looking at cloud-native services, and not just cloud-native services, because you probably have some third-party services as well that you have to manage. probably number one, one of the biggest fears in the cloud, of course, is misconfiguring a resource and exposing the data of that resource to the internet. we've seen that hit the front page of the wall street journal and other trade magazines, unencrypted s3 buckets making their way onto the internet, definitely not something that you want to happen. distributed operations, threat monitoring: one of the toughest challenges with cloud security is solving simple problems at scale. again, you can have the best technology on the planet, but it has to scale to the size of the environment as well.
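the misconfiguration fear tim describes, a publicly exposed or unencrypted bucket slipping through at scale, comes down to scanning an inventory of resource configurations for risky flag combinations. the sketch below is purely illustrative, not firemon or disrupt ops code; the bucket records and field names (`public_read`, `encrypted`) are hypothetical stand-ins for what a cloud inventory api might return:

```python
# minimal sketch: flag storage buckets that are publicly readable
# or unencrypted at rest, the two classic data-exposure misconfigurations.
def find_risky_buckets(buckets):
    findings = []
    for b in buckets:
        issues = []
        if b.get("public_read"):
            issues.append("publicly readable")
        if not b.get("encrypted"):
            issues.append("unencrypted at rest")
        if issues:
            findings.append((b["name"], issues))
    return findings

inventory = [
    {"name": "app-logs",      "public_read": False, "encrypted": True},
    {"name": "customer-data", "public_read": True,  "encrypted": False},
]

for name, issues in find_risky_buckets(inventory):
    print(f"{name}: {', '.join(issues)}")
    # -> customer-data: publicly readable, unencrypted at rest
```

the hard part in practice is not the check itself but, as tim says, running checks like this continuously across every account and provider and routing the findings to someone who can act on them.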
so the disrupt ops approach here is, from the time that you turn it on, from day one or day zero operations, as we start looking and finding those issues, we can add value. visibility again, visibility becomes so important across all of these operational parameters, making sure that you can see the threats coming in in real time. in environments as dynamic as they are today, it doesn't do us a lot of good if we catch something two or three hours later, we need to catch things as they happen, and then we need to get it into the hands of the right people so that they can take action on it within their respective workflows. so i'm going to show you, and this is a little bit of an eye chart, i'm going to blow up the response section on the right-hand side here so you can get an idea of what i'm talking about. but on the left-hand side we have all of these alerts and events, and i'm sure some of you are probably managing your amazon security hub and know all the different places that events can come from. being able to parse through those events so that i can get to the things that are most important, that i may need to take action on, is really the key, and then being able to get those into the hands of the people that can take action on them, that need to decide whether remediation needs to take place or not, is very important too. so in this example here you can see where a particular alert came in and was distributed to the teams within their chat ops. in this case identity access management credentials were used by somebody from an external ip address, and now we have some options: not only am i going
to tell you about it, but i'm also going to give you the option to take action on it. so do i want to disable this user? and this is just one example, it could be other things, but the key here is i don't want to just tell you that i've discovered a misconfiguration or an event that requires somebody's attention, i want to give you an action where you can correct it as well. that could be an automation action, it could be a snippet that i copy and paste, it could be a number of different ways that we react to it, but i want to be able to do it within the workflow that my teams are using today. deploying disrupt ops in your environment, you're going to see the value proposition of the features at the top, and then you'll see the value outcomes there at the bottom in the blue, and we're almost out of time already, so i'm just going to hit on a couple of these. again, better visibility: real-time monitoring for threats and misconfigurations is going to give you better visibility to those things as they take place. team collaboration, reducing alert fatigue so that i can focus in on the things that deserve the highest level of attention, simplifying things, helping my people become more efficient, and again, as i said earlier, responding to risk and or reducing risk. at the end of the day we need to manage risk to a level that is acceptable to the business. so better cloud security outcomes from automating and improving cloud security operations, and that pretty much puts me on the 20-minute mark, so let's go ahead and open it up for some q&a. that sounds good, tim, really nice presentation, and ramachandran chimed in and said, love this update, firemon, so i think they speak for a lot of us. one of the first questions here: how fast is the time to value once i've deployed firemon? it's
almost immediate. once you put it in, there are things that i would call day zero, just like with disrupt ops, we start looking at things from day one or day zero, and the security policy management platform is the same. when i start thinking about technical mistakes within a policy, the first thing a lot of people want to do is: i've got a really ugly legacy firewall policy that's grown over time and plagued us for years, and that's the first thing i want to look at. and there are certain things that we can do immediately, day zero, day one, that are going to help you clean up the policy, if we're just talking about the hygiene of a security policy that may have been there over time. and over time nowadays is not very long, a policy that's three or four years old can become incredibly bloated. it's not a surprise to us, but sometimes we find policies where 40, sometimes 50 percent of the rule base is not used. finding those unused rules or those technical mistakes, shadowed rules, duplicate rules, things that are never going to get hit, serve no purpose for being there, and expose you to inadvertent access. and then there's the second side of the coin, where over time, as we start looking at policy behavior and applying behavioral analytics to the policy, we add incremental value. we can start looking at rules that may not be a technical mistake, and may not be redundant or shadowed, but may just have become stagnant and no longer needed. yes, they decommissioned the resource that the access went into the policy for in the first place, but they didn't take that rule out of the policy, and the problem with that is if another resource is put in place and that
rule just happens to intersect with it, then it may provide access to a resource that it was never intended to provide access to, and that's where auditors and qsas look for inadvertent access, things like that. so yeah, that's a great question, but really from day one, that's how quickly you'll start recognizing value from the platform. okay, super. the next question comes from guy, he's asking, how can firemon quickly detect fatal cloud misconfigurations? so again, and i'm going to switch over to the disrupt ops side of it here, but look, it's parsing through those alerts. we have thousands of profiles that we run against the alerts and the events that come in, and we want to bubble up to the top the things that need the most attention, or that somebody should be assigned to take a look at. and it may not be an individual, it may be a group of individuals, in other words, i want to get this into my devops teams or my devsecops teams, and i want to promote it so they can collaborate to take a look at it very quickly. so it's really parsing through these, and as these environments grow, and it's not just one environment, right, we have to take into consideration that most large enterprises have a multi-cloud strategy anyway, they have multiple cloud providers, and so you have to look at it more holistically across your infrastructure and the totality of the cloud providers that you're using for the events that you need to parse through. okay, how does firemon provide risk insights? great question. one is looking at those events and, like i said, promoting the things that need to be promoted so that we can take action on them immediately. another,
maybe less talked about feature, but i think equally important on the policy side of things, is the ability to ingest vulnerability scan data and overlay it onto the actual network security policies themselves, making sure that if there's any intersection of ports, applications, address ranges, et cetera, that may be relationally linked to a particular cve, we can identify it. when a new cve comes out and you have hundreds of firewalls and hundreds of thousands of firewall rules, trying to identify very quickly whether any of those rules fall within the scope of allowing access for that cve within your environment becomes very difficult if you don't have an automation platform like firemon for your policy management, and those rules could potentially be a threat vector within the environment. so we pre-index everything, we use elasticsearch, and we have the ability not only to dynamically find those things by ingesting vulnerability scan data and overlaying it onto the policies, but we can search against anything contained within the database and come back with a result within seconds. so if a new cve for some new polymorphic virus came out and it's attacking a particular high port, and i want to see very quickly across my entire environment, holistically, for all the policies that i'm managing, i could very quickly query, using a natural language search, to determine if that port or application is exposed over any of the rules across my environment. things like that can very quickly help you to eliminate risk within your environment. we also have the ability to run what we call risk threat vectoring: we dynamically build a depiction of the network topology and we also overlay the
risk onto that or the vulnerability scan data onto that so that we can do what we call risk threat vectoring so in other words if a bad actor came in through a well-known threat entry point of my network how far could that bad actor get we base our knowledge on the routing intelligence the compensating controls and our understanding of the known vulnerabilities within the environment and then we can run these patented risk threat vectoring routines to determine how far a bad actor could potentially get within the environment okay well tim i hate to do this but we're going to have to leave it there we're out of time but great presentation and really appreciate you coming on and talking about firemon it's my pleasure and again i appreciate everyone's participation yeah and there's a lot of questions there in the console still and if you have a few minutes and can kind of hang out and maybe hit some of those we'd greatly appreciate it absolutely super thank you we're going to move on to our next prize drawing so the first prize here is an amazon gift card for 500 and the winner of that one is lyla davidson from new jersey so lyla congratulations and then we have another grand prize drawing this is for an iphone 13 in the color of your choice and the winner of this one is monique johnson from ohio so congratulations to monique we'll be reaching out to both of you with details of how to claim your prize so let's move on to our next presentation we have ed avisa from outsystems ed welcome to the megacast hello thank you for having me i appreciate it very much yeah thanks for coming on so take it away all right fantastic so good afternoon everybody thank you for taking some time to review the outsystems presentation with me my name as i said is ed avisa and i'm a solution architect here at outsystems today i'll be showcasing the
outsystems platform with an eye towards helping you understand the value we can add to your cloud strategy a cloud plan necessarily includes many elements that are critical to your success in adopting cloud technologies we believe that software development is a crucial piece of the cloud adoption process and today i'll provide you with an overview of outsystems as well as a short demonstration of the platform so you can better understand how we can help you achieve your digital transformation goals but first a few quotes according to the gartner group if you have not yet developed a cloud-first strategy you are likely falling behind your competitors companies across the business spectrum have accelerated their adoption of cloud technologies in order to save money become more efficient and reduce on-premise infrastructure costs a cloud strategy doesn't necessarily mean moving 100 percent of your workloads to the cloud but if cloud is not yet part of your planning it's probably time to take a serious look at your technology strategy because your competitors are already out there taking advantage of the benefits of the cloud in fact gartner also states that i.t organizations are now commonly accepting the pace and innovation of cloud providers as foundational to their business pace and innovation are highlighted here because one thing that has happened due to the growth of cloud adoption is that the pace of technology change has accelerated rapidly if you don't have the ability to keep up with the speed of change your competitors will always be two steps ahead of you so how do companies keep up with the current rapid pace of software development well according to forbes software builds are now measured in hours not weeks and there are multiple deployments a day as development cycles shrink from months to days keeping up with the speed of innovation is difficult in any circumstance but using traditional software development tools and techniques
increases the challenge of keeping up with changing technology organizations that want to innovate rapidly need new approaches to building applications and new tools that support agile fast and high quality software development outsystems can help because we offer a fully integrated low code application development platform that empowers developers to build software fast right and for the future again according to the gartner group by 2025 70 percent of new applications developed by enterprises will use low code or no code technologies up from less than 25 percent back in 2020 low code technologies enable businesses to create and change applications in a fraction of the time it takes to build them using traditional technologies outsystems gives you a platform to not only adapt to change but to embrace it and these days change is a constant as we see it winners are built for change so what is outsystems outsystems is a modern application development platform that empowers your organization to build applications fast by using a visual development language that makes it easy to create serious applications quickly but you're not only building things fast you're also building them right our platform creates industry standard code behind the scenes and you own that code we don't lock you into any proprietary software so there's no risk of the dreaded vendor lock-in with outsystems our platform is ready for the future ready for change and it solves the broadest set of challenges faced by businesses today a successful cloud strategy must address four key pillars for comprehensive business transformation first your customer facing assets must provide compelling experiences to end users these days difficult to use interfaces are no longer tolerated by customers people are used to having powerful easy to use applications that are available to them from a multitude of devices such as computers laptops tablets and smartphones providing fantastic customer
experiences will increase engagement grow your revenue and improve customer satisfaction from an internal perspective your employees also expect easy to use applications that allow them to accomplish their tasks without headaches employee satisfaction is not only critical to increasing retention but also to improving your company's ability to attract top talent happy employees are more productive employees and automating processes leads to higher efficiency and cost reduction across the organization with our built-in automation your developers will be liberated from menial work and empowered to focus on higher value initiatives and finally modernizing your applications both internal and external will help you reduce infrastructure costs simplify your i.t landscape and potentially even help you reduce your dependence on third party software that may or may not provide the capabilities that your company needs in short the outsystems platform will give your organization the power to innovate through software and that's our mission and our promise so let's take a look at the idea of digital transformation in a little more detail to begin with digitalization is not the same as digital transformation if you lift and shift servers from on premise to the cloud you haven't really transformed anything rather you've just moved some stuff from here to there digital transformation is about innovation that changes how you do business it is about creating applications that improve the way your business works and that is where the rubber meets the road to help companies transform how they do business outsystems has created a well-defined and proven adoption journey that we will tailor to your specific case when companies start working with outsystems we focus our efforts on helping you accelerate and de-risk time to first value by selecting the right projects and the right enablement program to get your first outsystems apps deployed moving forward to
the evolving phase we help you increase your throughput delivering digital solutions our data shows that by increasing delivery throughput the alignment between i.t and the business increases significantly as a consequence transformation becomes self-sufficient as business stakeholders agree to increased investment in your transformation program and the company moves forward into the scaling phase where deployment of the platform is expanded throughout the organization finally our most successful customers reach an optimizing phase in which they have deployed outsystems to support multiple lines of business subsidiaries and users around the globe leading to self-sustained rapid innovation our level of involvement will vary depending on your needs over time and we have a full complement of partners who are ready and able to help you succeed companies that successfully transform into digital leaders focus on what we call the value chain when you are able to remove intermediaries in your supply chain by using software to create frictionless experiences for customers you are able to accelerate value creation for those customers software becomes a critical conduit in your interactions with customers providing them with real-time interaction with your business by focusing on the value chain and taking control of it you will be able to create the amazing digital experiences that digital leaders like apple uber and netflix have made commonplace with outsystems you'll have the power to create those experiences rapidly with the speed of change customers are accustomed to you'll be able to innovate quickly using a modern application architecture that minimizes or even eliminates the technical debt of legacy systems and by distinguishing your company as a digital leader you will be able to find and attract top technology talent that will further increase your reach and grow your digital footprint and productivity so now let's take a look at an overview of the
platform we help companies and organizations change how they think about software and the delivery of software applying automation and artificial intelligence to software delivery you can visually develop your applications easily integrate with any system and change applications with no limits but let me dive in a little deeper here when we call outsystems a full stack application development platform what we mean is that you can build your apps from the ground up everything needed from the front end to the back end is included in the platform think of it as building with powerful building blocks that automate and accelerate tedious and frequently error-prone tasks the creation is done in a visual environment where you assemble the layers of your applications from this set of building blocks front end back end process orchestration integration data everything you need to create complex enterprise applications can be done visually using outsystems less experienced developers are able to become effective full-stack developers in a matter of weeks allowing them to be upskilled easily and experienced developers are able to move even faster avoiding many of the mundane tasks that consume much of their time when coding by hand and what's amazing is that this visual approach is the same for every type of application you build many organizations want to know how we fit in with the i.t complexity they already have in place our platform is extremely simple to integrate and extend with outsystems you have everything you need to create and deploy applications that integrate with your existing systems or external services and can run on any device all in a single platform you can connect out of the box to the most common enterprise databases like oracle sql server or db2 you can quickly integrate web services rest or soap apis cloud services and enterprise systems such as crms and erp systems and you can extend the platform with custom code if you need to with outsystems developers
are always in control of what they create sometimes developers worry that they may encounter platform limitations later in the development process that might prevent them from building enterprise grade applications but rest assured they won't with outsystems outsystems is open by design to allow all layers of applications to be extended with your own code libraries integration components and any custom code can be published and managed for later reuse we also have an open source community code repository called the forge there you'll find code modules connectors ui components and other components to help speed up application delivery time we have over 2500 available connectors with over a million downloads you can use pre-built connectors to enterprise platforms like salesforce and netsuite or easily incorporate the latest ai and machine learning capabilities from azure or aws without having to learn the technical details of integration if there isn't an existing connector that meets your needs you can also create your own with our platform and share it with your team the platform also provides a full devops solution out of the box that supports continuous delivery and continuous integration of your applications and this can also be integrated with your existing enterprise devops tools we support the entire development life cycle you can create and debug applications visually test them directly on mobile devices or in the browser deploy them across environments capture feedback directly in the apps and monitor them with sophisticated dashboards and management tools the platform handles all the dependency analysis code compilation database scripts and deployment so your architecture never breaks and finally the outsystems platform enables your team to create a wide range of rich and engaging digital experiences not only can you create user interfaces for any type of form factor but also things you may not be thinking about yet like chat bots and
integrations with virtual assistants like siri and alexa imagine your digital experiences reaching any and all customer touch points we ensure the user interface looks fantastic regardless of the design skills of the developer outsystems provides a rich beautiful ui kit to create great experiences with best practices built in i apologize i got backed up a little bit sorry about that these are packaged themes templates and patterns that are fully reusable you can customize your interfaces adapt them or modify them to create brand consistency across all your teams you have complete control of the user interface down to the individual pixel level it is also easy to include your own custom javascript and css if you have components that you'd like to leverage and finally the platform allows you to expose apis and services that can power other experiences and applications in your environment we are a complete fully integrated application platform not a bunch of components cobbled together your applications are architected for scale security and performance out of the box we have over 200 automatic security checks across the full stack and the apps you build and the applications created with outsystems are also highly scalable we have customers with millions of end users hitting our servers and we also offer deployment flexibility while most of our customers use our cloud offering many use their own public or private clouds or run on premise in their own data centers okay so now that you know what outsystems is and what it can do let's take a look at a short two minute video demonstration of the platform this is the number one platform to deliver mobile apps portals and web applications voice and chatbot powered virtual assistants even core business systems with outsystems you can create and manage large application portfolios that integrate existing systems and reach millions of users applications are built and changed in a visual environment where you can define
your application's workflow processes user interfaces for both web and mobile devices including offline functionality and access to device sensors business logic and data models all development is done in a powerful yet easy to learn drag and drop development environment that you can even extend with your own custom code the platform offers ai assisted development to speed up app creation it's like having a smart and friendly expert helping you it also makes it really easy to integrate with any system such as legacy platforms large enterprise packages public or private web services and enterprise integration platforms when you're done one click is all you need to deploy your apps and for mobile apps you can create the package to submit to the app stores with just one click or deploy them as progressive web apps once your apps are live testers and end users provide feedback right in the app all your applications are generated for standard open development stacks they can run in the outsystems cloud or any public or private cloud or on-premises they are fully portable so you'll never be locked into a proprietary cloud environment or a single provider when deploying your applications across environments outsystems runs a complete impact analysis to validate all application dependencies and ensure a robust and risk-free staging process when everything is ready the platform automates the devops process for you making it easy and fast to roll applications into production while giving you complete visibility and control over the entire process real-time performance monitoring is built in for all your applications you'll be able to see exactly how each application is performing and ensure your apps are providing an amazing user experience whatever type of application you build with outsystems you have everything you need to rapidly and easily deliver your next great business application with no limits so what does the marketplace think about outsystems according to the gartner group
outsystems has been considered a leader in their low code application platforms or lcap magic quadrant for five consecutive years now in the 2021 report which was just recently published outsystems is the number one low code platform from the ability to execute perspective and compares favorably with much larger players in the space such as salesforce servicenow microsoft and mendix which is owned by siemens the forrester low code wave leader reports for both general application development and mobile development also place outsystems far up and to the right in the leader's zone technology analysts across the software spectrum consider outsystems to be one of the finest technologies available for building disruptive applications quickly but what do our customers think as you can see from the results on gartner peer insights outsystems by far has the most positive engagement of any company in this space to see why we're so highly esteemed by both industry leaders and customers in general let's take a quick look at two case studies before we wrap up the presentation think money is a digital disruptor in the financial services industry the company was looking for an agile way to improve the customer experience of their digital banking services in just six months we led the delivery of a b2c mobile banking app that allowed them to retire an off-the-shelf banking platform and launch a new customer onboarding process all while enabling a newly formed think money team to deliver three new customer-facing apps in only six months according to the customer adopting outsystems for our digital initiatives has been a game changer for us schneider electric set up an outsystems powered digital capability to support organization-wide transformation they were able to deliver their first project three times faster than initially anticipated schneider told us that they saw the low code platform as a catalyst to bridge the gap between business demands and the available i.t
resources and that's why they chose outsystems to wrap it up i invite you to visit our website at outsystems.com you can learn more about who we are and what we do and you can download our free edition to get started learning and building your digital transformation with outsystems thank you great thanks ed really nice presentation appreciate the demo as well let me switch things up onto the poll here just so you know once again you'll recognize this one it's about the outsystems solution you can just click on what you'd like to see from outsystems but ed we do have a number of questions that have rolled in here and the first one i want to ask you is from bradford he says how easy is outsystems to learn for my development team and then how about my less technical team members can they learn low code from outsystems well that's a great question so outsystems is really designed to make the professional developer more productive so we didn't start out with the mission of enabling non-developers to build code however because we are a low code platform non-developers can get started very easily so existing developers people with development skills can take to this very quickly frankly when i interviewed for the role that i have at outsystems i had to create a demonstration and i had about three days to do it and i managed to integrate with a google cloud backend database and create a great user interface with full security and all these different capabilities now i'm an experienced developer but it took me about two or three days to get up to speed and get going with it non-experienced developers can come up to speed very quickly as well so we definitely address the citizen developer market but experienced developers will be able to come up to speed very quickly super the next question comes from paul he's asking can the outsystems digital transformation
platform apply to more agile smaller scale smbs as well as it might apply to a larger organization absolutely we have a licensing model that allows smaller businesses to get started and scales all the way to the enterprise level in fact as i showed in that last slide we have a free edition that anyone can download and that allows you to get started obviously there will be some limitations on what you can do in terms of deployment but a free edition is available and again our licensing model has lower cost entry points so you can get started with outsystems for only a couple of users if you have a small organization that's trying to do it so we cater to all levels of business from startups all the way to large enterprises okay super hey you know similarly on the infrastructure side steve asks are there minimum software standards that outsystems needs in order to work and he mentions that they have multiple legacy systems that run on older technology what do you need to get up and going with outsystems so in itself outsystems is a complete platform that is deployed with whatever dependencies it needs the ide the integrated development environment that you work in is either a windows based or a macintosh based application so to start as a developer you just need a computer with normal specs these days maybe eight gigs of ram and a gigabyte of disk it's not a huge space consumer on the laptop or desktop and on the back end integration side as long as those backend systems can expose services via web services soap or rest apis we have no problem integrating with any of those systems okay great and we've probably got time for one more audience question here and this one is do i avoid
vendor lock-in with outsystems absolutely i mentioned this in the presentation you can take the code with you through what we call the detach process basically if you want to leave outsystems after some period of time once you've adopted it we will happily work with you to give you back all the code you created in the system before we disconnect the platform so vendor lock-in is one of the things that was top of mind during our development process for this platform because we wanted to ensure that customers can jump in adopt this platform and build some applications but if they find that it's not working out for them we give them an easy way to remove their code and take it with them for deployment anywhere else they'd like okay great well ed thank you for that presentation thanks for bringing us up to speed on outsystems really appreciate your participation today i appreciate it very much thank you all for your time and attention great okay so let's move on to our next gift card drawing so this is for another amazon.com gift card this one for 500.
and this one goes to anna shirley from south carolina so anna congratulations we'll be getting in touch about claiming your gift card and now i'd like to move on to our next presentation we're going to hear from stu miniman at red hat he's director of market insights for hybrid platforms great to be with you in this community again by way of introduction i'm stu miniman the director of market insights with red hat we're going to be talking since this whole event is about 2022 cloud planning i'm sure most of us are a little bit shocked that we're turning the page to 2022 already because a lot has definitely changed in the last two years and we're going to talk a bit about that in the session so real quick the agenda we're going to of course talk a little bit about the pandemic impact what's happened with cloud and then we're going to turn the page and look a little bit at 2022. so just as most of us look and say what resolutions am i going to be making when the new year comes around what habits do i want to set we as an organization should also be doing some of the same things so if i look back the pandemic did hit a lot of us like that wrecking ball i'm sure most of us had our plans for 2020 i had certain trips on the agenda i had certain business and personal objectives and most of that went out the window by the time we hit the end of q1 of 2020.
when it comes to cloud it's been interesting what has happened many companies have found that with working remote and the impacts and ripple effects of what's happened to not only our businesses but our customers and our partners the cloud businesses have actually accelerated so one of the pictures i'm showing here is a graph from my friends at wikibon tracking the public cloud providers and their growth rate and this is growth rate so if you look when we get through 2020 it looks like growth rate went down a little bit but mostly that is just because you reach the larger numbers it is tough if you're multi-billions or tens of billions of dollars to keep growing at a 20 or 30 percent or even higher growth rate but all of the cloud providers amazon google microsoft and alibaba listed here are growing at phenomenal rates for multi-billion dollar businesses we're over 100 billion dollars worth of combined annual revenue between the top four providers now and if you look even amazon actually accelerated in the last couple of quarters going back to a growth rate that they hadn't seen for a couple of years so amazing to see that from kind of the macro level of what's happening in cloud but what did that mean for individual businesses what were some of the lessons that we've learned so first of all we did need to change we know change is one of the toughest things for any of us to do and organizationally you go through months of planning for the year i know i've been sitting through lots of meetings getting ready for 2022 and you have your plan you're in your first quarter you've probably trained your sales team you've worked with your partners everything's all set and then we need to majorly readjust what's going on so what do we need to do we need to make sure that we can be very adaptable to these environments because we need to be able to respond to the change we
can't just keep running the same playbook i'm a fan of pro football here in the us and it is one of those things that some of the best teams are the ones that when you get through a quarter of the game or half the game you reassess your game plan and adjust accordingly to make sure that you can react to what the other team is doing and what is working and what is not that's something that we tend to do in business but it needs to happen much faster today we all know that the pace of cloud and the pace of change in our industry has been rapid well the pandemic did accelerate some of those things so many companies as i said had to dive into cloud in conversations with some companies if i had a call center in a physical location then within a matter of a week or two i often needed to be able to enable that remotely i needed to make sure my security processes were still in place when rather than making sure that only the people in the building could access things now i need everybody to be remote and it's not like we haven't had vpn for decades and the like but security needed to be checked as to what was happening and there was as i have on the list here some experimentation so what works now that might not have worked before what lessons can we learn where can our organization learn from our peer organizations i had actually at the beginning of 2020 sat down with the head of remote for gitlab talking about how modern developers often are working asynchronously and remote so this is not necessarily all new but for many companies it did accelerate that move to be more remote to be able to have flexibility in where people work from and there was also that move to leverage the public cloud more because i could not go into my data center whether it is i have a networking issue and i need to reset some things and rewire whether i needed to swap out or add some
components if i go to the public cloud they can take care of those pieces for me that flexibility is also super important something i've talked about you have to be flexible and you have to be ready to be able to change and just the last thing on this slide is there's been almost a trope in the industry for a number of years that software's eating the world and everyone is becoming a software company well not every company is monetizing the software that they are working with but there is a software experience at the center of what we're doing so explosive growth in public cloud and in saas and those are some of the things that you need to look at so once again as we turn the page to 2022 think about the habits that you are making as an organization and we know that it is not something where just overnight you say you're going to do something different or march down a path it normally takes 30 to 60 to 90 days of doing things a new way before it really gets ingrained as a habit and that's something we want to look into so the pandemic affected lots of companies in lots of different ways so let's actually dig in a bit and talk about where people are and some of the steps they can be taking as we are hopefully emerging into the next phase of what's happening globally you're probably familiar hopefully with maslow's hierarchy which says we need to make sure that some of our base needs are met before we go up the stack so from a personal standpoint i need to have food i need to have shelter i need to have stability and then i can start working on greater fitness or my further education and in i.t it is the same way there's a joke in the industry that if i don't have wi-fi i'm probably not doing anything else well here i have to start with the base of the pyramid infrastructure and
connectivity making sure that i have a secure environment and then i can start going up the stack more to really make sure that my it is differentiated and responding to the needs of the business so just a framework uh so something to think about that of course uh you know what whether i'm building a house whether i'm building my it you you start with the base and then you build your way up and you want to make sure that that you're meeting the needs of the business so when we break down uh looking at companies as they've been responding to uh the pandemic not every company just hit the accelerator and said wow let me dive into cloud obviously there were certain industries uh in in certain regions that were hit very differently when it comes to uh the the pandemic so we might not be able to you know take on that brand new project we might have needed to take care of some basic uh things to make sure my business is running so if i think about uh you know the restaurant and hospitality industry i need to make sure that i have online ordering or takeout so you saw you know huge growth of some of the sas companies that support that type of environment so um if i was a company uh that maybe the the pandemic slowed me down um and i'm using cloud i've dipped my toe in or it might have actually had me start going to the cloud for the first time um we're going to look at some of those base pieces of the pyramid so what are the things that i can do in 2022 to help my business start moving a little faster and taking advantage of cloud the first one on the list there is automation so automation is something we've been talking about in our industry for decades what is different today is it's not just point let me think about something that i was doing manually and let me script it and let me automate it but i want to really look at it across the board and automate holistically of course from a red hat standpoint we have a number of solutions that fit into that environment ansible being 
the one that is front and center but even if we look at from a linux standpoint from kubernetes standpoint automation is baked into everything we're doing because for the most part most companies have reached the point where they are beyond that human scale and you can't have your people just scrambling constantly to deal with all the changes we need to put the automation in place to be able to take advantage of what's happening um data is a theme that you'll see woven throughout this presentation because we know uh that our people and our data are some of the most valuable resources that we have so how do i start getting insight out of my data so do i understand where my data is can i take advantage of it from a base standpoint am i making decisions based on data uh the cloud when i go to the public cloud i should not just keep doing things the way that i was doing things uh before you need to relearn things you need to make sure you take the time to do training and your people understand this uh the the value and power that you're taking advantage of with the cloud and one of the best ways to do that is through what we call managed cloud services so that means that many of the things that i would have done if i was managing the it environment when i go to the cloud is there you know a service that i can take advantage of that takes care of that for me we're basically just shifting that responsibility onto the vendor onto the platform they have people that can take care of that from a red hat standpoint it's where we've seen some of our largest growth our partnerships with azure and with aws for our managed kubernetes services based off of openshift as well as we have managed services uh on google uh and we have upper layer uh managed services that are built on top of uh those for more advanced things like uh streaming and api management uh and the like and the last one something that i mentioned uh earlier is security um security is something that is 
everyone's responsibility is something that we need to really think about at every layer of the stack um it is no longer the case that i can just say oh well security is a team off on the side where it's a firewall that makes sure that nobody comes into my environment it's something that everybody from the application developer through all of the infrastructure pieces and the architects working with the cloud need to make sure that security uh is front and center uh and unfortunately when everybody went home with the pandemic uh that did not stop uh some of the uh the bad actors um because there's even more that's going on we actually did uh did a whole event on this uh with the mega cast earlier this year to talk about uh some of the security imperatives uh in 2022 just one of one of the hot items that that's come up is the secure supply chain so make sure you look into that you will find now that most of your software providers are going to have signatures uh so that you can make sure that you're getting the right components and that they are secure and everything so we can watch holistically on the system to make sure that i'm using good software i'm up to date with my latest patches uh and the like um what if i've i've already been using cloud for a bit i've taken care of some of those base pieces of automation and security and taking care of them here are some of the you know i guess we go from kind of a 101 to the 201 series of you know going a little bit faster so we're strapped in using cloud um how do we take advantage of things more so one of the pieces that is the toughest is really looking at our application portfolio so how do i take my application portfolio and how do we make sure we migrate it more to a cloud native environment so if you've looked at your applications there are some applications that we can just offload completely so you know as an example something like email if you would manage that in your environment hopefully 
you've moved to an office 365 or google suite or the like where you know that's just a sas offering so sas what you can and then when you go to the cloud environment there are some things that you will build new and there are other things that you will need to migrate in and if you lift and shift it in there you will hopefully then modernize it because if you don't modernize it you will often find that both the capital expense in the cloud as well as your operations team managing the cloud will quickly match if not exceed what you were doing in your data center so the the goal is if you can modernize things uh have the right architecture for for things in the cloud it will match the economics that make sense as well as the operations that make sense this goes somewhat in line with what i was just talking about on the previous slide with the cloud services themselves i talked about data before if i have my data and i understand my data if i have data scientists on my team and i can start digging into machine learning and artificial intelligence there are ways that i can really create new businesses uh for uh my uh my company i can work with uh our partners we can really respond to the needs of our customers uh even faster so we've seen large growth in machine learning and artificial intelligence over the last few years it's really exciting almost every single vertical has you know great success stories of how these are fitting in and they tie very well to not only what's happening in the public cloud but that starts to go out to edge environments and is an area where you know kubernetes is a key component uh from the team i'm on with openshift we have a lot of customers that are going down that path and starting to really reap the benefits from machine learning and artificial intelligence and the other piece i have on this slide is service mesh so once i've gotten to the cloud there's a lot of different components so how can i really have something that isn't just 
the base infrastructure but all of the services how do i tie them together how do i make it more of a pluggable architecture for more things to be able to kind of tie in and come out with it um so now if i'm i'm more of a pro you really have some expertise in the cloud uh we've we've been uh you know leveraging the cloud we really feel that like our company might not have been born in the cloud um but you know we have some real strong knowledge of how to use the cloud how can we really uh really accelerate what we're doing so i'll start with one uh that's serverless so serverless when it came out now boy it's been a number of years since you know a lot of people would say that amazon's lambda service uh was really the the marker of the the era of serverless coming about uh the idea is really all i want to do is write business logic and not think about the underlying infrastructure if you listen to your developers they don't want to have to think about you know server storage and networking underneath what they're doing they just want to write their applications build their application and let the platform take care of it if you've leveraged any of the alexa devices uh from amazon so you just ask it something and it goes and gets the information that's actually built off of amazon's serverless technology well what's been interesting over the last couple of years is serverless is actually from an architectural design and from a how software is built uh has permeated more of the industry so we're actually seeing coming together the container and kubernetes world and serverless so amazon has had a number of things that they've done to extend that and the open source community has a project called knative which has reached the 1.0 generally available ready for production environments uh you know just recently on that uh red hat is heavily involved in knative activity as are a number of our peers in the marketplace and again i don't need to think 
about the you know the underlying containers and all of the server storage and networking pieces underneath it uh my developers can really just build applications on top of it uh and they can leverage it and what's nice about knative and what we do with openshift serverless is that is not tied to a specific cloud we can use that across all of the cloud platforms uh which really amplifies and extends the benefit of what we we saw from kubernetes um talking about the developers themselves there's a word i'm not sure if you've run across something called quarkus so i'm sure you know what java is and really what quarkus is is java for the cloud era so that is something uh that we have seen a lot of growth uh over the last few years uh lots of applications were written for java so how can i modernize how can i build new while leveraging the skill set that we've had from java uh for a really long time uh and the last one on this slide is gitops so gitops is really extending the conversation that we were talking about when it comes to automation so github is where a lot of your software code your projects a lot of things your developers are probably living in github itself so when it comes to managing the state of what code i have what version should i be using we can actually automate that through this gitops operation so once again built off open source argo cd and tekton are the projects in the open source community where code can live in github and we can make sure that as an enterprise i i understand what version of code we're using i have the latest patches and it will from an automated standpoint make sure that the the code in github and the configuration that i'm running in my various clusters and environments stay in sync so we don't get drift we don't have a group going outside of the bounds that we're doing we can make sure that we we manage the environment and going back to the needs of a security standpoint uh we can patch things faster 
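The drift detection described here is the core of the GitOps loop: compare the state declared in the repository against what is actually running, and flag any difference. Below is a minimal sketch of that comparison in Python; the resource names and fields are invented for illustration, and this is the idea only, not how Argo CD or Tekton is implemented:

```python
def detect_drift(desired, live):
    """Compare desired state (as declared in the git repo) with live state.

    Both arguments are dicts mapping resource name -> config dict.
    Returns a list of human-readable drift findings.
    """
    findings = []
    for name, want in desired.items():
        have = live.get(name)
        if have is None:
            findings.append(f"{name}: missing from cluster")
        elif have != want:
            changed = [k for k in want if have.get(k) != want[k]]
            findings.append(f"{name}: fields out of sync: {', '.join(changed)}")
    for name in live:
        if name not in desired:
            findings.append(f"{name}: running but not declared in git")
    return findings

# hypothetical desired state, as it would sit in the repository
desired = {"web": {"image": "web:1.4", "replicas": 3}}
# live state with drift: an outdated image plus an undeclared debug pod
live = {"web": {"image": "web:1.3", "replicas": 3},
        "debug": {"image": "busybox"}}

for finding in detect_drift(desired, live):
    print(finding)
```

In a real GitOps setup a controller runs this comparison continuously and can apply the declared state automatically rather than just reporting the drift, which is what keeps clusters from wandering outside the bounds the speaker mentions.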
something we've seen some really fast uptick on this uh just earlier in 2021 is when the projects for gitops were released and at day one we had customers doing it uh now we have hundreds of customers that are embracing gitops everything from some of the public sector government environments that really want to make sure that they have you know control uh and security taken care of uh through you know financial services and many many other industries that are taking advantage of gitops so there's there's no shortage of you know cool new technologies uh to take advantage of in 2022 there's a great roadmap in the open source community uh that we're excited for um so you know as we look across this spectrum no matter where you are whether you're early in your cloud journey or if you're really more experienced there's things that you can do to help accelerate what you're doing so aligning yourself and your business to what's happening in the cloud in multi-cloud in your hybrid environments as we said meet you where you are uh make sure uh that that that you can help accelerate what you're doing and we as red hat are here to partner with you we've got lots of your peers that you can talk to one of the things i love is that everything we do at red hat is built off of open source which means that there is a large community around it and it's not technology for technology's sake and you can learn from your peers and take advantage of new technologies so much faster so in 2022 we hope we will be able to make some plans and be able to stick with them a little bit further but we know that the pace of change will keep going on at a rapid clip and you want to be able to keep up with the needs of your business keep up with what your customers are having you do and therefore we can help you uh accelerate what you're doing so uh we have some time to answer some q a from the audience uh i want to thank you for joining me you can always 
reach out to me i'm really easy to find on the twitters i'm just at stu from a red hat standpoint we have a bunch of resources on the site for for you to be able to get more information or to reach out and connect with us for more information uh so thank you and we'll go to the q a okay super yeah stu really nice overview of where we stand as a as an industry and in cloud adoption and where the sort of the state of the art is as far as living in the cloud also have to say i really like your advice about you know sas what you can and and your caution about the way you know and lift and shift you can end up doing as much in the cloud as you were in your data center that kind of leads to the the first question that we have here which is does this approach that you've been laying out does that work better if we're more on premises or if we're already in the public cloud yeah yeah that that's a great question because look we're talking about the cloud but you know as a reminder you know we're we only have as an industry about 20 to 25 percent of all applications you know in the cloud today so while we talk about cloud and you would think that you know oh boy i'm behind if i don't have you know 80 percent of my applications in the cloud the reality is we know that this is a long journey these are the type of journeys that are more like you know five to ten years uh for them to go uh when i listened to uh some of the big analyst firms out there they said well we might reach 50 percent uh in the middle of this decade so you've still got a few years on that many companies i uh talked to you know you'll make a three-year strategy to get many of your applications to cloud but when it comes to the data center look data centers are not going away um when i look at you know amazon microsoft and google you see that they are building more solutions to be able to you know come to your environment or the hosted environments and it's going to be a mix so right many of the things we're talking 
about cloud is not just about a location it's a mindset and it's operational capabilities so yes we can still do many of the the features and things that we talked about in in this presentation uh can be done in your data center uh the the thing i caution is you know can you take advantage of the the innovation that's going on there's certain applications in certain pieces of your business that probably don't need to change um but many things do need to change so uh it's a mix from a red hat standpoint obviously we have we have a strong history in the customer's data center you know the proliferation of linux uh really started there and we you know we wouldn't have most of these public cloud providers if it wasn't for the underpinning of linux so uh yes it is a hybrid world i did an ebook uh uh earlier this year uh talking about hybrid strategy and uh the big point i put in it is today we are hybrid and if you look out the next five to ten years you're going to be even more hybrid because you know it's not just your data center it's the public cloud the edge is coming into the mix so uh one of the rules i always have in i.t is that everything is always additive it is difficult to ever get rid of anything great well stuart thank you very much really appreciate you being here and uh and uh appreciate red hat's participation all right uh thank you everyone it's uh great great to be here and wishing everybody a happy and healthy and prosperous 2022. 
sorry about that so uh we've got a poll up here um for you know any additional information that you'd like about the red hat solution um so if you would fill that out we would greatly appreciate it and they can send you any materials that that you'd like and i'd also like to remind everybody about the handout section um you know where we have uh great resources from from all our presenters today uh including red hat so so please check those out as well um and we did have more questions than we had time for um but uh we'll be passing those along to red hat and hopefully they'll be able to get back to you on some of your specific questions um and i do want to mention um you know in addition to all the great questions that we've we've had come in there is a 50 dollar prize an additional amazon gift card that's available if if you have the best question of the day so keep those questions coming we're still considering uh you know potential winners for that and at this point i do want to announce our next gift card um it's going to jay dibble from tennessee so jay congratulations and we'll be in touch about how to uh how to claim that gift card and from there i would like to jump to our next presentation so we have richard beckett from sophos so i'm going to hand things over to richard hi everyone and welcome i'm richard beckett from the sophos public cloud security group and today i'm going to be talking about the 10 steps to cloud security success in 2022 and how sophos can help you and we're going to be covering three key topic areas first we'll examine the cybersecurity threat landscape as it relates to the cloud and the shifting tactics of attackers second a closer look at the 10 steps of cloud security success and how sophos can help you achieve your responsibilities for security in the cloud and thirdly why sophos's blend of automated protection and managed services in a single package is becoming so attractive to modern organizations embracing the 
cloud now for those not already familiar with sophos we are a cyber security company we've been around for more than 30 years at this point and our mission simply stated is to protect people and businesses from cybercrime and we do this via a combination of powerful and intuitive products and services that provide effective and comprehensive cyber security for organizations of any size and when we say comprehensive what i really mean is that we cover networks and data security user and device protection and of course we do that for on-premise networks remote users and of course the cloud that we're here to talk about today and these are the types of security incidents that we're defending organizations from every day those that 70 percent of organizations running workloads and data in the public cloud have experienced in the last 12 months and while blocking these incidents is a high priority for all organizations some teams that are faced with a range of responsibilities that include security are finding that they need the blend of the right security tools the right managed services to ensure application uptime and data security at all times and with this shift you know organizations are also seeing an increase in ransomware statistics with 59 percent of attacks where data was successfully encrypted now including data in the public cloud and now more than ever it's important to be aware of data security best practice across all environments whether that's on premises whether that's in a hybrid or public cloud environment and with these incidents we're seeing new tactics involved here as well they're automating searches targeting those harder to monitor cloud resource misconfigurations to gain entry into an organization's environment now misconfigurations are now causing up to 66 percent of incidents in public cloud environments so organizations need that ability to also shift their focus to continually monitor for these potential backdoors into your environment and 
according to a recent sophos research project you can see just how quickly someone can identify these misconfigurations including vulnerabilities like leaving rdp or ssh exposed or data storage in public mode or perhaps over-privileged iam roles giving an attacker access into that environment and these types of vulnerabilities these misconfigurations can be found and they can be targeted in less than a minute so it's leaving the window to respond very short for you as an organization now a lot of organizations can think that they are securing data securing workloads in the cloud just because they are aligned or they are using the services from say amazon web services or microsoft azure or google cloud platform and so on and unfortunately this isn't the case you know each cloud provider will have what they call a shared responsibility model for security and as you move to the cloud it's important to take the time to understand your cloud provider's own model here now let me begin by expanding on this shared responsibility model by touching on a subject that we call operationalizing security in the cloud and what this really breaks down to is like a mental governance model if you will there has always been the shared responsibility model with any cloud provider an example of this is on screen here but you know each provider is going to have their own specific version this is kind of a catch-all generic version if you like that tries to provide a summary of where everyone's responsibilities lie now what you see here is really just a definition in blue and orange of the different security responsibilities of the cloud provider so aws azure google and so on now the cloud provider is responsible for security of the cloud so the compute resources the underlying infrastructure that you're utilizing and the customer so yourselves you are responsible for the security in the cloud so any data that you upload or applications that you upload into your 
environment that is where your responsibilities lie for securing those pieces and the term operationalizing that security is really important to grasp and so if i zoom in here on your responsibilities what we mean by operationalizing security is it's about selecting the right security software tools the right people in your organization or in third-party organizations and the right processes so if we spend just a moment just to look at these in a bit more detail now the right security software tools what i mean by that is that there are of course cloud native security tools that you can leverage there are also tools from cloud providers' security partners like sophos where we've taken some of those native tools and we've built on top of them we've really enhanced and expanded their capability so having the right selection of software tools to watch the specific areas of your cloud environment for security incidents is really what we're talking about and i'll of course define the the right areas in just a moment then we have people you know having the the in-house or the third party accessible individuals that are skilled with those particular software tools whether that is you know for example being able to use a siem to respond to events inside your central alert system or responding to or managing native security tools like amazon guard duty or azure sentinel for instance or leveraging the sophos security stack that we have on offer so people that are familiar with how to operate the security tools is really important and then you have the process you know what we mean here what's important here is that you have a well-defined process for what might happen on a bad day for your company so if there is a security incident what is the process that you have in place to respond to that incident to investigate it to triage it to find the root cause and you'll most likely need a different process and tools for different types of incidents as a you know a 
network perimeter event would be different say than that related to identity so having a well-defined process used by people that are skilled to operate the tools that you have in place those three things combined are really what we mean when i say operationalizing security in the cloud is of the utmost importance in your cloud environment now let's just uh talk a bit about what we can do at sophos to help here to do that i'd like to take a look at the 10 steps let's call them the 10 steps to cloud security success you know any customer can use these when determining what areas of your cloud environment should be monitored for security events 24 7. so let's take a look at these 10 now and i'll highlight how sophos can help you address these areas as we go through and a little bit later on in the presentation as well now the first area here is cloud infrastructure vulnerability scanning you know this is basically a definition or a set of requirements that points to your cloud environment and allows you to routinely scan your cloud infrastructure for known software vulnerabilities so no doubt you're using different software tools and applications in your environment and this is really just a mechanism a service if you will to make sure that you're monitoring for known vulnerabilities the second area is cloud resource inventory visibility so this is a continuous scan and reporting of all of your cloud resources and their configuration status now of course you probably heard you know the old saying you can't secure what you can't see so you first need to make sure across your company that you have visibility of all of your assets so your virtual machines your containers your iam roles for instance you know any workload running in the cloud should be visible to you so this is a service that is specifically aimed at making sure you have a complete inventory of everything you have running in the cloud and ensure that something like shadow it 
just doesn't impact you negatively now the third area is cloud security best practices monitoring so every native service from your cloud providers whether it is amazon s3 or ec2 or your vms google databases whatever you're using they all have a security configuration best practice defined for them and this is a service which monitors for those specific best practices against all the native services that you might be using and it detects misconfigurations and alerts the person or the system monitoring this specific area if that misconfiguration occurs so common examples might be shared storage buckets that might be open and exposed to the internet when it's not intended to be in the first place now the fourth area is compliance monitoring so we've taken some of the most common compliance areas like cis foundations benchmark or pci dss or hipaa for instance and this is a service which monitors your cloud environment for compliance status against those specific standards and alerts the system or people or persons monitoring that specific area if there is a deviation from those standards now the fifth is around monitoring and triaging security events so this is a service which is able to monitor all of these areas that we're looking at on the screen here and the other five that i'll be displaying here in just a moment for security events so there should also be recommended remediation guidance or full remediation done by a service on top of this as well so remediation actions could be things like close open ports when they're exposed or shut down unauthorized instances that might be spun up now the sixth area is really to make sure that number five is happening around the clock so 24 7. 
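Steps two through four above (inventory visibility, best-practices monitoring, compliance monitoring) all reduce to the same basic loop: walk the inventory of cloud resources and flag any configuration that violates a rule, then alert whoever is watching. Here is a toy sketch of that loop in Python; the rules, resource types, and fields are invented for illustration and are not Sophos's or any cloud provider's actual checks:

```python
# hypothetical best-practice rules: (rule name, predicate that flags a violation)
RULES = [
    ("storage bucket open to the internet",
     lambda r: r["type"] == "bucket" and r.get("public", False)),
    ("ssh exposed to the internet",
     lambda r: r["type"] == "vm" and "22" in r.get("open_ports", [])),
    ("iam role with wildcard permissions",
     lambda r: r["type"] == "iam_role" and "*" in r.get("actions", [])),
]

def scan(inventory):
    """Return (resource name, rule name) for every violation found."""
    return [(r["name"], rule) for r in inventory
            for rule, violated in RULES if violated(r)]

# a made-up resource inventory, as step two would have collected it
inventory = [
    {"name": "logs-bucket", "type": "bucket", "public": True},
    {"name": "web-vm", "type": "vm", "open_ports": ["443"]},
    {"name": "admin-role", "type": "iam_role", "actions": ["*"]},
]
for name, rule in scan(inventory):
    print(f"ALERT {name}: {rule}")
```

Compliance monitoring (step four) is the same loop with the rule list swapped for the controls of a standard such as the CIS benchmark, and the 24/7 monitoring in steps five and six amounts to running a scan like this continuously and routing the findings to people who can act on them.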
you know attackers are not working nine to five you know these uh incidents are going to happen around the clock and usually at the most inconvenient times for you and your team so your security tools and environment should be monitored around the clock as well now number seven is distributed denial of service or ddos mitigation so this has to really do with the edge of your cloud environment so you should have a service in place to be able to monitor for potential ddos attacks that can impact the availability of your web facing applications for your customers and that service should have the ability to detect these events prevent them or mitigate them in the first place now the eighth area is managed intrusion prevention system or otherwise known as ips now this is a defensive service that defends against known threat patterns you know typically on the network side and it increases your overall security posture within your cloud environment now next up the ninth area is managed detection and response so mdr is the industry term here but specifically mdr for cloud-based endpoints you know where an endpoint is a virtual machine or an application or a workload or a container maybe a serverless function running and so on this is really a combination of software running on those endpoints monitoring for threats at the infrastructure level and human beings you know experts trained in their cloud security environment looking for potential security threats detecting them investigating them and of course you know removing them from the environment and then the tenth step the tenth area here is managed web application firewall you know otherwise known in the industry as waf of course so this protects your web-facing applications and your apis against some common exploits that are really very prevalent out there on the internet so that combination of these 10 areas these 10 steps is really what we recommend that you cover within 
your organization to get success out of your security program. So if you can't cover all ten steps yourself, we highly recommend you get help covering the rest: if you're covering five on your own and need help with the other five, that's a great reason to talk to Sophos. Here at Sophos we have a single cybersecurity package of tools and managed security services to help take some of the weight of cloud security off your shoulders. Think of it as your ticket, if you like, to achieving those ten steps we just outlined a moment ago.

Let me take a moment to break down the Sophos package for you. It's a single cybersecurity package that includes automated security across cloud workloads, virtual machines and endpoints, wrapped in our Sophos 24/7 Managed Threat Response (MTR) service to ensure the security of your data: we can proactively prevent vulnerabilities and block threats around the clock for you. You'll have a dedicated team of security experts from our MTR service continually monitoring your cloud environment and responding to security events, whether that's 3 a.m., weekends or holidays; it doesn't matter to that team.

There's also a very flexible deployment approach. Protection can be managed in-house by you if you wish, or through a Sophos managed services partner to ensure correct deployment of Sophos protection, correct configuration, and day-to-day security management. And there's a third option, a middle ground if you like: our Sophos Professional Services team can help with the initial deployment of the security tools and then hand over to you for day-to-day management.

From a cloud provider platform perspective, this also helps you get much greater value from key AWS services like Amazon GuardDuty or Security Hub, or Azure services like Sentinel. The reason is that security findings from these services are analyzed, risk-assessed and prioritized by Sophos to ensure nothing is lost in the noise of your cloud environment in the first place. And because we provide these tools and services as part of a connected system, all managed from one central console, working with Sophos unlocks three big benefits. First, optimizing prevention: we enable you to proactively identify misconfigurations and gaps in security best practices, which helps prevent security incidents in the first place. Then, if a threat does get in for any reason, we're about minimizing the time to detect and respond to those incidents, with 24/7 security monitoring in place by that elite team of experts. That means you benefit from the best people and the best technology protecting your environments, your customer applications and your data.

I'll highlight one of my favorite quotes from one of those customers, a company called Celayix: "Sophos is there 24/7, so my team doesn't need to be." For Celayix and a lot of our other customers it's just not realistic, as I mentioned, to keep internal teams monitoring security around the clock to cover your side of the cloud shared responsibility model, so it can be more cost-effective to outsource that and realign those internal resources to more revenue-generating activity for the organization.

Most cloud security incidents could be proactively prevented with proper configuration and a good understanding of the cloud provider's shared responsibility model. These ten steps to success are a hassle-free approach from Sophos: we combine those steps with our single cybersecurity package to enable you to quickly add the right security tools and the right expertise, whether that's in 2022 or whether that's today.
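As a side note, the kind of triage described here, collecting cloud security findings and ranking them by severity so critical items aren't lost in the noise, can be sketched against the AWS Security Hub API with boto3. This is an illustrative sketch of the underlying service, not Sophos's actual integration; the region and filter values are assumptions.

```python
def top_findings(findings, n=10):
    """Rank Security Hub-style findings by normalized severity (0-100), highest first."""
    return sorted(findings, key=lambda f: f["Severity"]["Normalized"], reverse=True)[:n]

def fetch_active_findings(region="us-east-1"):
    """Pull ACTIVE, workflow-NEW findings from AWS Security Hub.

    Requires AWS credentials and Security Hub enabled; illustrative only.
    """
    import boto3  # assumed available in the environment
    client = boto3.client("securityhub", region_name=region)
    findings = []
    for page in client.get_paginator("get_findings").paginate(
        Filters={
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
            "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
        }
    ):
        findings.extend(page["Findings"])
    return findings

if __name__ == "__main__":
    for f in top_findings(fetch_active_findings()):
        print(f["Severity"]["Label"], f["Title"])
```

The ranking in `top_findings` is the part a managed service builds on; `fetch_active_findings` only runs against a real AWS account with Security Hub turned on.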
All of this helps you reduce security risk, improve your cloud security posture, and improve the efficiency of your security program and those internal teams of yours.

As I wrap up today's session, I'd like to point out two key next steps if you'd like to learn more about Sophos and how we can help. First, you can register for a free cloud security and compliance assessment today at sophos.com/cloud-security-assessment, and we'll work with you to complete a security and compliance review of your environments and present back a summary of the findings and how we can help harden your security posture. You can also find more information on any of the security tools and services in the single package we've discussed today at sophos.com/cloud. With that, I'd like to thank you for your time today. I hope you've enjoyed the session, and I believe we now have time for some questions.

Indeed we do have time for some questions. Richard, are you ready?

I sure am, yeah. Thanks for having me.

Super. Given your perspective on the market, your catbird seat, so to speak: what do you expect hackers to change in 2022?

Yeah, 2022, not too far away now; I've just put the Christmas lights out today. This last year we really saw growth in attackers exploiting vulnerabilities and misconfigurations in cloud environments, automating their searches for those weaknesses, those backdoors. What we're now starting to witness through the back half and tail end of this year, and going into next year, is really more of an evolution from our customers: they're increasingly becoming software companies, and we're finding security needs to evolve in parallel with that. What we've really seen in recent
incidents is software development pipelines presenting vulnerabilities, and the challenge for a lot of the security teams we speak to is how security keeps pace with that level of automation within the organization and how you get those efficiencies out of it. Attackers are exploiting those pre-production environments, those automated build environments, where vulnerabilities and misconfigurations can be quickly replicated across the environment, and increasingly security needs to be embedded and automated into that process. We want security to enable that transformation, not get in its way or slow it down in any way, but really enable the business to grow. So from an attacker point of view, integration into development pipelines is definitely where we're seeing movement going into next year.

Interesting, okay. The next question comes from Ashish, who's asking: does Sophos provide value and services for Salesforce and GCP as well?

Good question. Definitely GCP. Sophos primarily covers AWS, Azure, GCP and Oracle Cloud. From a GCP point of view, yes: we can provide workload protection for the various endpoints you might be running there, we have firewall solutions, and we have our configuration and compliance monitoring service, Cloud Optix, that I mentioned during the webinar today. Those are all available for GCP today. Salesforce, not so much; we're not quite in that space at the moment, so it's more the infrastructure-as-a-service and SaaS space from a Sophos perspective.

Okay, great. And on that note, we've got a question here: we're running a lot of containers on AWS; what specifically does Sophos offer for
protection there?

Sure. A lot of people know us more for endpoint and firewalls. What we're doing at the moment, through our cloud security posture management service, Cloud Optix, which we mentioned in the session today, is providing visibility of the containers you're running across your environments, whether that's AWS, Azure or GCP again. It scans those images for OS vulnerabilities that might be exploited, and it signposts where there are high-risk vulnerabilities with fixes available for those container images that you can go and implement right away. So today it's about identifying those OS vulnerabilities. We've also recently acquired a company called Capsule8, which specializes in Linux and container runtime protection. That's now going through the integration process into our stack; it's available as a product today, but it will be integrated into our workload protection suite in the first few months of next year. That in turn will provide runtime protection for those containers, and those threat detections will at the same time be fed into our MTR service as well, so they've got that detection telemetry when they're monitoring the environments for you.

We've probably got time for one more question. I'll ask one here from Bradford, who's wondering if you're an all-or-nothing solution: can he slowly build his security environment with Sophos?

Yeah, definitely. A lot of customers come to us from, say, endpoint; they move to workload protection because they like the agent and the protection it provides against ransomware and so on, and that's a very common pathway by which people move to us.
The package I've mentioned today is obviously a single package, but it's built of components, and you can take those individually. I've talked a lot about what someone called the ten commandments today; as I mentioned in the session, if you're doing five today and want our help with the other five, or if you want to run a combination of native AWS or Azure services and Sophos protection, that works, and it's all possible within the same console. Dare I say, you can mix and match Sophos protection with other vendors. We'd obviously love you to manage it all from one central console and have the efficiencies of that experience, but it's definitely possible, and all the component parts I mentioned today are available separately. This package was ultimately designed in partnership with AWS as part of their Level 1 MSSP competency, which they launched at re:Inforce earlier this year, and it was designed together with them to take some of that weight of security off people's shoulders and give a clear, guided approach to which areas of your environment you need to secure and which you need to monitor around the clock. So that's where we're trying to help; very flexible, as I mentioned.

Tremendous. Well, Richard, thanks for the presentation. I personally really appreciated it, and I know the audience did as well. The ten security steps, or commandments, that you laid out were really useful, and thanks for bringing us up to speed on Sophos and what you're up to.

No problem, thanks very much for having me. Thanks very much, everyone.

All right, now we're going to move on to our next prize drawing, and while we do that I'll leave the poll question up about any additional information you'd like
about the Sophos solution, and point you to the handouts section, where there's a handout from Sophos with more information. We do have one more gift card to give out here; this one goes to Thomas Anderson of California. Thomas, congratulations. And we have one more grand prize here as well. Actually, I shouldn't say one more: we have another gift card and another grand prize coming after our next presentation. But this one is for an iPhone 13, and it goes to Gary Medelitsa of New Jersey. Gary, congratulations. We'll be getting in touch with both of you about how to claim your prizes.

From Faction, I'd like to welcome Alex Sammer, Senior Technical Product Manager. Alex, welcome.

Hi Scott, thanks for having me.

Take it away.

All right, welcome everyone. We're going to have a short discussion about how we can build a better multi-cloud solution, and we'll go into the details of what multi-cloud means, what the benefits are, what the challenges are, and what we offer to help you overcome some of those challenges. Myself, I'm a senior product manager here at Faction for our Cloud Control Volumes product line; I've been in the storage industry for roughly 22 years and have worked on storage products at a lot of different companies.

A quick look at Faction, who we are: we were founded in 2006 in Denver, Colorado, which is where we're based, and we're building and innovating hybrid and multi-cloud solutions so that users can leverage all the benefits that multiple clouds offer. We have roughly 300 petabytes under management, ten different locations for our cloud services, and over a thousand end-user customers. We're partners with Dell Technologies, VMware and AWS, and we're also backed by Dell Technologies Capital.

So when we take a closer look at multi-cloud, what is the current
state of multi-cloud? If we look at the Flexera State of the Cloud report, 92 percent of enterprises have a multi-cloud strategy, and the average usage is about 2.6 public clouds when you take all of them into consideration; there's a link at the bottom of the slide if you want to look at that report. What are the key takeaways? 49 percent of work is basically siloed between the individual clouds, so you're doing one thing in cloud A and another thing in cloud B, and only 45 percent are integrating data between the clouds, that is, making the same data sets available between AWS and Azure, for example. The other key item is that 54 percent of workloads are expected to be in the public cloud within 12 months, so we see a lot of movement in terms of taking workloads into the cloud and running them with the flexibility the public clouds offer. And once you're in the cloud, a pretty high percentage plan to optimize their cloud costs: you start out, you build your environment, and then you have to figure out how to best optimize the cost structure within the cloud and leverage the benefits of the different clouds.

When we look at a typical multi-cloud solution today, you have your individual clouds, and in each cloud you have the data set you work with and the compute that accesses it, but that data set is usually accessible only from a single cloud. You have your data for AWS, you have your data for Azure, and if you want to move data between clouds or leverage the same data set from one cloud in another, you have to copy it with various technologies and make sure it's always up to date at the same point in time, so you're working with your most recent data set in each cloud. You also have multiple IP addresses and multiple
volumes to manage to keep everything in sync. Those are a few of the challenges, and looking at them more closely: with the classic multi-cloud solution we saw on the previous slide, you create data silos across the clouds. Each cloud works with its own data set, but they're not in sync, and you can't leverage what you've just done in Google Cloud on AWS without copying, migrating or moving the data. And when you start moving data, that's the second challenge: clouds charge egress fees, at different rates, to take data out. Once your data is in a cloud it's all good, but once you have to move it out, you may end up with egress charges, which means that, unless you want to spend all that money moving it to a different cloud, the data is pretty much locked in. And in the end you have multiple volumes to keep in sync and different infrastructures to run, which makes things more complex than using a single cloud. Multi-cloud does have benefits, though: you can leverage the ideal resources in each cloud for your workloads, your data and your compute, because every cloud offers a slightly different set of options that may be the best choice for your environment.

But there is a better way to build a true multi-cloud solution, and we'll touch on it in a second. What if you had one copy of the data set and could access it simultaneously from all clouds, AWS, Azure, Google and Oracle, at the same time, through one IP address and one single volume? You don't have to worry about keeping data in sync or maintaining multiple copies, and you can update a data set from AWS and it will immediately show up in Azure as the updated data set. That makes things a lot simpler from a management perspective, and it has a cost advantage as well. In addition, what if there were a mechanism to take some of the on-premises data sets you're working with and want to move into the cloud, and replicate them straight onto that one copy of the data? That's where Faction can help. We have patented multi-cloud routing that enables access, within a single IP space, to all your data from the different clouds at the same time, so you don't have to worry about all those challenges. It's a very neat solution if you really want to go multi-cloud and leverage, say, AWS at the same time as Google Cloud for your data sets.

So what does it do? It prevents data silos: rather than duplicating the data and storing it in each cloud, you have one copy of the data accessible from all the clouds at the same time. It also helps you lower or eliminate egress fees: first of all, we have zero egress fees with Azure and Oracle Cloud, so you can move data out of those clouds for free; and second, the data is not necessarily stored within the other cloud, so you don't have to take it out, you just read it, and the only thing you pay for is writes, which is a much smaller amount than taking the whole data set out of AWS, for example. And we avoid data lock-in, because your data is accessible from each cloud at the same time. For example, if your strategy was to go with AWS but there's now a contract with Azure and you want to move your workloads from AWS to Azure, you just mount that volume in Azure, access all the data you had in AWS, and you're ready to go without any migration of your data sets. That makes changing vendors, whether for strategy, pricing or business reasons, very, very simple.
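To make the egress point concrete, here is a back-of-the-envelope cost sketch in Python. The dataset size, per-GB rate and sync frequency are invented placeholders, not any provider's actual pricing, which is tiered and varies by region and destination.

```python
def egress_cost(dataset_gb, rate_per_gb, transfers):
    """Total egress cost of pulling a full dataset out of a cloud `transfers` times."""
    return dataset_gb * rate_per_gb * transfers

# Illustrative numbers only; treat every figure below as an assumption.
dataset_gb = 10_000      # a 10 TB working set
rate_per_gb = 0.09       # assumed flat internet-egress rate, USD per GB
monthly_syncs = 4        # full re-copies to a second cloud each month

print(f"monthly copy-out cost: ${egress_cost(dataset_gb, rate_per_gb, monthly_syncs):,.2f}")
print(f"with zero-egress reads: ${egress_cost(dataset_gb, 0.0, monthly_syncs):,.2f}")
```

Even with these placeholder numbers, the arithmetic shows why repeatedly copying a multi-terabyte data set between clouds adds up quickly, and why a single shared copy changes the economics.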
Then you want to manage it all from one place and make sure your connections to the different clouds have the correct amount of bandwidth, so we have a portal where you can go in, see all your data sets, and dynamically move bandwidth to the cloud where you need higher speeds; you're not moving the data, you're moving your connections from one cloud to another, or adding a new cloud where you want to make your data accessible. On top of that, Faction also offers add-ons for your data sets. We can add data protection to make sure your data is backed up, and you can use that for your on-premises data, for your cloud data, or for both together: you can back up cloud workloads that are still in the cloud, or use it to enhance your on-premises data center when you want certain workloads in a different location for security and data protection reasons. In addition, we offer cyber recovery solutions for the different environments, so you can add a cyber vault environment to enhance your data protection. And if you have data sets that need to be retained long-term for legal or other reasons, you can tier that data off to an object store, either hosted in the Faction cloud or in any public cloud object store.

So basically, we let you build a better solution where you can leverage every advantage each individual public cloud gives you, and you don't have to decide where to place your data. Take machine learning, for example: you can leverage multiple clouds at the same time for your machine learning processes and use the data across
multiple clouds. We also offer different levels of storage that you can make available in the cloud, from archive all the way to flash-based turbo tiers, where you get very high performance for those sensitive workloads. If you want to take a deeper look at what we offer, visit our website at factioninc.com; we're also available, for example, in the Azure Marketplace.

The next thing coming up is a little polling question, so let's take a look at that. Without doing any research, what is data gravity? Is it a negative effect that makes it hard to move large data sets, a positive effect that makes it easy to move them, a metaphor meaning data and applications are attracted to each other, or a metaphor meaning data and applications repel each other?

All right, the answers are trickling in, Scott.

Yeah, I see that. They're still coming in, but we can talk about it as it stands. The top selection, at around 45 percent, is the negative effect that makes it hard to move large data sets. Behind that is the third option, the metaphor that data and applications are attracted to each other, and the other two are roughly evenly split. How does that line up with what you expected?

That lines up pretty much with what I expected. I'll give the answer in a few minutes, once all the polls are in.

I think everybody who wants to hazard a guess has probably done it. Is it number one or number three, or are most people off base?

It is actually number one, Scott. It is an effect that makes it hard to move large data sets. And what we're saying is that if you defy data gravity, then you
can move your data from one place to another, and that makes it very easy. Data gravity is basically this: you take a stone, it falls to the earth, gravity keeps it there, and if the stone is big, it's really hard to move. So it is number one.

All right, super. Are you ready to move on to questions?

Absolutely.

Great. While we do questions, I'll put up the poll for any additional information people would like from Faction: data sheets, technical white papers and so on; you can see all the selections there. So this was really interesting, and this is kind of a basic question, but where is the data? With the Faction cloud, are you storing it in one of the public clouds, or is it in your own?

It's in the Faction cloud. We have ten locations across the world, very adjacent to the cloud providers themselves, so we can offer very fast, low-latency connections to those clouds, but the data is actually in the Faction cloud.

Super, okay. And what's a good use case for a true multi-cloud solution? When does it make the most sense to consider multi-cloud versus committing to one cloud or another?

Let's take one example: machine learning. You can leverage the GPU-accelerated compute in one cloud to train your machine learning models, then take the results and use them for modeling in another cloud, without copying the data. So for one specific application you can leverage the benefits each public cloud offers. Sometimes it's also a cost consideration: one cloud may offer better GPU nodes for a lower price, so you leverage that, but the modeling is probably better in a different cloud, maybe because the
application that uses the model sits in that cloud as well. So you can leverage all of them without moving the data.

Gotcha, that's really interesting: different use cases for the same data with completely different parameters. This question came in: how can I benefit from the zero egress fees for Azure?

I'll give you one example; there are multiple benefits in there. Let's say you have data in Azure: you have your compute and your Azure storage, and you want to make sure the data is backed up outside of Azure, just in case something happens. What you can do is spin up, for example, a DDVE (Data Domain Virtual Edition) within Azure; you can get that from the Azure Marketplace. You can back up all the different storage platforms and file systems you have in Azure onto that DDVE, and then replicate the DDVE to Faction. The replication from the Azure DDVE to the Faction cloud is free, because there are no egress fees. The advantage, in addition, is that first of all your data is backed up at a second location, and second, if you need to access that data on-premises or in AWS, you can mount it in a different cloud or on-prem and restore the backup, or just read the backup and access those files. That's one example where we can help with the zero egress fees: you have data in the cloud and want to keep it there, but you can back it up for no egress charge onto the Faction cloud and then use it wherever you want.

Okay. We had a question from Guy, who's coming in on that data gravity issue: how does Faction's multi-cloud approach overcome those data gravity concerns?

Your data is basically not tied to a single
cloud, so you don't have to physically move the data to use it in Google, Oracle, Azure or AWS. So the data gravity concern you may have, that your data is stuck in one place, we can help with that. The other thing is, once you have your data with us, you can also put it back on-premises: if you say, hey, we tried Azure and it's not working, but the data is stored in the Faction cloud, we have very easy ways to take that data and move it back to your on-prem environment.

Okay, great. A question from Bradford: does Faction provide the analysis to determine which public cloud provides the best solution for a given workload?

We can help you with an analysis, depending on what your requirements are. There are a lot of different aspects to it, but we can help you determine which cloud may be best for your workload if you want to start moving into the cloud, and then we can help you set that up and make sure you choose the right cloud and the right Cloud Control Volume for it.

Okay, great. And how do customers typically get started with your products; can the data move easily?

Exactly. Typically you have your data on-premises; that's what you work with, and you want to get started in the cloud. We'll figure out which cloud will be best, but if you don't want to lock your data inside that cloud, you want to defy data gravity and keep the data available in case something changes and you need a different cloud. We have tools and options to move the data from your on-premises environment into the Faction cloud, and from there we just mount it in the cloud we selected. We've been moving a lot of data from on-premises into the Faction cloud to make sure customers can get started
with their data sets, and they can start very small: you can try it out with a few terabytes, say okay, I'm just going to start here and check it out, and then we allow you to grow the environment to multiple petabytes if you need to.

Fantastic. That was really interesting, Alex; thank you for the presentation and for bringing us up to speed on Faction. We really appreciate your time today.

Thank you.

All right, and with that we're going to go to our last prize drawing of the day. Thank you, everyone, for being here today, for participating in this event, and for all your questions throughout the day. I do want to announce the last two prize drawings. The first one is the $500 Amazon gift card, and it goes to Howard Lee of Texas; congratulations, Howard. And our final grand prize of the day, another iPhone 13 in the color of your choice, goes to Subin Moyan of Maryland; Subin, we will get in touch about how to claim your prize.

With that, I again want to thank everyone for attending today. Just a few other things: don't forget to subscribe to our 10 on Tech podcast; there's a lot of great information available through it. I also want to mention that if you liked what you saw today and want to sponsor one of these events, email us at connect@actualtechmedia.com and we'll be glad to let you know how to do that. Also, tomorrow we have another huge event coming up: our Developing Your 2022 Security Plan. Today we covered your 2022 cloud plan; tomorrow is the 2022 security plan, and if you've got the time, we'd love to have you on that EcoCast. It starts tomorrow at 11 a.m. Eastern, 8 a.m. Pacific. And that wraps us up for the day, so thanks again for participating, and have a fantastic day.
Info
Channel: ActualTech Media
Views: 52
Id: Cdzpry5JxAI
Length: 367min 41sec (22061 seconds)
Published: Fri Dec 03 2021