Pure//Launch: Your Power, Your Cloud with Pure Storage

Captions
So hello, I'm your host Dan Kogan, and we're very excited you could join us today for Pure//Launch. Here at Pure Storage we're focused on delivering storage and data management products that are powerful yet easy to use. We refuse to compromise, so you don't have to either. We're giving you the same experience as public cloud, instantly delivering what users need, with the infinite scalability that drives business growth, wherever your data lives. Combining the agility and consumption model of the cloud with top-tier performance for the most demanding applications is the next step in truly enabling you to capitalize on your valuable data and unlock your team's innovation potential. You may have noticed that in the last several years, applications have become, in a word, gnarly. Databases have gotten bigger, data volumes for enterprise applications have grown exponentially, and it is becoming more common for an application stack to be deployed on more than one tier of storage so it can meet the performance, capacity, and density requirements these gnarly applications require. Today we're excited to introduce you to the newest member of the FlashArray family, built to address the most demanding enterprise needs. To get us started I'd like to introduce Shawn Hansen, Vice President and General Manager of our FlashArray business unit. Shawn will then be joined in a few minutes by Eric Burgener, Vice President of Research at IDC, to provide his perspectives on what this announcement means for you. After their discussion we will hear directly from our Pure Storage engineers, with technical presentations and demos of these new products. Throughout the event we encourage you to enter your questions in the chat box, as we will conclude with a live Q&A panel. Now let me turn things over to Shawn Hansen. Thank you, Dan, and welcome everyone to Pure//Launch. I'm Shawn Hansen, General Manager of FlashArray here at Pure Storage. Today businesses are confronted by two overwhelming realities. The first is that
winning companies operate with more speed and agility, and innovate faster to create the best possible customer experience. Think of how much faster you can get somewhere and how much faster you can shop: Uber and Amazon revolutionized their industries because they innovated at a breakneck pace to deliver amazing experiences for their customers. And then comes the second reality: the key to rapid innovation is empowering your developers to do what they do best, build amazing customer experiences. To empower our builders there is a two-part innovation equation. The first part is to eliminate needlessly complex and manual work. The second part is to give people access to the tools and resources when and where they need them. Many are trying to solve this equation by turning to the cloud. They like the cloud operating model. They like buying what they use, not buying what they think they might use in the future. And they like instant, on-demand access to resources like storage and compute. But what we hear from our customers is that most builders just can't throw everything away. They can't leave behind their existing customers and applications. They can't sacrifice performance, price, security, or compliance. Applications will need to operate in a hybrid world for the foreseeable future. There has to be a better way. Imagine a world where you could dramatically reduce the complexity of the on-prem world, plus give developers the tools they need much faster, on demand. Here's the question we're going to answer today: what if we could bring the cloud operating model to your existing infrastructure? What if we could deliver unlimited scale with on-prem storage, without the complexity? Today I'm going to show you what Pure has done to make this possible. Just a few weeks ago we announced a major innovation: Pure Fusion brings a cloud operating model anywhere by giving developers on-demand access to the storage resources they need. It also automates complex and manual tasks like workload placement, workload
mobility, and fleet rebalancing. Today we've taken one more important step in our journey by officially opening our early access program to Fusion customers. But I'm even more excited to tell you about something else: today we're announcing the next generation of our FlashArray family, FlashArray XL. FlashArray XL is a major leap forward. It delivers new levels of performance, density, and efficiency for your most demanding applications and consolidation needs. XL has more power, higher density, faster performance, and incredible capacity. Our beta customers have experienced almost eighty percent more IOPS. Beyond that, we're able to pack in more usable capacity in ten percent of the footprint of our competitors. We're able to do this through our superior data reduction, along with new innovations in hardware design. So what does that mean at a practical level? It means more reliability and lower-latency, higher-performance applications. It means reducing data center rack space and operating costs. And it means you can manage more workloads on a single array, all while being as easy to deploy and simple to manage as any other Pure product. Through FlashArray XL and Fusion, Pure can now offer unmatched scale-up and scale-out together. As our new flagship, FlashArray XL provides the speed and scale that highly performant applications require, married with the cloud agility and limitless scale of Fusion. In short, it means giving you a competitive advantage to innovate faster, removing the bottlenecks and high cost of traditional storage. And most disruptively, it means thinking of storage as a self-serve, on-demand resource, structured as storage as code. Imagine what you could do if you could simply scale to the infinite needs of your organization and its customers with the click of a button, and deploy new innovations without manual work. Hardware managed like software. Data complexity made easy. That's Pure performance. That's Pure simplicity. Now I'd like to invite Eric Burgener from IDC to join me on the
stage. Eric! Hi Eric, thank you for joining us today. Happy to be here, thanks for having me, Shawn. Wonderful. In a recent article you wrote that the modern data experience is not a product or even a platform; it is a frame of mind, driven by the agility needed by today's companies. Today I'd like to talk about that with you. Let's discuss how you feel companies can get that agility as they decide between the public or the hybrid cloud. IT services are becoming increasingly fragmented as we see a shift to managed services or to shadow IT. What are the major catalysts driving the shift? What's really driving the whole development of shadow IT is the need for agility on the part of the end user side: developers, data scientists. When they build their environments, they really need to be able to expand those environments easily, provision new storage, occasionally recover their files, and they need to be able to do all this on demand and very rapidly. Traditionally, IT organizations have not been able to provide that level of agility; in fact, for many of those actions it's taken a long time, in some cases hours or days. The developers, as an example, are able to go to the cloud and do all of these things on demand, very rapidly, and that's why cloud services have been so readily adopted by these constituencies. What are some of the risks that it creates for traditional IT? If IT isn't managing the data and protecting the data, then you can run into issues around governance and compliance, around security, and also around data protection. The cloud environments that most people are getting access to don't necessarily think about those, and even if they do provide some measure of data protection, what they provide may not meet the different requirements that enterprises have. So the issue is a mismatch: if IT isn't involved, then they can't make sure that on the back end all the right things are being done as that data is made available for
these constituencies to use. So that's really the risk, and one of the primary reasons why IT really needs to step up and provide that kind of agility, so that at the same time they're meeting the requirements of their developers and their data scientists, they're also ensuring that they'll stay in compliance with governance regulation, data protection requirements, that kind of thing. This is a major shift. People talk about the rise of DevOps; DevOps has become the primary mechanism to provision environments and deploy applications. With the trend towards composable applications and infrastructure as code, how do you see vendors responding? Vendors clearly are going to need to be able to provide the same kind of capabilities that developers are getting from the cloud. And there's another aspect of this: it's not just about the agility, it has to do with what they're getting used to in terms of the tools that they're using when they do go to the public cloud. Many of the cloud-native applications are built around containers, and those environments are being managed through Kubernetes for orchestration. So if IT is going to be able to provide those kinds of environments on-prem, they need to offer the same set of tools, the same kind of capabilities. I mean, the definition of a cloud-native app is really one that is software-defined, runs in a container, and is orchestrated by Kubernetes. People talk about the cloud providing solutions for all these bottlenecks in traditional IT. Is going all in on the public cloud really the answer? Actually, it's not, and it's pretty clear even to the hyperscalers that it's not. You know, several years ago many of the public cloud providers were trying to convince enterprises that they should go cloud-first and ultimately move all of their workloads to the cloud, but now even the cloud providers see that that is not going to work effectively. There are three deployment models: there's traditional IT, there's private cloud, and
there's public cloud. We asked enterprises what criteria they use when they're trying to decide between the different deployment models, both for their legacy workloads and for new applications that they might deploy, and security is the number one issue that they think of, followed by performance, ease of management, availability, cost, and then governance and compliance. Now it's interesting to juxtapose those criteria with some other survey results, where we asked what causes customers to repatriate workloads from the public cloud back into an on-prem location. You know, 84 percent of enterprises have repatriated at least one workload. That might be because they found out after they deployed it that the public cloud was really not the right environment, for one of a number of reasons, or maybe things changed with that workload as it evolved over time, and now the requirements are different and there's a better fit on-prem. The five reasons that enterprises give for repatriating workloads from the public cloud: performance, availability, security, regulatory and governance, and cost. The cost aspect is interesting, because that has mostly to do with the egress charges. As the access patterns to the data change, that can affect cost, and in many cases it was unpredictable costs due to frequency of access that caused enterprises to want to repatriate workloads from the public cloud. Interesting. Let's talk a little bit about the applications people are moving to the public cloud. First, what are some of the major reasons why large enterprises are moving non-business-critical applications to the public cloud? There are clearly things that the public cloud excels in. First off, enterprises don't have to manage that infrastructure, they just deal with the service; that offloads them and allows IT organizations to focus on what they might consider to be more strategic aspects. There are also nice aspects to the technology refresh model in the public cloud.
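The egress-cost dynamic Eric describes can be sketched as a toy arithmetic model: stored capacity is cheap per gigabyte, but per-gigabyte egress charges make total cost swing with access frequency. All rates below are illustrative assumptions, not real provider pricing.

```python
# Toy model of cloud storage cost: a low $/GB-month storage rate plus a
# per-GB egress charge. The rates are hypothetical round numbers chosen
# to illustrate how access frequency, not capacity, dominates the bill.
def monthly_cost(capacity_gb: float, egress_gb: float,
                 storage_rate: float = 0.004,  # $/GB-month, assumed archive tier
                 egress_rate: float = 0.09) -> float:  # $/GB egressed, assumed
    return capacity_gb * storage_rate + egress_gb * egress_rate

# Same 100 TB stored; only the access pattern differs.
cold = monthly_cost(100_000, egress_gb=100)     # rarely read archive
hot = monthly_cost(100_000, egress_gb=50_000)   # access pattern changed

print(f"cold: ${cold:,.0f}/month, hot: ${hot:,.0f}/month")
# cold is $409/month; hot is $4,900/month for identical stored capacity.
```

Under these assumed rates, a workload whose reads grow from occasional to heavy sees its monthly bill rise roughly twelvefold with no change in capacity, which is exactly the kind of unpredictability that drives repatriation.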
It's completely non-disruptive: enterprises don't have to think at all about when they might have to upgrade, when software upgrades might have to be done, or, when new hardware comes out, how they integrate that into their on-prem environment; the cloud providers handle all of that. The cloud also provides unlimited scalability, so that's another key case, and why a lot of colder storage workloads have been very effectively moved to the public cloud. When you're dealing with colder storage workloads, you tend to realize there's a lot of capacity that you'll need in this environment, and the dollar-per-gig cost of storage in the cloud can be quite low, particularly if it's a long-term retention or archive space where you won't be accessing that data very much; then the cloud can be very cost-effective for those kinds of workloads. The workloads that seem to do better on-prem generally have lower latency requirements, higher access requirements, and also require some of the other things we talked about earlier in terms of governance, compliance, security, things of that nature. How does the hybrid cloud fit into the conversation? There's a lot of momentum behind the public cloud; how does hybrid cloud fit in? Well, you know, IDC's definition of hybrid cloud basically is that in your infrastructure, some is on-prem in the traditional IT model, some might be in a private cloud on-prem, and some is in the public cloud, and that's actually the way infrastructure will be built going forward. There are certain workloads that do better in each of those three models, and so IT organizations need to be able to choose from each of those three models as they decide on the best location for any given workload. Organizations want to act like a hyperscaler. They might think about managing it through an intermediary like an Equinix, but they also want to have ubiquitous services like storage across all their platforms: on-prem, edge,
and hyperscaler. How does this drive new trends as organizations start to take on that role of the hyperscaler? I agree with you that in a sense they are taking on the role of the hyperscaler, for certain parts of that cloud experience, but not necessarily all of them, because developers, data scientists, and application managers that have been dealing with the public cloud have a different expectation for things like how fast they can provision new storage, how they pay for it, availability aspects; you know, all of those things set the stage, and they want those same kinds of capabilities. If IT is attempting to get them to bring those workloads back on-prem, they want to see those same kinds of capabilities. So the IT organizations need to be able to replicate, to a certain extent, that cloud experience with on-prem infrastructure. That's been a major focus, I think, for many of the vendors, and clearly they'll need to continue to pursue this going forward, because, you know, developers as an example love the simplicity and the ease of use of that public cloud environment, and it will be tough to get them to come back if you can't basically replicate that part of the cloud experience for them. Our announcements these last two weeks: we've announced Pure Fusion, which is about bringing the cloud operating model on premises, and XL, which is about larger building blocks and being able to provide that limitless scale you're talking about in the cloud. How do you see these kinds of technologies really making an impact in the industry? This idea of a unified control plane that allows you to gain visibility across your workloads regardless of where they're deployed, that's critical, and that's my understanding of one of the things that Fusion will do for you. So, this single pane of glass that lets you see everything: one of the issues when people first started to move to the cloud, and in fact with shadow IT it was much more of an issue,
because IT had zero visibility. But even where IT had decided to move to the public cloud and keep some workloads on-prem, there were different management planes for those, and so it was difficult to get an understanding when you had to look at two or three or four different panes of glass. And, you know, now most enterprises don't want to go to just one public cloud; they'll have some on-prem, but they might also have some real estate in Amazon, in Google, in Microsoft, or other service providers. So this idea of a unified management plane that can really look across all of those and provide that single pane of glass, that's going to be critical to managing infrastructure very efficiently, regardless of where it resides. How do you think storage administrators will react to the idea of end users being able to provision their own storage? In the cloud you can, with a single click, get access to what you want in a pretty easy way; now, bringing that to the on-prem storage world is a very different way of viewing things. How do you think people will react to that? There are two very different reactions. Obviously, on the end user side, the developers want that, right? And that's what IT has not been able to provide before. Classically, if you needed more storage you put in a request; it could take weeks before you even heard back from IT about whether or not they could do it, and in many cases they couldn't do it until they bought additional infrastructure, which would then add more time to it. In the cloud, they just ask for it and they get it immediately, right? So that's one of the parts of the cloud experience that needs to be replicated. But one of the other concerns that IT practitioners have is that when storage gets provisioned, there are certain characteristics of that storage: what's the performance requirement for it, what's the availability of it, how often do we back it up, you know, what are the RTOs and the RPOs that are associated with that, do
we have regulatory requirements that state that we must know where that data physically resides at all times (that's a great reason why people can't use the public cloud)? So the way to address this, which I think has been done by the vendors that are really taking the right approach here, is to build some sort of a self-service catalog that allows end users to go in and select the kind of storage they need, but as more of a business-level specification, right? Do I want, you know, gold, platinum, or silver in terms of performance and availability? They think about it in those terms; they don't have to think about, well, let's see, I need this RAID level for that, and I've got to replicate those to these two other sites because there's a DR requirement. IT can build all of that into the self-service catalog, so when the end user requests the storage, they get the capacity they need, and it fits within whatever parameters have been defined, but all of the detail underneath about how that gets done is hidden from them. And that really is going to be the best way to solve this problem, both for end users and for the IT practitioners that are tasked with worrying about things like security, availability, recovery, you know, regulations. I think you really touched on the essence of storage as code. So I think one aspect of the cloud operating model is the cloud-like agility, being able to provision quickly. Yeah. The other side is limitless scale: being able to tap into scale, not only by eliminating manual work but also by simply being able to add capacity relatively quickly. So Fusion and XL represent this new model of scaling: you've got the large building block with XL, and then you have Fusion, which is kind of a single namespace, something that allows you to stripe across multiple devices and stop seeing just one box, giving you a larger view. How do you think that will be accepted in the industry? Well, I think that's exactly
what certainly the IT organization wants, because that's the way they see that they're going to meet the requirements of the developers and the other end users. The end users are probably not thinking about it from that point of view; all they want is, here's what my customer experience looks like when I deal with the cloud, you know, I want you to give that to me, but you figure out how you make that happen. But those items you just mentioned, that's the way IT will make that happen. So I think IT is going to respond very positively to this: they get what they want, they can maintain compliance, they meet availability requirements, and the end users get what they want, fast provisioning and those kinds of aspects. So that, I think, is going to be the way going forward to meet this requirement. Well, I think you touched on the essence of the cloud operating model. Thank you very much; I really appreciate your time today. You bet, yeah, good to chat with you, thank you so much. Thanks, Shawn and Eric. With the exciting announcement of FlashArray XL, combined with the flexibility and agility that the cloud provides, even from your own data center, Pure is delivering the kinds of performance, scale, and storage-as-code technologies that allow you to reach your full innovation potential. I'm sure you can appreciate how powerful this new platform sounds, but enough talk, let's see it in action. Let me hand the baton over to Mayank Bhatnagar and Janet Lafleur from our FlashArray team to take us through what this absolute beast of a data center platform will deliver for you. We'll share a demo of FlashArray XL in action and what to expect when you unbox your shiny new powerhouse. Demands on storage are only increasing: data volumes are growing more quickly, while ransomware threats are increasing and evolving. What's more, storage limitations often get in the way of deploying new applications. To meet these challenges, we're proud to introduce the FlashArray XL. This newest member of the FlashArray family
delivers top-tier power to take your most demanding apps to the next level. With FlashArray XL you get next-level performance at scale in a high-capacity, dense, 5U storage platform that's built for resiliency. All the power, without the complexity that comes with legacy enterprise storage. The performance to support your most demanding applications, and the scale to consolidate workloads on fewer arrays and still have room to grow. As a FlashArray, it includes complete enterprise data services built in at no additional charge. That includes always-on data protection that's savvy to ransomware threats, and a cloud-like model for deploying new apps quickly and easily. Today we'll give you a taste of how we've engineered the FlashArray XL with next-gen CPUs and flash technologies that future-proof your investment. And as always, our non-disruptive upgrades make it easy to update and expand your storage as business needs change. FlashArray XL extends our best-selling FlashArray storage family for more demanding workloads, with higher performance, greater capacity, and even stronger resiliency. Where FlashArray//C is optimized for capacity and FlashArray//X is optimized for performance, FlashArray XL is optimized for performance at scale. That means latency as low as 150 microseconds, throughput as high as 36 gigabytes per second, and a 70 percent performance boost over our X90. We're offering two models. The larger FlashArray XL170 is for your most intense applications and the ultimate in workload consolidation; think huge SAP databases and tens of thousands of VM workloads. The FlashArray XL130 offers performance and capacity that's a significant step up from our X90. What's more, the XL130 can be non-disruptively upgraded to an XL170 if and when your business needs grow. And with the XL chassis supporting up to 68 percent more storage capacity than the X90, you'll have plenty of breathing room to grow. Mayank, let's take a deeper look at how the FlashArray XL provides the highest tier of performance for
the FlashArray family. Thanks, Janet. FlashArray XL is built on a brand-new five-rack-unit chassis design that enables our customers on a variety of fronts. First, it's designed to deliver the highest levels of performance, not only for this first generation that we are launching but, in true Evergreen fashion, to be able to deliver multiple generations of performance with zero forklift upgrades. Second, it not only pushes the performance spectrum but also optimizes the density per rack unit, delivering 20 percent higher efficiency than FlashArray//X. Third, it takes resiliency to the next level, with the RoCE protocol between the dual controllers, four power supplies in an N+2 configuration, and DirectFlash software technology, to name just a few. Lastly, it doubles the number of ports available for connecting to your application servers, compared to a FlashArray//X. Industry-leading dedupe and compression algorithms and DirectFlash software technology, combined with the enhanced densification on FlashArray XL, enable massive space savings, and power and cooling savings, like never before. Here's a striking example: FlashArray XL delivers an over-five-petabyte enterprise-scale environment in just 11 rack units, whereas some of the alternatives out there cannot even accomplish that in 84 rack units; that's almost 8x less space. One of the key technologies that enables both the performance and the rack-space densification is our patent-pending DirectFlash Modules with distributed NVRAM. What you'll notice in FlashArray XL is that we no longer have physical real estate on the array dedicated to NVRAMs; we've pushed that functionality over to the DirectFlash Modules themselves, in a distributed manner, enabling higher levels of performance and scale. All that space saved enables us to have more drives in the enclosure, improving our per-rack-unit density by 20 percent. Now, on the topic of performance: in 2019 we introduced DirectMemory Modules, leveraging Intel storage-class memory technology on
FlashArray to accelerate performance and deliver latencies as low as 150 microseconds. With Purity 6.2 you will now also have the ability to prioritize volumes that leverage this cache, providing the lowest levels of latency to the applications that need it. We've also got some really exciting news from one of our beta customers trying the FlashArray XL platform. They ran extensive performance tests, and the results were just outstanding. Using a mix of read-to-write ratios, they measured on average a 70 percent IOPS increase with the FlashArray XL170 over the FlashArray X90, and with a 50/50 read/write mix they saw increases in peak IOPS of nearly 80 percent. Thanks, Mayank. Now that you've had an overview of the FlashArray XL architecture, let's talk about the applications and use cases that will benefit most from it. FlashArray XL is ideal for mission-critical workloads that are highly and unpredictably demanding. The increased throughput and greater transactions per second keep applications responsive even under a heavy load. That means reduced time to insight for compute-intensive apps, and when demand spikes, users won't have to click and wait. If you have applications that simply can't go down, the XL's increased connectivity and redundancy give you the resiliency you need to stay up and running. FlashArray XL is also ideal for enterprise workloads that need both high capacity and high performance, such as analytical workloads with data sets so large they often span multiple arrays. Storing such a large data set on a single, beefier FlashArray XL makes replication for business continuity or disaster recovery more straightforward and more consistent. And even if you don't have any big beast applications, with a FlashArray XL you can consolidate more workloads per array and support more users per workload without impacting other applications. Are space, power, and cooling costs an issue for your business? FlashArray XL can significantly reduce your data center footprint and help you
meet corporate green initiatives. And with the FlashArray XL you have the confidence of knowing that you can non-disruptively expand or upgrade if and when your business needs change. Now let's unbox the FlashArray XL and get a closer look. Hey, Larry! I'm super excited to unveil the fastest and the most powerful FlashArray we've built so far. You've been around for a few generations; you've seen a few of these. Unreal. We are calling this FlashArray XL. Are you ready for this? I'm extra-large excited about this one. Let's see what's in the box. So we've got the accessory kit right in the middle and the DirectFlash Modules on the end. Cool, let's see what's in the accessory kit. Yeah, let's open it. Cool, I wasn't expecting a whole new bezel. Yeah, Larry, it's built for the new five-rack-unit chassis design, and you've got this cool texture here for better airflow. These are the brand-new DirectFlash Modules designed for FlashArray XL. Let me see one of those. 100 percent Pure flash. Hold on a second, did you guys put NVRAM in this? Oh yeah, you're going to geek out when you find out what's next. Excellent. Wow, did you guys get 40 modules in this one? Yeah, we've engineered some cool things to improve the rack-space density pretty heavily on FlashArray XL. You want to take this thing to the lab and check it out? For sure, let's go. Hold on, Larry, this thing's heavy. Wait till we get more than five and a half petabytes of data in it. All right, man, we've got this thing moved. Tell me about how we squeezed 40 flash modules into the new chassis. Well, first off, it's a five-rack-unit chassis, so that we can deliver that higher performance. But if you remember, in FlashArray//X we used to have dedicated space here on the top left for NVRAMs. Well, we've gotten a little smarter about it: we've taken that NVRAM capability and pushed it down into the DirectFlash Modules themselves, calling them DirectFlash Modules with distributed NVRAM. What about the other modules? Are
these the same kind of modules we had in the FlashArray//X? Yeah, absolutely. Beyond the DirectFlash Modules with that NVRAM capability, you're free to pop in other DirectFlash Modules, the NVMe-supporting ones, as well. And what about those modules we had with the Intel Optane technology that can reduce latency and improve performance? We absolutely support the DirectMemory Modules on FlashArray XL, to achieve the lowest latencies possible for your applications. Great, this thing's awesome. All right, Mayank, so we're looking at the back of the array now, and I can see we've got two controllers here, just like we had in the FlashArray//X. FlashArray XL is a dual-controller, active-active design, and we've taken resiliency to the next level: the controllers talk to each other over the RoCE protocol, and we have four power supplies in an N+2 configuration. What about the four power supplies, why did we go with four instead of two? First off, this is a five-rack-unit chassis design; we needed that space to deliver the extra performance with the high-end CPUs in the FlashArray XL family. But with that extra space we could also add a more efficient cooling methodology for FlashArray XL. So with all this cooling, are we going to solve global warming? Well, not global warming, Larry, but it is way more efficient than some of the alternatives out there; you're going to see 8x more space savings when you get into a petabyte-scale enterprise environment. Wow, that's great, and super cool. And it looks like we've got a lot of room in the FlashArray XL for host connectivity. We have nine PCIe slots in the FlashArray XL for host connectivity; that's almost 32 ports per controller in a Fibre Channel config, and you can use some of these PCIe slots for DirectFlash Shelf connectivity as well. Oh, expansion shelves! So there's room for expansion shelves here, and I've got 40 modules in the chassis; with two expansion shelves, how many modules does that get us to? Yeah, with that total you can get up to 96
direct flash modules and that's what takes you to a 5.7 petabyte effective environment wow that is massive scale you bet all right let's get this thing powered up and take it for a spin absolutely let's do it [Music] [Music] all right thanks larry and mayank great job and now that it's powered on let's take a look at what it takes to set up flasharray xl the setup process for flasharray xl is exactly the same as it is for all flasharrays i run pure setup new array after a short hardware validation and firmware update i'm asked to provide some management network details and confirm them the setup takes that information applies it to the flasharray and validates that everything's ready to go once complete i'm able to log into the flasharray and start my normal administrative tasks about five minutes end to end to complete this setup process all right so we've got the flasharray xl set up let's take a look at the flasharray xl performance and how it compares to the flasharray x90 i've configured a multi-vm application which will be run against each of the two flasharrays the workload is a 70 30 read write mix using 32k io and pushing enough io to get the x90 to its performance limit if we look at the performance dashboard for the flasharray x90 we see three charts displaying latency iops and bandwidth looking specifically at iops we can see that in aggregate we're doing nearly 260 000 io per second which works out to about 8 gigabytes per second all right so that was the x90 let's take a look at the same application running on the same server just connected to the flasharray xl okay same 70 30 32k io workload now running against the flasharray xl as before the workload is pushing the xl to near 100 percent of its peak capability looking at the xl performance analysis dashboard we can see a fairly dramatic difference we can now see total iops of approximately 490 000 with an aggregate throughput of 15 gigabytes per second wow what a difference flasharray xl
makes we were at approximately 260 000 io per second and now on the xl the same application same server nearly 500 000 iops i want to show you another cool thing about the flasharray xl and something that we're able to do with purity 6.2 direct memory cache is a high speed caching feature that uses intel optane storage class memory inside of direct memory modules to dramatically lower read latency starting in purity 6.2 we've added a feature that enables cache prioritization for specific volumes or groups of volumes in order for them to achieve the very lowest levels of read latency to demonstrate this feature i'm going to use flasharray volume 5 and an application that reads data from the 500 gigabyte volume looking at the flasharray dashboard we see three graphs showing latency io and bandwidth over the past five minutes i've highlighted the section that shows read latency in yellow currently about 430 microseconds if we want to make an adjustment to the prioritization of volume 5 we click on qos configure qos and then increase the priority to 10 so now my priority adjustment is 10 indicating the highest possible priority for volume 5
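stepping back to the performance comparison earlier in the demo, the quoted bandwidth figures follow directly from iops times block size. here's a quick python sanity check — the helper name is our own, and the iops values are the approximate numbers read off the demo dashboards:

```python
def bandwidth_gb_per_s(iops: int, block_bytes: int) -> float:
    """aggregate throughput in decimal gigabytes per second."""
    return iops * block_bytes / 1e9

BLOCK = 32 * 1024  # the demo's 32k io size

# x90: ~260,000 iops -> ~8.5 GB/s, matching "about 8 gigabytes per second"
x90 = bandwidth_gb_per_s(260_000, BLOCK)

# xl: ~490,000 iops -> ~16.1 GB/s, in the same ballpark as the quoted
# 15 GB/s (the mix of io sizes in the real workload accounts for the gap)
xl = bandwidth_gb_per_s(490_000, BLOCK)

print(round(x90, 1), round(xl, 1))
```

roughly a 1.9x gain in both iops and throughput for the same application on the same server, per the demo's numbers.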
if we go back to the dashboard we'll now see that the priority has led to a slight decrease in read latency so now we're at 380 microseconds since the data is being read back randomly it may take a little bit of time for the cache to become fully populated let's fast forward four minutes in time so where we were at about 430 microseconds we're now four minutes later at 240 microseconds a reduction of nearly 190 microseconds almost half the read latency compared to what it was prior dmm volume prioritization is a game changer for read latency sensitive applications all right so there you have it flasharray xl raises the bar on flasharray performance enables massive consolidation and performance at scale and delivers more of what customers love about flasharray thank you flasharray xl fills the top storage tier with a platform that has it all extreme density for workload consolidation and cloud-like scale along with the performance that your most demanding applications require we've discussed purefusion which gives you the ability to provision manage and consume enterprise storage with the on-demand access effortless scale and self-management of the cloud this new approach of storage as code reduces the time and complexity of managing infrastructure and eliminates the barriers to accessing storage resources that currently exist with legacy solutions we are excited to also announce that today purefusion is available in an early access program make sure you talk to your pure storage account team or partner if you're interested in learning more and now to provide some more detail on how purefusion and flasharray xl can provide you a true hyperscaler storage experience i'd like to welcome larry touche and vijay ganapathy from our fusion team all digital organizations are striving to be more agile as their business and applications are modernizing they want to make sure resource provisioning and management are still simple this is what we focused on while building purefusion
our next generation storage management platform that delivers self-service autonomous storage as code built for limitless scale our goal is to make the world of storage management look like this we want devops teams and application owners to work in a new model where they access the environment through code or a simple ui they should be able to simply consume a menu of predefined services to provision resources themselves the resulting storage resources are automatically provisioned without the need to interact with it open service requests or wait for manual operations workloads are automatically placed in the environment to ensure balanced utilization and peak optimization removing manual steps with storage as code can take provisioning from weeks and days down to minutes purefusion is a modern cloud-like alternative to traditional scale-out storage clustering when combined with the evergreen features in flasharray it allows you to scale your storage infrastructure up down or out with fusion we're changing the typical storage management paradigm to be more cloud-like we want storage admins to become storage providers providers collect storage systems into pools called availability zones it's these azs that replace the traditional notion of scale-out clusters and let you scale using different models of arrays different capacity footprints and even arrays with different media types as a storage provider you organize your users or consumers into multi-tenant management containers called tenants you can identify specific users to manage the provisioning of storage within those tenants and these users can delegate administration to other users for different applications or different tenant spaces storage teams no longer have to provision storage for individual user requests users provision their own storage resources by consuming storage classes and protection policies storage classes define storage characteristics like performance limits capacity limits and the hardware
type that's used when provisioning volumes protection policies define data protection behaviors like snapshot and replication slas retention slas and cyber recovery options like immutable snapshots so similar to the way things work in the public cloud providers can publish storage classes with different performance characteristics and users can choose the right storage class and protection policies for their needs making management simple and consumption easy in large environments when you're trying to make storage deployment as rapid as possible a couple of common and significant challenges will get in your way choosing the right array for a workload or having to figure out how to move workloads around to make room for new workloads fusion makes that a thing of the past in fusion consumers don't provision storage on specific arrays instead fusion is integrated with the ai-driven workload planner in the pure1 cloud and uses that to decide where to place workloads and because users are not managing storage on specific arrays providers can non-disruptively redistribute or move workloads between arrays in the availability zone we call this a rebalance operation and it's one of the most important features in purefusion allowing you to manage your azs similar to scale-out clusters where you can add more resources on the fly and redistribute workloads to use those resources so when workloads are moved around in the environment nothing changes because consumers provision from the az not the individual array the management point remains the same all the apis remain the same there's no need for consumers to adjust their automation workflows or their applications when workloads are moved around purefusion has an api first design philosophy we're delivering new storage as code apis for self-service provisioning and consumption that enable you to automate storage deployments through platforms such as ansible terraform and more and while we do offer an
easy-to-use cloud-based gui for some management operations fusion's real value is its automated provisioning workflows workflows that you can integrate into broader data center automation frameworks or custom-built in-house self-service portals fusion's architecture is designed to be simple and secure we designed purefusion as part of our pure1 cloud so there's no physical infrastructure required for you to deploy using the pure1 cloud you simply assign the arrays you want to manage with fusion to your availability zones configure your storage classes and protection policies and you're ready to start onboarding users when it comes to scaling up and massive performance requirements for the most demanding applications this is where fusion's ability to mix and match hardware types becomes important with the new flasharray xl you can define storage classes that provide peak performance and incredible capacity with purefusion it's simple to offer any custom storage class you need based on our entire range of flasharray and pure cloud block store platforms to put it simply purefusion gives you the capabilities you need to provision manage and consume enterprise storage with the simple on-demand self-service and effortless scale model of the cloud fusion is the next generation of storage management now let's jump into a demo with anthony ferrari where he'll show you some of the storage as code capabilities of this platform in today's demo i'd like to walk you through a few of the key steps involved in using fusion's powerful api to automate the consumption of storage from flasharrays specifically we'll be highlighting how the new flasharray xl systems can bring maximum performance and workload density into a fusion environment that is already up and running on top of a fleet of our systems fusion is built to be automated and typically this would be done using an sdk or infrastructure as code automation toolkit but for the purposes of today's demo i'm going to be
using a command line tool that forwards through to the fusion api so that i can show you how this workflow looks what we're going to be doing today is taking a look from both the provider side and the consumer side at what it would look like to use flasharray xl inside of fusion to consume storage as a service let's get started first let's take a look at how our fleet is configured to do that i'm going to list all the availability zones that we have configured now we can see that we have two azs set up in our environment zone one and zone two next i'm going to list the arrays that are configured in our first az looking at the output of that command we can see that we've got one flasharray x one flasharray c and one flasharray xl configured in this availability zone now that we've seen that we have flasharray xl already configured in our environment let's take a look at how fusion can expose this new xl to our customers the way we're going to do that is by creating a storage class which will provide the performance and storage density options only available using flasharray xl to our end users as you can see we're configuring this new storage class called db ultra to provide up to four gigabytes per second of bandwidth and 100 000 iops now that we've done this let's switch over and look at this from the perspective of a tenant of this system one of the end customers i'm going to switch over to use that profile and then we're going to start walking through the path of how a tenant would be able to use the storage that's surfaced by fusion and backed by flasharray xl i'll start out by creating a tenant space which is essentially an application within fusion so that this tenant can then provision their storage i'll also create a placement group so that we can have all of the volumes for this application land in one hardware space the placement group really is our affinity or anti-affinity concept in fusion and in this case the customer is going to want to have all of
their volumes in one place for crash consistent snapshot granularity and now i'm going to have the tenant list out what storage classes are available and we can clearly see that db ultra the class we just created is available for this tenant to use and it's going to tie itself back to flasharray xl under the hood based on that this tenant is now going to go in and create a volume to service their application and they're going to provision it as a one terabyte volume with storage class db ultra with the volume created we're now on to the last step which is giving access to hosts so the tenant is going to list the host access policies that are available and see that they're able to connect to their linux host once they've done this they can go back and update the volume with the host access policy that they want to use and they're off and running to use this volume with that host for their database application and that's all there is to it as a provider we were able to expose flasharray xl through a storage class out to our consumers and as a consumer i was able to go in and consume storage as a service backed by flasharray xl thanks for watching and hope you have a nice day as you've heard today flasharray xl and purefusion are simplifying performance at scale making it easier for your teams to innovate faster and gain a competitive edge we are excited to get these products into your hands to see what you can build
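the provider and consumer steps walked through in the demo above can be sketched as a tiny in-memory model of the fusion concepts (availability zones, storage classes, tenant spaces, placement groups, volumes, host access policies). this is purely illustrative — every class, field, and value name below is our own invention, not the real fusion sdk or api:

```python
from dataclasses import dataclass, field
from typing import Optional

# illustrative model only: names are hypothetical, not the real fusion sdk
@dataclass
class StorageClass:
    name: str
    bandwidth_limit_gbps: float  # e.g. 4 GB/s for the demo's "db ultra" class
    iops_limit: int

@dataclass
class AvailabilityZone:
    name: str
    arrays: list = field(default_factory=list)
    storage_classes: dict = field(default_factory=dict)

    def publish(self, sc: StorageClass) -> None:
        # provider side: expose a storage class to this az's consumers
        self.storage_classes[sc.name] = sc

@dataclass
class Volume:
    name: str
    size_tb: float
    storage_class: str
    placement_group: str              # keeps an app's volumes together
    host_access_policy: Optional[str] = None

@dataclass
class TenantSpace:
    # consumer side: roughly "an application within fusion"
    name: str
    az: AvailabilityZone
    volumes: list = field(default_factory=list)

    def create_volume(self, name, size_tb, storage_class, placement_group) -> Volume:
        # consumers pick a published storage class, never a specific array
        if storage_class not in self.az.storage_classes:
            raise ValueError(f"storage class {storage_class!r} not published in {self.az.name}")
        vol = Volume(name, size_tb, storage_class, placement_group)
        self.volumes.append(vol)
        return vol

# mirror the demo: provider publishes db ultra, tenant provisions a 1 TB volume
az = AvailabilityZone("zone-1", arrays=["flasharray-x", "flasharray-c", "flasharray-xl"])
az.publish(StorageClass("db-ultra", bandwidth_limit_gbps=4.0, iops_limit=100_000))
ts = TenantSpace("db-app", az)
vol = ts.create_volume("db-vol-1", size_tb=1.0, storage_class="db-ultra", placement_group="pg-1")
vol.host_access_policy = "linux-host"  # last step: grant host access
```

the key design point the sketch captures is that the volume references a storage class and a placement group, never an array — which is what lets fusion rebalance workloads between arrays in an az without the consumer's automation changing.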
Info
Channel: ActualTech Media
Views: 26
Id: iApI-Ias-bw
Length: 48min 57sec (2937 seconds)
Published: Thu Dec 09 2021