"Unleash the power of your data."
That is how majestically HPE promoted their major storage announcement on May the 4th, 2021. Calvin Zito even said: "Look, I've been a part of HPE Storage for 37 years. This really is the biggest thing that we've ever done." Whoa! There are actually not one but two big bangs: HPE Alletra, a brand new storage array family, and a unified cloud-based data management solution called Data Services Cloud Console. Let's take a closer look!
Hi! I'm Markus and I'm your enterprise tech
enthusiast. If you're new here, this is where we
talk about everything enterprise tech, from bare metal to cloud native. If you are into that sort of stuff and would like to support
the channel, please consider subscribing. Thanks!
Remember when, almost one year ago, I said this: HPE is transforming from a box mover
into a real cloud company, and they are doing it faster
than any of us anticipated. Well, things really are moving fast. This announcement is the latest
testament to how serious HPE is about turning into a cloud company.
The way I see it, the main issue with enterprise IT today
is that it is way too complicated. There are too many options to
choose from, and they look the same but really are not. And too often they don't even
work together, or they are difficult to manage, even within one vendor's portfolio. Let me give
you an example. These days companies have their precious data scattered around the main data
center, secondary data center, branch offices,
edge locations, and the public clouds. Management of all these resources is
fragmented, to say the least. You need at least one local management
user interface for the on-prem storage arrays. If you also have some other storage array around,
you'll need another management user interface. You might need a third one in remote locations,
say, for hyperconverged environments at the edge, and a fourth one to manage cloud
data storage. Let's not even get started on migrating the data.
Another challenge is the high up-front costs
that are associated with storage arrays. You'll pay for all the hardware and support,
typically three to five years' worth, at the time of purchase, and these investments are
usually big... VERY big.
Now, keeping that in mind, I'm super happy to see
that HPE is now taking a very fresh and modern, some might even say bold, approach to this. By introducing Data Services Cloud Console, HPE wants to abstract data management
wherever the data is located. Using just one cloud-based user interface
you can manage data on-prem, at the edge and in the cloud. From now on you'll pay not only for
hardware and support, but also an additional subscription fee for this
unified global data management. HPE says the total cost of ownership will, however,
remain pretty much the same; only the pricing model changes to support the modern cloud experience. So, I guess that means hardware and/or support
will cost significantly less up-front, right?
By the way, I find it pretty cool that the
Data Services Cloud Console code itself is mainly based on Aruba Central, an established
and solid platform for managing Aruba networks. Aruba Central has been available for years,
managing millions of devices. So, not only is it good to know HPE is not rolling
a beta version 0.3 out to the public, but it's also good to see that HPE and Aruba are actually working quite tightly
together, something that's far from a given in the industry with these corporate structures. I bet Tom Black, who moved from Aruba to lead
the HPE storage business unit, has a lot to do with all this.
Before you get too excited, though,
there's a catch, like with everything: not everything is available right away. First of all, Data Services Cloud Console is built on a microservices architecture, which means it is easy to add applications, or apps,
to support different features and resources. At launch the available features are limited,
but we will surely see more in the near future. Also, at launch this will only work with two storage arrays. Two NEW storage arrays! That's correct: HPE did not stop the storage
announcement at Data Services Cloud Console but added two brand new arrays to
the portfolio: HPE Alletra 6000 and HPE Alletra 9000. Let's dive in!
So, there's the HPE Alletra 6000, which is
100% designed and built from the ground up in-house, but still based on HPE Nimble Storage. The Alletra 6000 will be fully
manageable from the cloud, and it's going to be all-NVMe. Being Nimble-based, it
offers six nines of availability and always comes in a two-node configuration. However, performance
has been pumped up significantly: the marketing slides promise as much as
three times more performance.
Like its little brother the 6000, the Alletra 9000 is
also completely designed in-house with all-new internals, but it's heavily based on HPE Primera DNA. The Alletra 9000 is, of course, also managed from the
Data Services Cloud Console, and it is also all-NVMe. But instead of six nines of availability,
the Alletra 9000 guarantees a whopping 100% availability, just like HPE Primera. The Alletra 9000 scales up to four nodes at launch, but later this year there should be an update coming that allows it to scale up to eight nodes. Performance compared to Primera in the marketing slides
is simply: "Better". I guess there's not much to improve, then.
So, at launch you can use
Data Services Cloud Console only with the Alletra 6000 and Alletra 9000. However, according to HPE they will add support
for Nimble Storage and Primera "soon". I would not be surprised if SimpliVity and Cloud
Volumes were supported right after. And why not XP, and maybe MSA? Now that we are speculating:
from a technical point of view, nothing actually stops somebody, at some point, from developing an app that supports even third-party storage. Who knows?
All this leads to ease of use. So, maybe the most interesting thing to me is what HPE is calling intent-based provisioning, one of the future apps that will be coming to
Data Services Cloud Console. So, once you have Alletra arrays providing
the capacity, acting as the data engine, and Data Services Cloud Console abstracting all that, you can do some pretty neat things. For example, instead of logging into a storage array, figuring out the correct RAID level, creating a LUN, doing zoning, presenting the LUN, and so on, you would basically just say: I need 2 terabytes of capacity for a global SQL server. And the Data Services Cloud Console,
with the help of InfoSight AI, will find you the best location (edge, on-prem, cloud, whatever) and provision the most suitable capacity for you. Now how cool is that! THAT is what I call simplification.
Now we have Aruba Central, or... network central, and then we have the Data Services
Cloud Console, which is managing storage and data... I'm just wondering if we are going to see a
Compute Cloud Console... Central... anytime soon?
But is this really something brand new that
we haven't seen before? Well, no... and yes. There have been plenty of solutions
that try to solve this very dilemma, like IBM SAN Volume Controller and NetApp Data Fabric. Heck, even HP had a solution called
HP SAN Virtualization Services Platform, or HP SVSP, back in the good HP times, that tried to do the exact same thing. But there are a couple of fundamental differences.
First of all, the other solutions are all based on
some sort of physical appliance that sits in between the actual arrays and compute, or have special proprietary software that needs to be installed somewhere. Data Services Cloud Console, however, is 100% cloud native: no installations, just logging into DSCC... Am I supposed to shorten it like that, by the way? I don't know... You just log into DSCC,
very much like you log into a
public cloud portal, and off you go. Secondly, the underlying architecture is open and
based on microservices
that allow virtually any capability
to be brought in later on. In other words, it's not locked into any particular technology, proprietary hardware or protocol. That adds a lot of credibility to it as a long-term solution. So, to me these two things make it really
stand out from the other solutions that are, or have been, out there.
As I mentioned in the beginning, I already declared HPE a cloud company last year.
Granted, maybe just a tad prematurely, since HPE is far from being a cloud company in the sense
that AWS or Azure are cloud companies. But the ambitious and fast transformation
from being a box mover to becoming a cloud provider with HPE GreenLake, and these latest announcements,
make me even more convinced that HPE will be a true cloud company
sooner than we may realize.
Now, THAT was a lot of interesting stuff! Can't wait to see how this unfolds for HPE and the industry. Hope you enjoyed the video and if you did,
give me the likes and the subs! See you with the next one!