Fungible Virtual Product Launch Event

Video Statistics and Information

Captions
[Music] [Applause]

We may be living in a hyper-disaggregated world, but against all odds we found ways to compose ourselves in the pursuit of common and meaningful purposes. At Fungible we are united in one mission: to bring to you revolutionary technology that will allow you to imagine new possibilities for a future that is fungible.

Good morning everyone, I am Pradeep Sindhu, CEO and co-founder of Fungible. I'd like to thank everybody for coming to our first systems product launch. I would like to begin by reminding everybody why we exist as a company. Our vision is to revolutionize data centers. We aspire to dramatically improve the agility, the security, the performance, the reliability, and the economics of data centers. This is a big challenge. We call the full realization of our vision a Fungible Data Center. Today we will introduce the Fungible Storage Cluster, which takes a big step towards building a Fungible Data Center.

It's helpful to begin with an industry perspective. The first giant step in computing took place in 1947 with a computer called the EDVAC, which ushered in the era of programmable general-purpose computers; prior to this, computers were hardwired and very difficult to build to solve a particular application. The second step occurred when general-purpose computers went to hyperconverged scale-out computers, but this was at small scale. The third step was in the early 2000s, when cloud-native applications were built on top of massively scaled-out, general-purpose, x86-based microprocessor servers. Today we want to announce cloud-native, scale-out Fungible Data Centers.

What is a Fungible Data Center? It is a data center where compute, memory, and storage are disaggregated. Resources are organized into a small number of server types, with each server type responsible for a specific resource and powered by the Fungible DPU. Each server type is deployed in a scale-out manner, consisting of many instances, and all server instances are connected over a high-performance TrueFabric that allows these resources to be composed dynamically and to execute practically any workload efficiently as well as at high performance. A Fungible Data Center is inherently multi-tenant, because its resources can be dynamically partitioned into ultra-secure bare-metal virtualized data centers.

Fungible Data Centers address systemic inefficiencies in existing data centers stemming from the use of a compute-centric architecture, which doesn't work so well in a data-centric world. Specifically, Fungible Data Centers will improve the agility of deployment; the security of infrastructure as well as of data; the performance of data-centric workloads, by anywhere between 10 and 20 times; the reliability of network and storage, dramatically; and finally the economics of data centers, by somewhere between a factor of three and a factor of twelve.

So how do we achieve all these properties? The key is a hyper-disaggregated architecture enabled by the Fungible DPU. The Fungible DPU solves two fundamental problems in today's data centers: the first is to enable efficient execution of data-centric computations that are today performed inefficiently by general-purpose CPUs; the second is to enable highly efficient interactions between hyper-disaggregated nodes.
Today we're taking an important step towards realizing our goal of building Fungible Data Centers: we are announcing our first Fungible DPU-powered solution, the Fungible Storage Cluster. This solution is a cloud-native, scale-out, secure, high-performance, over-fabric storage platform. The Fungible Storage Cluster supports inline, high-performance data services enabled by the Fungible DPU. These services include data durability, data reduction, and data security, and they can be enabled concurrently without any performance degradation; over time we will be adding additional services. The Fungible Storage Cluster also supports pooling at massive scale, providing unparalleled economics through high utilization of storage. Finally, the Fungible Storage Cluster scales linearly in performance, starting from 15 million IOPS in two rack units to 300 million IOPS in a single 40-rack-unit rack, and beyond to multiple racks.
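As a quick sanity check of that scaling claim, assuming 2 RU per node and the linear scaling stated above:

```python
# Sanity check of the quoted linear-scaling claim:
# 15M IOPS per 2 RU node, 40 RU of nodes in a rack.
IOPS_PER_NODE = 15_000_000
NODE_HEIGHT_RU = 2
RACK_RU = 40

nodes_per_rack = RACK_RU // NODE_HEIGHT_RU   # 20 nodes
rack_iops = nodes_per_rack * IOPS_PER_NODE   # 300,000,000
print(f"{nodes_per_rack} nodes -> {rack_iops:,} IOPS per rack")
```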
To complete our vision of Fungible Data Centers, we need to power CPU servers, GPU servers, and hard-drive servers using the Fungible DPU. Stay tuned for announcements on these solutions. But for now, let me turn to my co-founder Bertrand Serlet to discuss composability.

Composability is a central concept in computer science: you compose atomic instructions into functions, and the functions into more capable functions. That's how all software gets done. Now for hardware: if you want to create a data center, you are going to add computers, thousands of computers, to get the scale-out effects that you want. In order to add a computer, you first need to pick the configuration, how much disk it has, the RAM, and so forth; then you need to order it; then it gets delivered and you need to install it. The whole process can take weeks. It's also not very flexible, because when your workload changes you need to order different servers.

To address this lack of flexibility, people have been looking at virtualization. With a virtual machine you can define the computer you'd like to have for your given workload. That's good, but virtualization is implemented with a hypervisor that maps the low-level storage and networking calls into the bare-metal networking and storage calls, and that translation process is fairly slow.

So here comes the DPU to provide more flexibility. This is how it works: for each CPU you add a DPU, and the DPU is going to remap all the low-level calls in hardware, and because it's in hardware it's super efficient. Let's look at an example with storage, the storage virtualization, the storage composition. The CPU boots; when it boots, it tickles the DPU that's attached to it. The DPU talks to the control plane, and the control plane has a mapping that says, for each CPU, what boot image it should boot from. That boot image is transferred to the DPU, and the DPU gives the CPU the same boot image the CPU would have had if the disk were directly attached, and now the boot can proceed. The same thing happens with networking. At the end of this process, for both storage and networking, the CPU thinks it has a certain set of NICs, a certain set of overlays and underlays, and a certain set of volumes, as if all those things were directly connected to the CPU; in fact, all of them are proxied via the DPU. And because the DPU is totally programmable, we can compose GPUs and FPGAs too; it's all a small matter of programming for the DPU.

Now, given your workload, you define what the composer should compose. All your resources are logically disaggregated: your storage on one side, your servers on one side, your GPUs on another side. You use a composer to provision for the workloads that you have, and all those computers and servers connected with the DPU get composed in just a matter of seconds. When you boot your server, its proper environment appears as you expect, with all the NICs it expects and all the storage and the volumes it expects. Not only can we compose storage and networking for your workload, but we can record this composition and create a recipe, or a template, that you can tweak later. There are many parameters you can tweak, and one of them is how many instances of your server you have. That means when your workload expands, you can, with one click, reapply your recipe and get your increased capacity. Very easy to use.
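To make the recipe idea concrete, here is a minimal sketch of what recording and re-applying a composition recipe could look like against a composer-style REST API. The endpoint, paths, and field names are hypothetical illustrations, not Fungible Composer's actual interface:

```python
import requests

COMPOSER = "https://composer.example.net/api/v1"  # hypothetical endpoint

# A recipe: a declarative description of the server to compose from
# disaggregated pools. Field names are illustrative, not Fungible's schema.
recipe = {
    "name": "web-tier",
    "instances": 4,
    "nics": [{"network": "tenant-overlay-7", "bandwidth_gbps": 25}],
    "volumes": [{"size_gib": 512, "durability": "ec_4_2", "encrypted": True}],
    "boot_image": "ubuntu-20.04",
}

# Record the recipe, then apply it; the DPUs proxy the resulting NICs and
# volumes to each CPU so the composed environment appears local at boot.
requests.post(f"{COMPOSER}/recipes", json=recipe).raise_for_status()

# Later, when the workload grows, "one click" scale-out is just a re-apply
# with a larger instance count.
recipe["instances"] = 8
requests.post(f"{COMPOSER}/recipes/web-tier/apply", json=recipe).raise_for_status()
```

The point is that the composition is pure declarative data, so scaling out amounts to re-applying the same recipe with a larger instance count.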
What are the benefits of this approach? The first major benefit is performance: all the composition done by the DPU is done in hardware, and the DPU offloads work from your CPU, so you run at bare-metal speed, with no hypervisor in the way. The second benefit is that it's cheaper, because you avoid the need to over-provision your servers; you get the benefits of pooling. It's also cheaper because you have fewer SKUs: you don't need some big servers with a lot of disk and some little servers with little disk; you can have a massive reduction in the number of SKUs. But the last big win is agility: you can rearrange the servers for your given workload in just a few minutes, compared to weeks in the traditional way, where you roll a new server into your data center. This is why composability is so important, and we believe all data centers will become Fungible Data Centers, enabled with DPUs and composition.

Hi, my name is Benny Simontov, and I'm the Vice President of Product and Business Development at Fungible. We're excited to unveil the first of Fungible's line of products: the Fungible Storage Cluster, or FSC in short, an industry-leading, very high-performance, scale-out, disaggregated, secure, all-flash storage cluster. With Moore's Law flattening and software being fundamentally bottlenecked by general-purpose server architectures, CPUs are unable to cope with the large growth in storage requirements, including the ability to achieve high IOPS and throughput at low latencies, especially when you turn on data services like compression, encryption, and erasure coding. It has become increasingly clear that the solution for executing data-centric computation efficiently lies elsewhere. At Fungible we believe that the market needs to address the general-purpose CPU bottleneck at a more holistic level, combining software innovations together with hardware innovations. This is exactly why we developed the Fungible Data Processing Unit, a new class of processor designed from the ground up to handle data-centric computations at extremely high performance and to enable efficient fabric interconnect across large numbers of disaggregated nodes.

Fungible has done for storage exactly what SDN did for networking: we completely separated the storage control plane from the storage data plane. The storage data plane, running on scale-out storage nodes, is called the FS1600; the storage control plane, which runs on three or more HA nodes, is called the Fungible Composer.

Let's take a look at the FS1600 first. The FS1600 was designed on the principle of scale-out architecture, where adding more nodes increases the performance linearly; in fact, we can support up to thousands of nodes in a single data center. The FS1600 is a 19-inch-wide, 2 RU-high storage node with 24 U.2 NVMe SSDs and 800 gigabits per second of Ethernet bandwidth. It is powered by two F1 DPUs, which run the entire storage, networking, and security stacks, and as a result the FS1600 achieves extremely high performance at DAS-like latency. For example, a single FS1600 can achieve over 15 million IOPS of 4-kilobyte random reads, which is more than 80% of the theoretical SSD capability in the box. 15 million IOPS translates into 60 gigabytes per second of throughput; this is on average 5 to 10 times higher than any other x86-based storage array. Another key benefit of the FS1600 is its ability to turn on inline data services like compression, encryption, and erasure coding without impacting the performance of the node; this is typically not the case with CPU-based storage nodes. The FS1600 currently supports block storage with NVMe over TCP and NVMe over TrueFabric. We add about 10 microseconds of additional latency over DAS, which is unnoticeable by higher-level applications; the disaggregation, however, gives us the full benefit of independent scaling of compute and storage.
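The IOPS-to-throughput conversion is easy to verify: at 4 KiB per random read, 15 million IOPS works out to roughly the 60 GB/s quoted:

```python
# 15M random-read IOPS at 4 KiB per operation, expressed as throughput.
iops = 15_000_000
block_bytes = 4 * 1024         # 4 KiB blocks

gb_per_s = iops * block_bytes / 1e9
print(f"{gb_per_s:.1f} GB/s")  # ~61.4 GB/s, i.e. the ~60 GB/s quoted
```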
Fungible TrueFabric, enabled by the Fabric Control Protocol, further allows us to create large-scale clusters of FS1600s with reliable end-to-end quality of service and much lower latencies compared to other protocols. The high performance and low latency make remote SSDs appear like local SSDs, even at very large scales of deployment.

In fact, the FS1600 is the only high-performance storage solution available today that supports network erasure coding. As compared to 3x replication, Fungible achieves data durability by implementing a flexible erasure-coding algorithm across multiple nodes over the network; we support everything from 2+1 all the way to 32+8 configurations. Distributing the FS1600 nodes across different racks enhances reliability, as our EC implementation allows the FSC to recover from multiple system failures, even in the case of power failures to an entire rack. Another benefit is that our customers can now implement EC for hot data, at high performance and low latencies; up until now, EC has typically been implemented only for cold data.
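For reference, the capacity overhead of a k+m erasure-coding scheme is m/k extra raw storage per byte of data, versus (n−1) extra copies for n-way replication. A quick comparison across the configurations mentioned:

```python
# Capacity overhead: k data strips + m parity strips costs m/k extra,
# versus (n - 1) extra copies for n-way replication.
def ec_overhead(k: int, m: int) -> float:
    return m / k

for k, m in [(2, 1), (12, 3), (32, 8)]:
    print(f"EC {k}+{m}: {ec_overhead(k, m):.0%} overhead")

print(f"3-way replication: {(3 - 1):.0%} overhead")
```

This is the arithmetic behind the 25% vs. 200% comparison made later in the presentation: a 12+3 scheme pays 25% capacity overhead, versus 200% for 3-way replication.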
Let me summarize the benefits to our customers. First, the FS1600 delivers the highest performance density in the market, about 5 to 10x higher than any existing x86-based storage platform, and performance scales linearly when adding additional nodes. Secondly, our customers should see 3x lower TCO and footprint compared to current deployments; the TCO benefits come from inline compression with ratios comparable to Google's Brotli but running about 100 times faster, from the lower overhead of network EC compared to 3x replication, and from pooling of stranded SSD resources, increasing media utilization. Thirdly, the FSC supports additional advanced storage services, like multi-tenancy for cloud environments; encryption at rest and in motion, with per-volume keys; per-volume quality of service with min/max IOPS; snapshots; clones; and striping. The FS1600, powered by the Fungible DPU, enables hyper-disaggregation of data center infrastructure. Let me now turn over to my colleague Srini to talk about how the cluster of FS1600s is managed by the Fungible Composer.

Thank you, Benny. Hello, I'm Srinidhi Varadarajan, the SVP of Solutions at Fungible. It's my pleasure to introduce the other half of the Fungible Storage Cluster: the control software, known as the Fungible Composer. The Fungible Composer is a centralized management solution developed to configure, manage, orchestrate, control, and deploy the FS1600 cluster. It operates on a control plane that is distinctly separate from the data plane. The Composer itself runs on a three-node, self-contained, quorum-based cluster for scalability and high availability. The core architecture is cloud native: it consists of stateless services, with all the state confined to fully replicated databases and messaging services. We use internal load balancers to steer control-plane load across service instances, and this ensures smooth operation under data-center-scale load.

The Fungible Composer consists of five services: a storage service, a network management service, a telemetry service, a node management service that's responsible for log collection, and finally an API gateway that provides external access to the services hosted by the Fungible Composer. What is unique about our data-center-scale storage system design is the separation of the control plane from the data path. This enables the data path to be really simple, and thus robust to the variety of failure conditions that you run across in a data center. This is what you really want: moving complexity out of the critical data path and into the control plane.

The storage service in the Fungible Composer is responsible for creating and managing storage volumes and enabling storage capabilities along four axes: data durability, data security, data reduction, and performance isolation. Users of the FS1600 can easily select the data durability scheme, all the way from raw volumes with no durability (basically ephemeral storage) to erasure-coded or replicated volumes with configurable protection to recover from an arbitrary number of failures, all of this on a per-volume basis. Data security is provided by seamless volume encryption, with per-volume encryption keys and support for centralized key management via KMIP. For data reduction, the storage service provides a variety of selectable compression algorithms and data deduplication, again at per-volume granularity. Finally, performance isolation is guaranteed by per-volume quality-of-service support. If you notice, I keep harping on the per-volume basis. This is because, unlike other storage arrays that create siloed combinations of durability, security, data reduction, and performance isolation and force all data volumes to belong to one or another of these combinations, the Fungible Storage Cluster truly enables each volume to be independently configured along all four axes. This is what provides the customization that is needed for multi-tenant data centers.
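A minimal sketch of what per-volume configuration along all four axes could look like through the Composer's API gateway; the paths and field names are illustrative assumptions, not the actual schema:

```python
import requests

GATEWAY = "https://composer.example.net/api/v1"  # hypothetical API gateway

# Each volume independently picks its durability, security, reduction, and
# QoS settings. Field names here are illustrative, not the Composer's schema.
volume = {
    "name": "tenant-a-db",
    "size_gib": 1024,
    "durability": {"scheme": "ec", "data": 12, "parity": 3},
    "encryption": {"enabled": True, "kmip_key_id": "tenant-a-key"},  # per-volume key
    "reduction": {"compression": "on", "dedupe": False},
    "qos": {"min_iops": 50_000, "max_iops": 200_000},
}

resp = requests.post(f"{GATEWAY}/storage/volumes", json=volume)
resp.raise_for_status()
print(resp.json())  # e.g. the volume identifier used for NVMe-oF attachment
```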
Ease of setup is particularly important when you're talking about a scale-out storage cluster that operates at data center scale, and this is where the network service in the Fungible Composer comes in. The network service automatically detects an attached FS1600 node and uses zero-touch provisioning to add it to a Fungible Storage Cluster; cluster expansion is now a seamless operation.

The telemetry service provides a sophisticated data-gathering and distribution engine for telemetry data and for metrics gathered from the Fungible DPUs within the FS1600s. To access the vast amount of telemetry data that comes out of the DPUs, the telemetry service uses a subscription model: once you're subscribed to a metric, data points are periodically uploaded from the DPUs to the telemetry service, and you can query these metrics from the telemetry service. The telemetry service also monitors the health of the Fungible Storage Cluster as a whole, and this provides the insight that's necessary for initiating failure recovery under software, server, storage, or network failures.
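A sketch of the subscription model as described; again, the endpoints and metric name are hypothetical:

```python
import requests

GATEWAY = "https://composer.example.net/api/v1"  # hypothetical API gateway

# Subscription model: register interest in a metric, then query the data
# points the DPUs upload. Paths and metric names are illustrative only.
sub = requests.post(
    f"{GATEWAY}/telemetry/subscriptions",
    json={"metric": "volume.read_latency_us", "interval_s": 10},
)
sub.raise_for_status()
sub_id = sub.json()["id"]

# Later: query the points collected under this subscription.
points = requests.get(f"{GATEWAY}/telemetry/subscriptions/{sub_id}/data").json()
for p in points:
    print(p["timestamp"], p["value"])
```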
Because of its performance, scale, and granular configurability, the Fungible Storage Cluster is particularly well suited for high-performance, data-center-scale operations. Major use cases include parallel file systems, high-performance databases, and elastic block storage across AI and machine learning and analytics. We hope you are ready to enter a new era of hyper-disaggregated storage that provides performance, power, and cost efficiencies that have historically been enjoyed only by hyperscalers. This is not your grandma's storage.

Congratulations on the launch of the Fungible Storage Cluster. We're excited to have Fungible join the IBM partner network and further the reach of IBM Spectrum Scale. The Fungible Storage Cluster comes at an opportune time to serve an industry that's clamoring not just to maximize performance, but to turn performance density into much better footprint and cost efficiencies.

Congratulations to the Fungible team. At Tech Data we are looking forward to jointly bringing the Fungible Storage Cluster to market, to customers with extremely stringent performance and efficiency requirements. Our partners are excited to bring new technologies to market, and we look forward to winning together.

Good morning, delighted to be here. Rarely in my three-decade storage career have I been more excited than I am to discuss a breakthrough new storage product: the Fungible Storage Cluster. Let's begin by discussing the customer requirements that drove the design of the Fungible Storage Cluster. There are six key requirements that customers told us about. First, performance, both high IOPS and low latency. Second, low cost, including very high reliability at low cost. Third, as security threats are accelerating, security is a paramount requirement of our customers. Number four, we're dealing with scale-out cloud data centers, and therefore the storage also needs to be scale-out and able to handle tens of petabytes of data in a data center, and failures need to be handled in a way that is consistent with how scale-out systems must handle failures. Number five, multi-tenancy, very important because our customers will have multiple tenants within a data center, and we need to be able to protect one tenant from another, both in a security sense, by keeping them apart, and in a performance sense, to make sure that no one tenant is disturbing the performance of another tenant. And finally, ease of use is critical, both during deployment of the storage cluster and during ongoing operations, which need to be automated.

So why is the Fungible Storage Cluster better than existing solutions at meeting these six requirements? Let's take them one at a time. Performance: the Fungible DPU was specifically designed to execute data-centric workloads, and the storage stack is a perfect example of such a workload; we will be 10 times better than general-purpose CPUs on such workloads. Furthermore, TrueFabric enables very low average and tail latencies, which are important to our customers; in fact, we'll show you later that, even going over the network, our remote storage performance can be equal to, and in some cases even better than, the performance of local storage. To achieve low cost, we do three things. First, the DPU has world-class compression capability built in, better than or as good as the best compression in the world, widely known to be Google's Brotli. Secondly, we do cross-node erasure coding. This is important because, when a lot of people think about erasure coding, they think about erasure coding within a storage node, across a bunch of SSDs, to protect against SSD failures. Our EC, very importantly, is cross-node; few people do this, most people replicate across nodes, and there's a big difference in cost between erasure coding across nodes and replicating across nodes. Finally, we do cross-data-center pooling: our Fungible DPU and TrueFabric enable us to create a storage cluster that can be shared across thousands of racks; this is different from what people are doing today, which is disaggregating within a few racks.

Next, let's discuss security. As we know, current systems are limited by the speed of the CPU; with the DPU, on the other hand, we're able to do encryption at line rate, both in motion and at rest. Next, massive scale: TrueFabric and the DPU enable massive scale, the ability to scale to tens of petabytes in a data center. Furthermore, as you will see, we have an architecture that cleanly separates the data plane and the control plane, something we've learned from the networking world; this is also critical to great scaling for the Fungible Storage Cluster. Multi-tenancy is very important, and what we do with the DPU is enable per-volume quality of service, compression, data protection, and encryption keys; this is in contrast to today's storage systems, which often provide a single level of durability for all data. We're very granular in what we do. Finally, let's discuss ease of use, and why the DPU and the Fungible Storage Cluster are differentiated here. Because of the networking DNA in the company, we do something called zero-touch provisioning, not usually done in storage systems; this makes it very easy to deploy. Secondly, we make all placement decisions about where a volume goes, even when there are tens or even hundreds of storage nodes in a data center. Without our solution, customers typically have to remember which box or which node has their data; we completely absolve the customer of that responsibility.

Now let's get into performance, which is one of the key requirements. Let's begin with the top-right table. We're showing two numbers, similar to what the cloud providers do, because they offer two kinds of storage: ephemeral storage and durable storage. Our number for ephemeral storage, on a per-node basis, is 15 million IOPS; our number for durable storage is approximately 9 million IOPS when cross-node protected. Just a few years ago I was personally building a storage system based on general-purpose CPUs, and we were struggling to achieve even 1 million IOPS, so these numbers, 15 million, 9 million, are simply astounding. Furthermore, we are measuring in the lab that the Fungible Storage Cluster scales very linearly; so, for example, with seven nodes we can offer greater than a hundred million IOPS of raw performance.

Now, how good is our durable performance? Let's look at the bottom table, which shows how we compare to other scale-out storage competitors. There aren't very many scale-out storage competitors out there, so we picked Ceph for comparison, which is deployed by a few cloud service providers and is known to be scale-out, and we also picked another recent entrant, which is about the closest competitor we could find. The first two rows are about IOPS; the next two rows are about latency. As you can see from the first two rows, we are about 18 times better than Ceph on read and write IOPS, and approximately three times faster on read and write IOPS than our closest competitor, maybe two to three x for writes and three x for reads. On latencies, compared to Ceph we're about 15x faster on reads and maybe 30x faster on write latencies; and compared to our nearest competitor on read latency, while they're approaching 170 microseconds, we can stay below 170 microseconds all the way up to 4.5 million IOPS, compared to their 3 million IOPS. All of these results assume TCP; we measured with TCP, we didn't measure with TrueFabric, and as you know the performance is going to be even better for us once we take these measurements with TrueFabric.

Other than performance, customers also care about low cost, high reliability, high security, and ease of use. Let's talk about why the Fungible Storage Cluster is cost efficient. Number one, we have superior compression; our compression is as good as the best in the world, widely thought to be Google's Brotli. Second, we have extremely low-overhead durability: as we said, we do cross-node erasure coding, and 12+3 erasure coding has only 25 percent overhead, whereas three-way replication has 200 percent overhead; that makes a big difference. Thirdly, because encryption is built into the DPU, we can do encryption without expensive self-encrypting drives. And finally, because we have such high performance in a single node, we can support the needs of a large number of customers in a very small footprint: by doing 15 million IOPS, we can do in a single node what someone else doing three million IOPS per node would require five nodes to achieve. So we can be significantly cheaper as a result.
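The footprint arithmetic is straightforward, using the per-node figures just quoted:

```python
# Footprint comparison: nodes needed to serve a target IOPS level at
# 15M IOPS/node (FS1600, as quoted) versus a 3M IOPS/node alternative.
import math

target_iops = 15_000_000
for per_node in (15_000_000, 3_000_000):
    nodes = math.ceil(target_iops / per_node)
    print(f"{per_node:>12,} IOPS/node -> {nodes} node(s)")
```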
Now let's talk about why we have cost-efficient reliability. Look at the table in the middle of the chart, and you'll see that the top two rows are two- and three-way replication, and the next three rows are various erasure-coded configurations. The middle column is the cost of achieving the given reliability (lower is better), and the last column is the probability of data loss (once again, lower is of course better). As you can see, 12+3 erasure coding gives almost five orders of magnitude better reliability than three-way replication. Notice that cross-node EC has the lowest cost and the best reliability, so you can have the best of both worlds. Furthermore, our performance with erasure coding is superior to the performance that other competitors get using the more expensive replication, so we really are better in three dimensions here.

Now let me explain why the Fungible Storage Cluster is highly secure. There are four reasons. Number one, we're able to encrypt at line rate, at 800 gigabits per second, almost a terabit per second. We have per-tenant keys, which can be stored in the secure enclave inside the DPU. Every customer can get their own key. And we only allow signed code to execute in the DPU, using our secure boot protocol.

Let's now switch gears to ease of use. The FSC achieves ease of use by allowing automated management using a single REST API, which can be used to manage tens of petabytes of data in a data center. We've been talking to customers, and one of our largest customers found this a very compelling proposition because of the extremely low opex and the ability to manage and automate all of this from their orchestration system; they loved that about our product. When we create a volume, we decide where it goes in the system, even though there may be a hundred nodes in the system; customers don't have to remember where the volume is placed. Furthermore, all the alerts and notifications from our system are presented in a standard way to cloud orchestration systems, allowing them to automate handling of these failures, and this automation drives down the opex. Ease of use is further enhanced by the fact that the customer does not need to manage multiple data silos, because we can support diverse kinds of data in a single store. In the example at the bottom of the chart, we show three different customers all storing their data on the same Fungible Storage Cluster, with different encryption keys, different durability requirements, different compression requirements, and each with their own performance SLA, which we guarantee, all handled within a single storage cluster. The ability to consolidate workloads and simplify the infrastructure drives opex down further, because the customer doesn't need multiple silos.

We're now going to turn and discuss some use cases for the Fungible Storage Cluster, beginning with the disaggregated scale-out cloud storage use case. This is the canonical use case that drove the design of the product, and we've already shared with you why we are the best at satisfying the six key requirements that scale-out cloud customers have for this use case, so I'm not going to belabor that point. But I'm going to show you a little performance data comparing our performance for scale-out disaggregated storage to that of the public cloud vendors, and we're going to look at per-volume performance results, because these are easily found on the public cloud providers' websites. What we're going to show here is that we're better than the public cloud providers, who have their own scale-out storage, except those are proprietary implementations; our own implementation uses a standard NVMe-over-Fabrics implementation. The performance numbers for Google Cloud and for AWS, as I mentioned, come directly from their websites; our performance was measured in our own Fungible labs, and several customers have done these performance measurements and they all match. As you can see here, we're better by anywhere from a factor of three to four over these proprietary solutions from the public cloud providers, even though we're using a completely open, standard implementation. In fact, I can share with you that one of our customers specifically moved from the public cloud to a private cloud, used our Fungible Storage Cluster, and told us that we are 4x better compared to what they were experiencing on the public cloud.
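Because volumes are served over standard NVMe-oF, attaching one from a Linux host can use the stock nvme-cli tooling rather than a proprietary driver. A minimal sketch, with placeholder address and NQN:

```python
import subprocess

# Attach a remote volume over standard NVMe/TCP using stock nvme-cli.
# The address, service port, and NQN are placeholders for illustration.
subprocess.run(
    [
        "nvme", "connect",
        "-t", "tcp",                              # transport
        "-a", "192.0.2.10",                       # target address (placeholder)
        "-s", "4420",                             # NVMe/TCP service port
        "-n", "nqn.2020-01.example:tenant-a-db",  # volume NQN (placeholder)
    ],
    check=True,
)
# The remote volume then appears as a local block device, e.g. /dev/nvme1n1.
```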
The second use case I want to share with you today is with IBM Spectrum Scale, or GPFS, which as you know is one of the premier scale-out file systems in the world. IBM Spectrum Scale is used in many verticals, such as high-performance computing, AI and machine learning, big data analytics, and private cloud, and across many industries, like banking, insurance, media, defense, telco, and life sciences; it's a very mature product used in many, many places. There are three key advantages of using the Fungible Storage Cluster as the storage for GPFS, as shown in the chart. First, you get very high IOPS for small-block workloads (GPFS is typically very good at large-block workloads): up to 87 million IOPS per petabyte. In fact, the target we were given to beat was 10 million IOPS per petabyte, and we beat it handily. The second advantage of using the Fungible Storage Cluster with GPFS is lower cost, because we reduce the load on the application nodes by offloading compression and encryption into our DPU. Thirdly, as we've already shared with you, we do cross-storage-node EC, which enhances reliability while driving down cost at the same time. This bar graph shows the performance of GPFS running on the Fungible Storage Cluster: as you can see, we achieve almost 10 million random-read IOPS with 12 hosts running GPFS, driving just two FSC nodes. Extrapolating, we expect to be able to get 30 million IOPS with just six FSC nodes; very powerful performance.

Finally, let's talk about the third use case: scale-out databases, where we replace direct-attached storage with scale-out FSC storage. We see four advantages. First, you get better performance, as we will show in the next chart: both better throughput and lower latency. Second, we can get you lower cost by using FSC, because there are no stranded resources as there are with direct-attached storage, and we reduce load because of the offloaded compression. Third, we offer better ease of use, because we can do centralized data management. And finally, we offer better security because of the use of per-tenant keys. Here we show the performance of the Fungible Storage Cluster versus direct-attached storage for a NoSQL database, specifically Cassandra. These results assume a 95/5, that is 95% read, 5% write, workload, and we are using the Yahoo! Cloud Serving Benchmark for these results. In the case of DAS, we assume that compression is enabled in Cassandra; for the Fungible Storage Cluster, we enable compression on the cluster itself, using our DPU, and turn off compression in Cassandra. We show four charts of latency measurements: read and write latencies, both average and tail. The gray represents DAS in these charts, and the blue represents the Fungible Storage Cluster; lower is better, and as you can see the FSC provides uniformly better latencies. Finally, on the left we show a fifth chart, showing transactions per second; here, of course, higher is better, and as you can see the FSC provides superior transactions per second as well. So both latency and transactions per second are better. Clearly, these are very impressive results, given that we're going over the network to get to the Fungible Storage Cluster, compared to storage that is local DAS.
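For context, a 95/5 read/write mix corresponds to YCSB's standard Workload B. As a toy illustration of how such a mix blends per-operation latencies into a single reported average (the latency figures below are made up, not measured):

```python
# Blended mean latency of a 95/5 read/write mix (YCSB Workload B ratios).
# The per-op latencies below are hypothetical, for illustration only.
read_ratio, write_ratio = 0.95, 0.05
read_latency_us, write_latency_us = 200.0, 400.0

blended = read_ratio * read_latency_us + write_ratio * write_latency_us
print(f"blended mean latency: {blended:.0f} us")  # 210 us
```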
Finally, let me conclude. I hope by now I have convinced you that the Fungible Storage Cluster is a breakthrough storage platform. Most of the differentiation comes from the use of the Fungible DPU and the associated software stack that was co-designed with the Fungible DPU to drive these amazing levels of performance. As I showed you through the presentation, we meet the six key customer requirements, high performance, low cost, security, reliability, multi-tenancy, and ease of use, better than any other storage system. We also took you through three very compelling use cases: scale-out cloud, Spectrum Scale (GPFS), and various scale-out databases. I hope I have intrigued you enough about the Fungible Storage Cluster, and I really look forward to your questions. This marks the end of all of our presentations, and we're now going to turn over to the live Q&A. Thank you so much.

A huge congrats to the Fungible team; the SoftBank team joins you in celebrating this remarkable milestone. In a post-Moore's-Law world, where our infrastructure cannot keep pace with exponentially growing workload demands, we are counting on your differentiated DPU systems to usher in the era of high-performance composable infrastructure at the core and the edge. Congrats again, Fungible.

Congratulations to the Fungible team for hitting such an important milestone. As fans of the Fungible DPU, we're excited to see its first incarnation in a storage platform, and with this first product we are looking forward to seeing the Fungible storage platform make a dent in the next-generation infrastructure space.

Congrats to Pradeep, Bertrand, and the whole Fungible team on the recent launch. The Fungible Data Processing Unit, or DPU, is a transformational technology, and it's great to see its value and benefits translate to a storage use case. We are looking forward to tremendous success for the Fungible DPU in many storage use cases across many customers. Congrats again, team.

Congratulations to the Fungible team for the successful launch of the Fungible Storage Cluster. I look forward to many more Fungible DPU-powered platforms in the future.

Hey, big congratulations, team Fungible. This launch marks an important milestone for the company. We are thrilled to see the innovations in Fungible's labs deliver tangible value into the hands of customers, and we wish you every success in building technology and market leadership in the industry.

[Music]
Info
Channel: Fungible
Views: 260,463
Keywords: data, datacenter, technology, cloud, network, datacentric, fungible, softwarearchitecture, datacenterarchitecture
Id: pTdB6zeTL6w
Length: 50min 26sec (3026 seconds)
Published: Tue Oct 27 2020