Organizations trying to reduce their storage total cost of ownership (TCO) need to figure out how to reduce storage complexity. If an organization had only one workload, and that workload didn’t need data protection, storage management would be simple: IT would only need to buy and manage one storage system. Organizations, of course, have many more than one workload; they have dozens, if not hundreds, and all of those workloads need protection from disasters and ransomware.
The problem is that the storage industry has not responded with a single storage system that can support all of these workloads’ requirements. Instead, storage vendors have made the workload-variety problem worse with limited-use-case storage systems. Most storage systems on the market have one ideal use case and one or two secondary use cases. The result is storage sprawl, where the customer has to buy five or six different storage systems to meet the requirements of the organization’s users and applications.
There is even sprawl within use cases. For example, with network-attached storage (NAS), organizations may have one mainstream solution for user home directories and another for high-performance unstructured data. Or, in the backup use case, customers are now being told that a best practice is to have one backup storage target for rapid data ingest and another for long-term immutable data retention.
Reduce Storage Complexity with Efficient Hardware Optimization
The first step in reducing storage complexity is optimally leveraging the latest innovations in storage hardware. Modern storage hardware can deliver hundreds of thousands of IOPS and up to 20TB of capacity per drive. Most storage vendors, however, require dozens of solid-state drives (SSDs) to deliver a couple hundred thousand IOPS and don’t allow their customers to use high-density drives. Many organizations should be able to meet their performance needs with a dozen flash drives and their capacity needs with two dozen hard disk drives (HDDs).
Why is there such a disparity between the potential of the hardware and the results the customers are seeing? Most storage vendors, even though their product may be new, utilize the same decades-old, multi-layer I/O stack to manage data instead of creating new I/O algorithms specific to their solution. These older I/O stacks enable the vendor to get to market quickly but force customers to buy significantly more hardware than they should in order to meet performance and capacity demands. The dependency on these legacy multi-layer I/O stacks also forces vendors to focus on a specific use case instead of consolidation. Even though the raw potential of the hardware resources can address multiple use cases, the inefficiency of the obsolete storage software prohibits customers from fully tapping into those resources.
StorONE, instead of racing to market, took the time to rewrite the storage I/O stack for maximum efficiency. The development effort took eight years, but the software can now deliver on the potential of the latest hardware innovations. Our single-layer I/O engine can deliver 500K IOPS or more from twelve flash drives. It also supports high-density 20TB HDDs without IT having to worry about week-long rebuilds; StorONE can recover a 1PB volume of 20TB HDDs in less than three hours. Our high-performance auto-tiering algorithm automatically manages data movement between flash drives and HDDs, delivering all-flash array performance at hybrid storage prices.
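To put those rebuild numbers in perspective, here is a back-of-the-envelope sketch. The ~250 MB/s sustained HDD throughput is an assumption for illustration, not a StorONE figure; the 20TB drive size and the 1PB-in-three-hours claim come from the text above.

```python
# Back-of-the-envelope math for the rebuild claims above.
# Assumption: ~250 MB/s sustained HDD throughput (illustrative only).
# Drive vendors use decimal units, so 1 TB = 10**12 bytes.

TB, PB = 10**12, 10**15

# Rebuilding one 20TB drive serially, as legacy RAID schemes do:
drive_bytes = 20 * TB
hdd_bps = 250 * 10**6                          # assumed sustained rate
serial_hours = drive_bytes / hdd_bps / 3600    # ~22 hours at full speed;
                                               # throttled rebuilds stretch to days

# Recovering a 1PB volume in under 3 hours instead implies the rebuild
# is spread across many drives in parallel:
aggregate_gbps = 1 * PB / (3 * 3600) / 10**9   # ~93 GB/s aggregate

print(f"serial rebuild: {serial_hours:.0f} h; "
      f"required aggregate bandwidth: {aggregate_gbps:.0f} GB/s")
```

The arithmetic shows why the claim requires a distributed rebuild: no single drive can sustain anywhere near 93 GB/s, so the work must be spread across the whole pool.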
The StorONE Engine’s efficiency enables maximum utilization of hardware resources, allowing a single system to support performance-demanding and capacity-centric workloads simultaneously. Using both flash and hard drives applies those resources cost-effectively, creating a very competitive starting price point.
Reduce Storage Complexity with Practical Consolidation
If the storage solution can efficiently use hardware resources, the next step is consolidating workloads onto that efficient platform. Some all-flash array (AFA) vendors claim they can consolidate workloads, but their claim requires IT to buy into the philosophy that all data should be on high-performance flash storage, a philosophy most organizations do not share. Capacity-centric workloads will continue to use far more cost-effective HDDs, today and in the future. Consolidating all data onto a single system that requires the wholesale adoption of a media technology that is ten times more expensive than the alternative is not practical.
StorONE’s vRAID makes the use of high-density drives safe, and the consistent performance of our auto-tiering algorithm, which automatically moves data between flash drives and HDDs, makes mixing the two media types practical.
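As a rough illustration of the auto-tiering idea, here is a minimal sketch of a recency-based promote/demote policy. This is a hypothetical toy, not StorONE's actual algorithm; the class names and the one-hour threshold are invented for this sketch.

```python
# Toy recency-based tiering sketch (hypothetical; not StorONE's algorithm).
# Hot extents live on flash; extents untouched past a threshold demote to HDD.

import time

FLASH, HDD = "flash", "hdd"
COLD_AFTER_S = 3600  # demotion threshold in seconds (illustrative value)

class Extent:
    """A chunk of a volume that lives on exactly one tier at a time."""
    def __init__(self, name):
        self.name = name
        self.tier = FLASH              # new writes land on flash first
        self.last_access = time.time()

    def touch(self):
        """Record an access; accessed data is promoted back to flash."""
        self.last_access = time.time()
        self.tier = FLASH

def demote_cold(extents, now=None):
    """Move extents that have not been touched recently down to HDD."""
    now = time.time() if now is None else now
    for e in extents:
        if e.tier == FLASH and now - e.last_access > COLD_AFTER_S:
            e.tier = HDD
```

A real implementation would weigh access frequency, I/O size, and queue depth rather than recency alone, but the sketch captures the flash-for-hot, HDD-for-cold division the text describes.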
Beyond supporting mixed media, practical storage consolidation needs to provide control. IT needs to ensure that performance-demanding, mission-critical workloads get the resources they need without starving other workloads. Again, the AFA approach of giving every workload high-performance flash is impractical, but so is a hybrid system that delivers inconsistent performance.
StorONE’s Virtual Storage Containers (VSCs) enable organizations to mix these workloads and meet each workload’s performance and data resiliency requirements. VSCs enable IT to set the performance, capacity, drive redundancy, data protection intervals and retention, data encryption, and disaster recovery settings for each workload.
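To make the per-workload control concrete, here is a hypothetical sketch of what such settings might look like as a configuration object. The field names and example values are invented for illustration; they are not StorONE's actual VSC API, only a mapping of the settings listed above.

```python
# Hypothetical per-workload settings object (field names invented for
# this sketch; not StorONE's actual API). It mirrors the knobs the text
# lists: performance, capacity, redundancy, protection, encryption, DR.

from dataclasses import dataclass

@dataclass
class WorkloadContainer:
    name: str
    min_iops: int                # performance floor for this workload
    max_iops: int                # ceiling, so it cannot starve neighbors
    capacity_tb: float
    drive_redundancy: int        # drives that can fail without data loss
    snapshot_interval_min: int   # data protection interval
    snapshot_retention_days: int # data protection retention
    encrypted: bool
    replicate_for_dr: bool       # disaster recovery setting

# A latency-sensitive database and a capacity-centric backup target
# can then share one system with very different settings:
mission_critical = WorkloadContainer(
    name="oltp-db", min_iops=100_000, max_iops=300_000,
    capacity_tb=50, drive_redundancy=2,
    snapshot_interval_min=5, snapshot_retention_days=30,
    encrypted=True, replicate_for_dr=True,
)
backup_target = WorkloadContainer(
    name="backup", min_iops=0, max_iops=20_000,
    capacity_tb=500, drive_redundancy=3,
    snapshot_interval_min=60, snapshot_retention_days=365,
    encrypted=True, replicate_for_dr=False,
)
```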