Storage QoS ensures specific performance levels for applications and workloads. Will it push datacenters to 100% virtualization?
Storage system capacity is no longer a top concern among the IT professionals I speak with. It has been replaced with "How do I maintain top performance for a given application?" This is especially true in the virtual environment, where storage I/O is shared. Without a guarantee of a specific performance level, mission-critical applications simply will not be virtualized.
Storage Quality of Service (QoS) is similar to network QoS: It ensures that a specific application or workload always gets a defined performance level. For storage systems, this level is typically expressed in IOPS. More and more storage systems now claim to provide some form of QoS.
Storage QoS typically sets the maximum number of IOPS that a given application may use. To ensure that the application gets this performance level, the IOPS of the storage system are totaled and allocated to each application. Once the total IOPS of the storage system have been assigned, you must either upgrade that system or purchase another one.
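To picture this accounting, here is a minimal sketch of the system's total IOPS as a fixed pool that reservations are drawn from. The class and names are hypothetical, purely for illustration; real arrays expose this through their own management interfaces:

```python
class IopsBudget:
    """Toy model of QoS allocation: total system IOPS form a fixed pool."""

    def __init__(self, total_iops):
        self.total_iops = total_iops
        self.allocations = {}  # application name -> reserved IOPS

    def allocated(self):
        return sum(self.allocations.values())

    def allocate(self, app, iops):
        # Refuse the reservation once the pool is exhausted: at that
        # point the only options are to upgrade the system or buy another.
        if self.allocated() + iops > self.total_iops:
            raise RuntimeError(
                f"cannot reserve {iops} IOPS for {app}: pool exhausted")
        self.allocations[app] = iops


budget = IopsBudget(total_iops=100_000)
budget.allocate("oltp-db", 60_000)
budget.allocate("mail", 30_000)
# Reserving 20,000 more IOPS would raise: only 10,000 remain in the pool.
```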
Some systems let you over-allocate IOPS, but this can be risky. It is similar to thin provisioning capacity, where you allocate more capacity than you have, assuming you will never fill all volumes at once. The same idea holds for performance, in theory: not all of your applications will demand peak load at the same time.
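The over-allocation bet can be expressed as an oversubscription ratio, just as in thin provisioning. A brief sketch; the 1.5x factor is an illustrative assumption, not a recommendation:

```python
def max_reservable_iops(physical_iops, oversubscription_ratio=1.5):
    """IOPS the system will let you promise in total, betting that not
    all applications hit their peak load at the same time."""
    return int(physical_iops * oversubscription_ratio)


# A 100,000-IOPS system oversubscribed 1.5x can promise 150,000 IOPS.
# The promise is safe only while concurrent demand stays under the
# physical limit; if every workload peaks together, guarantees break.
```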
Throttling vs. QoS
In my opinion, true QoS provides the ability to set both a lower and an upper storage performance parameter for each application. Throttling sets just a minimum threshold, ensuring that an application will always get at least that level of storage performance; if the system can deliver more, it will do so. For a midsized datacenter, the minimum requirement is probably sufficient.
For the enterprise and the cloud provider, however, the minimum is not enough. The problem with lower-limit QoS is that applications get more storage performance than they need until the system comes under load. While this may be fine in some environments, enterprises and cloud providers want applications to get the exact experience their customers are paying for: no more, no less. That means being able to set both a minimum and a maximum threshold.
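The difference between throttling (a floor only) and full QoS (floor plus ceiling) comes down to how delivered performance is clamped. A minimal sketch, with hypothetical names; passing no ceiling models throttling-only behavior:

```python
def entitled_iops(demand, min_iops, max_iops, system_spare):
    """IOPS an application is entitled to receive.

    min_iops is the guaranteed floor, max_iops the ceiling the customer
    pays for (None = throttling only, no ceiling), and system_spare the
    unreserved capacity available beyond the floor.
    """
    entitled = min(demand, min_iops + system_spare)  # bounded by what exists
    entitled = max(entitled, min_iops)               # the floor always holds
    if max_iops is not None:
        entitled = min(entitled, max_iops)           # no more than is paid for
    return entitled


# Throttling only: on a quiet system the app bursts well past its floor.
burst = entitled_iops(50_000, min_iops=10_000, max_iops=None,
                      system_spare=100_000)   # -> 50_000
# Full QoS: the same burst is capped at the purchased maximum.
capped = entitled_iops(50_000, min_iops=10_000, max_iops=20_000,
                       system_spare=100_000)  # -> 20_000
```

The ceiling is what keeps a tenant's experience consistent: performance never quietly degrades later, because it was never allowed to exceed the purchased level in the first place.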
Flash vs. hybrid
The final challenge of delivering storage QoS is how to configure the system itself. Most storage systems that deliver QoS capabilities leverage flash. If the system is all-flash, performance is consistent, and allocating it to specific workloads is relatively straightforward.
Hybrid systems, on the other hand, are more difficult. To keep costs down, they leverage both disk and flash: two tiers of storage with very different performance profiles. Maintaining a QoS guarantee requires careful management of the flash tier, so that applications that need a high level of IOPS performance are either always on flash or are quickly moved to it as demand begins to peak.
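One way to think about that flash-tier management is a placement policy that promotes a workload to flash before the disk tier becomes its bottleneck. The sketch below is a simplified assumption, not any vendor's algorithm; the disk-tier capability and the 0.8 promotion trigger are illustrative numbers:

```python
DISK_TIER_IOPS = 5_000    # assumed IOPS the disk tier can sustain per app
PROMOTE_THRESHOLD = 0.8   # promote before the disk tier saturates


def choose_tier(current_demand, guaranteed_iops, pinned_to_flash=False):
    """Return the tier ("flash" or "disk") an application should live on."""
    if pinned_to_flash or guaranteed_iops > DISK_TIER_IOPS:
        return "flash"    # disk alone can never meet the guarantee
    if current_demand >= PROMOTE_THRESHOLD * DISK_TIER_IOPS:
        return "flash"    # demand is peaking; move the data before it suffers
    return "disk"         # cheap capacity is fine while demand is low
```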
QoS on a hybrid system is feasible, but it requires more work on the part of the storage software developer, and some pointed questions from IT professionals.
Storage QoS makes it safe to virtualize mission-critical applications by assuring their performance, and it should be considered a key requirement of any new storage system. Storage QoS may well be the key element in pushing datacenters to 100% virtualization.
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, …