DRS (Distributed Resource Scheduler) for VM workloads has been a part of VMware since VI3. The concept is simple: vCenter evaluates the compute and memory consumption of each host in a DRS-enabled cluster. Based on policies you set, DRS can automatically initiate vMotion migrations to balance the workload across hosts in the cluster, or to evacuate guests from a host in order to place it into standby.
Storage DRS (sDRS) is one of the new features added to vSphere 5 Enterprise Plus, and its impact on storage utilization should be just as ground-breaking as that of its predecessor, Host DRS (hDRS).
I attended the breakout VSP1823/sDRS, presented by Manish Lohani and Anne Holler, a Product Manager and an Engineer at VMware, respectively.
As one might expect, sDRS is the logical product of Storage vMotion under the control of DRS. In conjunction with SIOC (storage I/O control)—which helps to throttle the storage access within the confines of a single datastore—sDRS takes historical and instantaneous data on storage performance and redistributes VMDKs for a VM to different datastores within a storage cluster.
The sDRS product team started with a basic problem statement: storage placement is not automated. This applies for both VM creation as well as VM growth. In summary, sDRS intends to solve the following three problems that otherwise must be managed manually.
- VMDK Placement
- Avoiding out-of-space conditions
- I/O load balancing
In a well-balanced cluster and storage environment, it’s possible to create a new workload on a datastore that then causes that datastore to become a performance bottleneck. Other than removing the offending workload completely, the administrator must determine a new location for the load without merely transferring the problem along with the VMDKs. Today, that process can be trial-and-error, or it can mean deploying datastores so over-provisioned that the problem never arises in the first place.
Further, a cluster and its storage that are well balanced today may become unbalanced simply through ongoing utilization, even when no new workloads have been added to the system.
Finally, I/O load patterns may be “bursty” or otherwise cyclic; without a history, a datastore could be performing acceptably when administrators are making storage decisions, while peak utilization could make the performance of the datastore unacceptable.
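To make the three problems concrete, the decision sDRS automates can be sketched in a few lines. Everything below is invented for illustration (the class, function names, and thresholds are mine, not VMware’s actual algorithm or any vSphere API): pick a VMDK migration that relieves a datastore exceeding a space or historical-latency threshold, without pushing the target over the same limits.

```python
# Conceptual sketch only -- NOT the real sDRS algorithm or a VMware API.
from dataclasses import dataclass, field

@dataclass
class Datastore:
    name: str
    capacity_gb: float
    used_gb: float
    avg_latency_ms: float                      # historical average, since sDRS uses history
    vmdks: dict = field(default_factory=dict)  # vmdk name -> size in GB

    @property
    def used_pct(self):
        return 100 * self.used_gb / self.capacity_gb

def pick_migration(cluster, space_threshold_pct=80, latency_threshold_ms=15):
    """Return (vmdk, source, target) to recommend, or None if balanced."""
    overloaded = [d for d in cluster
                  if d.used_pct > space_threshold_pct
                  or d.avg_latency_ms > latency_threshold_ms]
    # Work on the most-overloaded datastore first, moving its largest VMDK.
    for src in sorted(overloaded, key=lambda d: -d.used_pct):
        for vmdk, size in sorted(src.vmdks.items(), key=lambda kv: -kv[1]):
            # Prefer the emptiest target that stays under both thresholds.
            for dst in sorted(cluster, key=lambda d: d.used_pct):
                if dst is src:
                    continue
                fits = 100 * (dst.used_gb + size) / dst.capacity_gb <= space_threshold_pct
                if fits and dst.avg_latency_ms <= latency_threshold_ms:
                    return vmdk, src.name, dst.name
    return None
```

The point of the sketch is the dual test: a recommendation must both cure the out-of-space (or latency) condition at the source and avoid recreating it at the target, which is exactly the trap the manual trial-and-error process falls into.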
In order to perform its magic, sDRS introduces “datastore cluster” as a new management object. Like a host cluster, multiple datastores can be added to a cluster that is then managed by vCenter as a single resource pool. For that reason, clusters you create should be comprised of similar-performing datastores; in the hDRS analogy, you don’t put dissimilar hosts (e.g. Intel & AMD based systems) together in a host cluster.
The relationship between host clusters and storage clusters is many-to-many; the boundary for both is the vCenter instance. Ideally, the configuration would include fully-connected hosts and datastores, but it is not a requirement; sDRS “understands” a partially-connected environment and takes that into account when making recommendations.
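That partial-connectivity constraint amounts to a simple filter: a VMDK can only be recommended to a datastore that is visible to the VM’s host. A hypothetical sketch (the function and data shapes are mine, not a VMware API):

```python
# Illustrative only -- not a vSphere API.
def connected_targets(vm_host, cluster_datastores, connectivity):
    """Filter candidate datastores to those the VM's host can actually see.

    connectivity: dict mapping host name -> set of datastore names it mounts.
    """
    visible = connectivity.get(vm_host, set())
    return [ds for ds in cluster_datastores if ds in visible]
```

In a fully-connected configuration this filter is a no-op, which is why full connectivity remains the ideal: every datastore in the cluster is a candidate for every recommendation.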
In almost every other way, sDRS is equivalent to hDRS:
- Affinity rules for maintaining two (or more) VMDKs on the same datastore.
- Anti-affinity rules for separating two (or more) VMDKs across different datastores in the cluster.
- Maintenance mode on a member datastore will cause sDRS to evacuate the VMDKs to other datastores in the cluster.
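The affinity rules above reduce to two set checks on any proposed placement, which can be sketched as follows (names are invented for illustration, not a VMware API):

```python
# Conceptual sketch of affinity/anti-affinity validation -- not a vSphere API.
def placement_ok(placement, affinity_rules, anti_affinity_rules):
    """placement: dict mapping VMDK name -> datastore name.
    Each rule is a group (list) of VMDK names."""
    for group in affinity_rules:
        # Affinity: every VMDK in the group must land on one datastore.
        if len({placement[v] for v in group}) > 1:
            return False
    for group in anti_affinity_rules:
        # Anti-affinity: every VMDK in the group must land on a distinct datastore.
        if len({placement[v] for v in group}) < len(group):
            return False
    return True
```

Maintenance mode is then just a placement problem with one extra constraint: the evacuating datastore is removed from the candidate set, and the rules above must still hold at the destinations.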
Unlike hDRS, the creation of a VM on sDRS-enabled datastores offers a wealth of options for suggesting a target for VMDKs, all of which can be overridden.
Another way that sDRS differs from hDRS is how the automation is “tuned”: with hDRS, the administrator has a single setting for how aggressively it manages the load on the hosts, while sDRS offers more granular control through its “star rating.”
Finally, sDRS differs from hDRS by defaulting to non-automatic operation, where hDRS defaults to “fully automated.” In fact, automation is actively discouraged for certain types of SAN environments, specifically those where optimization and load balancing are already done “in the background” (e.g., EMC FAST or EqualLogic multi-member storage pools).
And, as one might expect, the implementation of sDRS means a number of new management views and performance graphs have been added to vCenter.