Tuesday
Sep 06, 2011

Reminder: Midwest Regional on December 6

Check your calendar; if you don’t have Tuesday, December 6, 2011 reserved for the Midwest Regional VMUG Conference, hosted by the Kansas City VMware User Group at Arrowhead Stadium, now is a great time to do so!

Registration for the event is open, and we have a great slate of vendors and presenters on board to make it a day well spent.

Plan to stay the whole day! Don’t just come for an hour or two; don’t come for the morning and skip the afternoon! We have cool stuff in the works for the entire day; it will be worth your while to stick around…

Monday
Aug 29, 2011

VMworld 2011 session: VSP1823, Storage Distributed Resource Scheduler

DRS (Distributed Resource Scheduler) for VM workloads has been a part of VMware since VI3. The concept is that vCenter evaluates the CPU and memory consumption of each host in a DRS-enabled cluster and, based on policies you set, can automatically initiate vMotion to migrate VMs, either balancing the workload across hosts in the cluster or evacuating guests from a host in order to place it into standby.
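
For those who like to think in code, here’s a toy sketch (in Python, nothing VMware ships) of the kind of imbalance check DRS performs; the threshold and the host numbers are made up purely for illustration.

```python
# Toy sketch only: not VMware's algorithm. Models the kind of utilization
# comparison DRS makes before recommending a vMotion.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_used_mhz: int
    cpu_total_mhz: int

    @property
    def cpu_util(self) -> float:
        return self.cpu_used_mhz / self.cpu_total_mhz

def recommend_migration(hosts, gap_threshold=0.15):
    """Suggest shifting load from the busiest to the least-busy host when
    the utilization gap exceeds a (made-up) threshold."""
    busiest = max(hosts, key=lambda h: h.cpu_util)
    idlest = min(hosts, key=lambda h: h.cpu_util)
    if busiest.cpu_util - idlest.cpu_util > gap_threshold:
        return f"vMotion a VM from {busiest.name} to {idlest.name}"
    return None

hosts = [Host("esx01", 18000, 24000), Host("esx02", 6000, 24000)]
print(recommend_migration(hosts))  # -> vMotion a VM from esx01 to esx02
```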

Storage DRS (sDRS) is one of the new features added to vSphere 5 Enterprise Plus, and its impact on storage utilization should be just as ground-breaking as that of its predecessor, Host DRS (hDRS).

I attended the VSP1823 breakout on sDRS, presented by Manish Lohani and Anne Holler from VMware Product Management and Engineering, respectively.

As one might expect, sDRS is the logical product of Storage vMotion under the control of DRS. In conjunction with SIOC (storage I/O control)—which helps to throttle the storage access within the confines of a single datastore—sDRS takes historical and instantaneous data on storage performance and redistributes VMDKs for a VM to different datastores within a storage cluster.

The sDRS product team started with a basic problem statement: storage placement is not automated. This applies to both VM creation and VM growth. In summary, sDRS intends to solve the following three problems that otherwise must be managed manually.

  • VMDK Placement
  • Avoiding out-of-space conditions
  • I/O load balancing
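
To make those three problems concrete, here’s a rough placement sketch; the 80% space ceiling, the latency figures, and the datastore names are illustrative stand-ins, not VMware’s actual thresholds or algorithm.

```python
# Toy sketch only: an "sDRS-like" initial placement that screens out
# datastores that would blow past a space ceiling, then picks the
# lowest-latency survivor. The ceiling and figures are illustrative.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    capacity_gb: int
    used_gb: int
    avg_latency_ms: float

    def used_fraction_after(self, vmdk_gb: int) -> float:
        return (self.used_gb + vmdk_gb) / self.capacity_gb

def place_vmdk(vmdk_gb, datastores, space_ceiling=0.80):
    candidates = [d for d in datastores
                  if d.used_fraction_after(vmdk_gb) < space_ceiling]
    return min(candidates, key=lambda d: d.avg_latency_ms, default=None)

pool = [
    Datastore("ds-sas-01", 2048, 1500, 4.2),  # fast, but would exceed 80% full
    Datastore("ds-sas-02", 2048, 900, 6.8),   # slower, but has headroom
]
chosen = place_vmdk(200, pool)
print(chosen.name if chosen else "no datastore has headroom")  # -> ds-sas-02
```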

In a well-balanced cluster and storage environment, it’s possible to create a new workload on a datastore that then causes that datastore to become a performance bottleneck. Short of removing the offending workload completely, the administrator must determine a new location for the load without merely transferring the problem along with the VMDKs. Today, that process is either trial and error or the use of datastores so over-powered that the problem never arises in the first place.

Further, a cluster and storage environment that is well balanced today may become unbalanced through organic growth in utilization, even when no new workloads have been added to the system.

Finally, I/O load patterns may be “bursty” or otherwise cyclic; without a history, a datastore could be performing acceptably when administrators are making storage decisions, while peak utilization could make the performance of the datastore unacceptable.
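
A quick, invented illustration of why that history matters: the single “right now” sample below looks healthy, while a peak-aware view of the same trace does not.

```python
# Toy sketch only: an invented latency trace showing how a point-in-time
# sample can look healthy while the history reveals the bursts.
import statistics

history_ms = [5, 6, 5, 7, 6, 48, 52, 6, 5, 49, 7, 6]      # bursty trace
snapshot = history_ms[-1]                                   # "right now" reading
p95 = sorted(history_ms)[int(0.95 * len(history_ms)) - 1]   # rough 95th percentile

print(f"instantaneous: {snapshot} ms")                  # 6 ms -- looks fine
print(f"~95th percentile of history: {p95} ms")         # 49 ms -- not fine
print(f"mean of history: {statistics.mean(history_ms):.1f} ms")
```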

In order to perform its magic, sDRS introduces the “datastore cluster” as a new management object. Like a host cluster, multiple datastores can be added to a cluster that is then managed by vCenter as a single resource pool. For that reason, the clusters you create should be composed of similar-performing datastores; in the hDRS analogy, you don’t put dissimilar hosts (e.g., Intel- and AMD-based systems) together in a host cluster.

The relationship between host clusters and storage clusters is many-to-many; the boundary for both is the vCenter instance. Ideally, the configuration would include fully connected hosts and datastores, but that is not a requirement; sDRS “understands” a partially connected environment and takes it into account when making recommendations.
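
Here’s a trivial sketch of what “understanding” partial connectivity amounts to: when choosing a target, only datastores the VM’s host can actually reach are eligible. Host and datastore names are, again, made up.

```python
# Toy sketch only: made-up host/datastore names. A partially connected
# layout simply shrinks the set of eligible migration targets.
connectivity = {
    "esx01": {"ds01", "ds02", "ds03"},
    "esx02": {"ds02", "ds03"},          # esx02 cannot see ds01
}

def eligible_targets(vm_host, cluster_datastores):
    """Only datastores the VM's current host can reach are valid targets;
    a fully connected layout makes this the whole cluster."""
    return cluster_datastores & connectivity[vm_host]

print(sorted(eligible_targets("esx02", {"ds01", "ds02", "ds03"})))  # ['ds02', 'ds03']
```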

In almost every other way, sDRS is equivalent to hDRS:

  • Affinity rules for maintaining two (or more) VMDKs on the same datastore.
  • Anti-affinity rules for separating two (or more) VMDKs across different datastores in the cluster.
  • Maintenance mode on a member datastore will cause sDRS to evacuate the VMDKs to other datastores in the cluster.
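
As a concrete illustration of the anti-affinity idea, here’s a toy check (a sketch, not VMware’s implementation) that flags a rule whenever two of its VMDKs end up on the same datastore.

```python
# Toy sketch only: flags an anti-affinity rule when two of its VMDKs land
# on the same datastore. Names and layout are made up.
placement = {                       # VMDK -> datastore
    "sql01.vmdk": "ds01",
    "sql01_logs.vmdk": "ds01",
}
anti_affinity_rules = [{"sql01.vmdk", "sql01_logs.vmdk"}]   # keep these apart

def violations(placement, rules):
    bad = []
    for rule in rules:
        stores = [placement[v] for v in rule if v in placement]
        if len(stores) != len(set(stores)):   # duplicates => same datastore
            bad.append(rule)
    return bad

print(violations(placement, anti_affinity_rules))   # the rule above is flagged
```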

Unlike with hDRS, creating a VM on sDRS-enabled datastores offers a wealth of options for suggesting a target for its VMDKs, all of which can be overridden.

Another way that sDRS differs from hDRS is how the automation is “tuned”: hDRS gives the administrator a single setting for how aggressively it manages the load on the hosts, while sDRS offers more granular control through its “stars rating.”

Finally, sDRS differs from hDRS by defaulting to non-automatic operation, where hDRS defaults to “full auto.” In fact, automation is actively discouraged for certain types of SAN environments: those where optimization and load balancing are already done “in the background” (e.g., EMC FAST or EqualLogic multi-member storage pools).
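
That manual-versus-automatic distinction boils down to whether recommendations are applied or merely queued for the admin; here’s a minimal sketch of the difference, with made-up names.

```python
# Toy sketch only: the difference between "recommend only" (the sDRS
# default, per the session) and full automation. Names are made up.
from enum import Enum

class AutomationLevel(Enum):
    MANUAL = "recommend only"
    FULLY_AUTOMATED = "apply recommendations"

def handle(move, level):
    if level is AutomationLevel.FULLY_AUTOMATED:
        return f"applying: {move}"
    return f"queued for admin approval: {move}"

move = "Storage vMotion sql01_logs.vmdk from ds01 to ds02"
print(handle(move, AutomationLevel.MANUAL))
# On arrays that rebalance in the background (FAST, EqualLogic pools),
# you'd likely leave this at MANUAL, as the session recommends.
```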

And, as one might expect, the implementation of sDRS means a number of new management views and performance graphs have been added to vCenter.

Sunday
Aug 28, 2011

Guest Post: Traveling to Conferences

In the IT field, continuing education is critical to both personal and professional success; professional success should also translate into success for the businesses for which we work. Education can come in many forms, from reading books and trade magazines to attending classes and seminars. Doing any of these things on-line is fine; getting out of the office and sitting down next to your peers has the added benefit of connecting you with folks who may be struggling with the same challenges as you. You may also find that you are doing the helping, instead of being helped.

Attending a good conference is a way to get a concentrated dose of seminars and classes with a chaser of networking thrown in for good measure. And traveling out-of-town gives you the additional opportunity to set aside your normal routine and get immersed in the environs.

I typically budget for a couple of conferences each year, and I’ve been going to the same ones for a while. Contrary to any belief that “the same old stuff” is presented at annual meetings, the ones I’ve attended have updated, fresh things to learn every time; and as a bonus, I tend to see the same folks coming back, year after year. Not only does that make for great networking and friendships, it helps validate that the conference is a good one.

VMworld is one of those conferences, and I’m back for my fourth year.

It’s early yet for the schedule as I write this post; registration isn’t open for another 7 hours. But in the 14+ hours that I’ve already been here, I’ve reconnected with guys I’ve known for a while and met a bunch of cool, new folks who share my passion for IT and VMware. I’ve already learned some things, too.

Now don’t get me wrong: I’m here for the tech. But in the course of relaxing before the business of the conference gets going, I joined a number of fellow attendees at a sushi bar. For the folks that know me, I’m sure you’re surprised. I’m a straight-up steak-and-potatoes kind of guy, and the thought of eating little bits of raw fish and seaweed is kind of nauseating.

So I followed my own advice: when you get out for a conference, you set aside the routine and push yourself to learn something new.

As it turns out, I learned that sushi isn’t all bad; in fact, the “spicy tuna” was pretty darned good.

So this conference is off to an auspicious beginning: my travel wasn’t marred by weather problems (my condolences to all those folks affected by Irene!), and I’ve learned something that I’d have never learned had I skipped the conference and stayed home.

(In the spirit of full disclosure: after the sushi bar, I went to another restaurant and made a filling meal out of a more substantial dish.)

Jim Millard is a member of the KC VMUG Leadership team. You can follow his exploits in his blog or on twitter.

Thursday
Aug 25, 2011

VMworld 2011

The 7th annual VMworld conference is set to kick off on Monday, August 29, 2011 in Las Vegas. The leadership for the Kansas City VMUG will be at the event, and we’ll be sending reports back as your “roving reporters.”

Tuesday
Aug 02, 2011

Spotlight: new VSA feature for vSphere 5

If you’re a small business without (or with limited) shared storage, VMware is targeting a new feature in vSphere 5 directly at you: vSphere Storage Appliance (VSA).

They have a brief intro to the appliance online, and while they are aiming at a very specific audience for this first version, I think the appliance has potential beyond that.

If you have hosts that you originally purchased with ESX in mind, and have a good amount of local storage available because you intended to chew a bunch of it up on the service console, the VSA may be a way to recapture that lost investment even if you have “traditional” shared storage available (e.g., SAN or NAS).