
vSphere 5 boasts host of updates

vSphere 5.0, the latest iteration of VMware’s “Cloud Operating System,” boasts a wealth of updates, including new tools to manage fleets of VMs and vast tiers of virtualized, vMotion-enabled storage links.

Since storage powerhouse EMC owns a significant chunk of VMware, we had always wondered when the storage dimension would be exploited more heavily, and now we have the answer. But the storage-related enhancements are by no means EMC-specific.

VMware’s feature list comes at a per-processor price that ranges from as low as $85 to $3,495, and that doesn’t cover the cost of Acceleration Kits. What you get with vSphere 5 is a tiered feature set that includes stronger storage virtualization options, large and wide hardware support (in terms of vCPUs per VM, memory, and storage), and new capacity to roll out and “life cycle” VMs at a very fast pace.

Server sprawl is kept in check by the way vSphere controls VMs and their storage as objects.

With so many tiers of feature support, it can get confusing; VMware had to send us a cheat sheet so that we could keep track of what was available in which type of license. The gradations of licensing will require a spreadsheet analysis for most organizations.

IP address issues addressed
We performed both a bare-metal install and an upgrade of our existing VMware vSphere 4 installation. Our small network operations center (100+ cores in six test servers, plus a Dell Compellent SAN) isn’t the best place to hammer vSphere 5, but we were able to give it a bit of a workout. (Note that the classic ESX hypervisor has been retired in vSphere 5; it’s ESXi only.)

vCenter, vSphere’s central control app, can now be run as a VM appliance if desired; the appliance runs SUSE Linux and is lightweight. Other executables still have Windows equivalents if you need them; we didn’t.

The initial upgrades went smoothly, save for the fact that the vSphere installers misidentified the name of our Active Directory domain, a small problem that had us scratching our heads.

There are a number of required steps to upgrade VMware’s virtual switch appliance, and the new strategy removes a lot of the IP addressing problems that existed in the prior release.

IP addressing can be a problem for administrators when moving VMs around, especially from facility to facility, as each site is likely to have its own location-specific addressing and allocation needs.

The prior version of vSphere, while allowing for a bit of location-diverse addressing, didn’t have strong multi-site transparency. The new virtual switch takes care of a lot of the misery for both IPv4 and IPv6 addressing schemes. It’s not quite ideal, and some administrative functions must be done outside of the appliance, but its visuals allow a clearer cross-site understanding of addressing needs and allocations.

Thin-provisioning options
We used both our lab and our NOC resources to launch VMs of varying sizes and operating system types: mostly Windows Server 2003/2008 R2 plus Red Hat, CentOS, and Ubuntu Linux. There was no mystery. VM conversions were so easy as to barely merit mention, save for some important new characteristics: we now had up to 32 vCPUs per virtual machine (at additional cost for the advanced licensing options), and could oversubscribe (if so configured) a tremendous amount of memory and storage.

It’s possible to thin-provision (oversubscribe, or in actuality under-allocate) almost every operational characteristic of a VM. Doing so has benefits, depending on the settings used, and allows vSphere to make recommendations, or simply move VMs from one server to another, to manage actual needs rather than initial guesses.
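
As a concrete illustration of what thin provisioning looks like when scripted against the vSphere API, here is a minimal sketch using the pyvmomi Python bindings. The vCenter address, credentials, the VM name “test-vm-01”, and the default SCSI controller key of 1000 are all placeholder assumptions; the disk is declared at 40GB, but the datastore consumes space only as the guest actually writes.

```python
# Sketch: attach a thin-provisioned 40GB disk to an existing VM via pyvmomi.
# Host, credentials, and the VM name are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the target VM with an inventory container view (simple linear search).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm-01")

# A flat VMDK backing flagged thinProvisioned=True: capacity is promised,
# not pre-allocated, so the datastore fills only as the guest writes blocks.
backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
    diskMode="persistent", thinProvisioned=True)
disk = vim.vm.device.VirtualDisk(
    backing=backing,
    capacityInKB=40 * 1024 * 1024,   # 40GB, expressed in KB
    controllerKey=1000,              # assumed default SCSI controller
    unitNumber=1)
change = vim.vm.device.VirtualDeviceSpec(
    device=disk,
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation="create")

vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))
Disconnect(si)
```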

In doing so, VMware has also met a checklist item with oversubscription capabilities for those needing multi-tenancy options, as thin provisioning permits “elbow room” that can later be physically provisioned when tasks and campaigns mount up.

In other words, less needs to be known up front about actual server behavior, as VMware can be set to move VMs around to match their execution needs, even when those needs have been capped or throttled by an administrator. Using set guidelines, vSphere will refit VMs onto servers to adjust workloads and demands. Control over which VMs go where can be very highly defined and rigid, but fitting VMs onto hardware servers based on their performance characteristics takes a little time, as it’s based on accumulated observations of behavior.
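
For readers who would rather script it than click through the client, here is a minimal sketch, again using the pyvmomi bindings, of switching a cluster to fully automated DRS. The vCenter address, credentials, and the cluster name “noc-cluster” are placeholder assumptions, and the vmotionRate of 3 is simply a middle-of-the-road migration threshold rather than anything we recommend.

```python
# Sketch: enable fully automated DRS on a cluster via pyvmomi.
# The cluster name "noc-cluster" and vCenter details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "noc-cluster")

# defaultVmBehavior is one of "manual", "partiallyAutomated", "fullyAutomated";
# vmotionRate (1..5) tunes how readily recommendations are acted on.
drs = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior="fullyAutomated",
    vmotionRate=3)
cluster.ReconfigureComputeResource_Task(
    vim.cluster.ConfigSpecEx(drsConfig=drs), modify=True)
Disconnect(si)
```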

It took nearly a day before vSphere started to move things around. We could have made it more sensitive (so it would adjust more quickly), but we wanted to see what it would do on its own.

We noted that several improvements have been made to online error messages and to VMware’s notoriously obtuse docs as well. That said, VMware’s UIs are difficult when accessed through a browser, and error messages can sometimes be missing entirely.

We ascribed part of this to the fact that it was a brand-new release, yet we were occasionally frustrated with web-based interaction with the new appliance. We noted that these interactions allowed us to remap ports, and used SSL where appropriate. Overall, there was a stronger security feel.

We tested fault tolerance and both automatically controlled and manually suggested VM movement. As we launched certain VMs, we forced them to run make-work applications so we could analyze their CPU use. VMware picks up on CPU with a bit more sensitivity, we found, but other behavioral characteristics can force a move, too.

We decided to attack one Linux app with lots of artificial IP traffic. Almost like a waiter moving customers in a restaurant, the VM was moved across to another server on the same VLAN, one whose traffic was essentially nil. Downtime was about four seconds or less in our trials.
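
That downtime figure came from watching the service itself rather than the hypervisor. A crude way to reproduce the measurement is to hit a TCP port on the guest in a tight loop during the migration and record the longest gap between successful connections; the sketch below assumes a placeholder guest address and that the guest answers on SSH.

```python
# Sketch: estimate VM downtime during a live migration by timing the longest
# gap between successful TCP connects. Host and port are placeholders.
import socket
import time

HOST, PORT = "192.0.2.10", 22   # hypothetical guest address and SSH port
last_ok = time.time()
worst_gap = 0.0

try:
    while True:
        try:
            with socket.create_connection((HOST, PORT), timeout=1):
                now = time.time()
                worst_gap = max(worst_gap, now - last_ok)
                last_ok = now
        except OSError:
            pass                 # refused or timed out: guest unreachable
        time.sleep(0.2)
except KeyboardInterrupt:
    print(f"longest observed gap: {worst_gap:.1f}s")
```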

Advanced storage features
More interesting, however, is how our Dell Compellent SAN resources can be used, and we tested these resources without the soon-to-be-delivered glue software from Dell specifically for VMware vSphere 5. These resources are also potentially expensive to use, depending on needs and the license type chosen:

High Availability, vMotion (move those VMs around), and Data Recovery are in the Standard and Advanced versions, ranging from $395 to $995 and limited to eight vCPUs per VM.

Add in the Virtual Serial Port Concentrator (a Luddite but useful feature), Hot Add (CPUs, memory, vDisks), vShield Zones (security zoning), Fault Tolerance (failure detection and failover), Storage APIs for Array Integration, Storage vMotion (move your VMs and/or storage live), plus the Distributed Resource Scheduler and Distributed Power Management, and you’ve hit the vSphere 5 Enterprise license. That’s $2,875 per processor and limited to eight vCPUs.

If you go all the way to vSphere Enterprise Plus at $3,495 per processor, you can graduate to 32 vCPUs per VM, add the aforementioned Distributed Switch and I/O controls for network and storage, establish Host Profiles and Profile-Driven Storage, use Auto Deploy (intelligent, automagic host roll-out), and use the Storage Distributed Resource Scheduler (Storage DRS).

For mission-critical applications, Storage DRS may be worth the price of admission for some. When a compatible array is used, one can group disk resources as an object and move the whole object (active disks and all) to another part of the array. This means that aggregated infrastructure can be moved wholesale, without outage, as an object, perhaps guided by administratively selected fault detection or simply the need for maintenance.
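
Under the hood, the basic building block is the same relocate call that drives Storage vMotion: point a running VM’s files at a different datastore and let the platform copy the active disks beneath it. Here is a minimal pyvmomi sketch; the vCenter address, credentials, and the VM and datastore names are placeholder assumptions, and the edition must be licensed for Storage vMotion as noted in the list above.

```python
# Sketch: live-migrate a VM's disks to another datastore (Storage vMotion)
# via pyvmomi. VM and datastore names are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find(vim.VirtualMachine, "test-vm-01")
target_ds = find(vim.Datastore, "compellent-tier2")

# A RelocateSpec with only a datastore set moves the VM's storage while it
# runs; adding a host or resource pool would combine compute and storage moves.
spec = vim.vm.RelocateSpec(datastore=target_ds)
vm.RelocateVM_Task(spec)
Disconnect(si)
```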

As our Dell Compellent SAN lacked the new drivers, we were unable to perform the heavy lifting promised. You’ll need a high-performance SAN transport to move the data around, Fibre Channel at minimum, though other interfaces like InfiniBand ought to do well, especially for disparate or thickly populated array-object movement. Protocols like iSCSI (unless over a dedicated and unfettered 10Gbps link) are unlikely to be useful unless the transaction times will be small (i.e., not much data to move).

Yet at the bottom end of things, VMware’s High Availability still works marvelously. Moving VMs from host to host, and back and forth between the NOC and the lab, worked flawlessly, if somewhat encumbered by the erratic timing of our Comcast transport to the NOC. All of VMware’s competitors can now do this trick as a basic capability, but it’s part of VMware’s DNA, and it shows.

From a practical perspective, most of VMware’s competitors can do these minimums, but some of the competition suffers from OS version/brand fixation and doesn’t have egalitarian guest support. Others that have egalitarian OS support have weak storage management and weak overall virtualized data center/cloud support.

VMware’s vSphere covers all of the bases, as close to the state of the art as any production software we’ve seen. It’s still wickedly expensive, and it’s the one to beat.
