Bad economy good for client virtualization?

Could the stretched-out replacement cycles for desktop machines be a boon for client virtualization? In a recent Wall Street Journal Business Technology blog post, Ben Worthen noted that a survey from our print publication, CIO Magazine, found that companies will forgo the traditional three-year replacement cycle for desktop machines (both traditional desktops and notebook computers). According to the CIO survey, 46 percent of businesses will defer replacing machines for the next year or two.

Worthen says this will be a problem for people already suffering from overloaded machines, bogged down by big applications and too much data. I'm not so sure about that. Any machine purchased in the past three years should be capable of holding at least 2 GB of memory, which should be plenty for most people's workloads. On the data side, most three-year-old machines have at least 40 GB of storage, and probably more; it's hard to imagine most work environments requiring more than that.

However, I think he's on to something, not so much because of today's workloads but because of tomorrow's: specifically, the looming (semi-)forced shift to Vista or Windows 7. Both versions of the OS require a significantly larger hardware footprint than XP does; Vista's "Premium Ready" specification calls for 1 GB of memory, compared with the 128 MB Microsoft recommended for XP. Consequently, the operating system of the future and the hardware of the present are on a collision course, and that collision creates an enormous opportunity for client virtualization.

There are three ways that client virtualization can help out in a capital-constrained environment:

If the current hardware really is overloaded by heavyweight apps, presentation virtualization is a possibility. This technology puts the application back on the server and merely shunts the user interface out to the client machine; instead of hosting the entire application process and storing the data locally, the machine simply presents the interface.
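To make the split concrete, here's a toy sketch in Python. It's purely illustrative, not modeled on any real presentation protocol such as Citrix ICA or Microsoft RDP: the application logic runs entirely on the server, and the client only forwards input and displays what comes back.

    # Toy presentation-virtualization split (illustrative only).
    # The "application" runs on the server; the client holds no app
    # logic or data, just a screen and keyboard loop.
    import socketserver

    class AppSessionHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # All processing and state stay in the data centre.
            for line in self.rfile:
                text = line.decode().strip()
                reply = f"server processed: {text.upper()}\n"
                # Only UI output crosses the wire to the client.
                self.wfile.write(reply.encode())

    if __name__ == "__main__":
        # A client would simply connect, send lines, and print replies.
        with socketserver.TCPServer(("0.0.0.0", 9000), AppSessionHandler) as srv:
            srv.serve_forever()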

If the client hardware is insufficient to run Vista or Windows 7, move to a Virtual Desktop Infrastructure (VDI) environment, with a virtualization server hosting multiple desktops. There's no need to outfit endpoint machines with 4 GB of memory and 200 GB of storage. It's not even necessary to scale that level of resource onto the server.

For example, if you host 10 desktops on a single server, you don't need 40 GB of memory. Because end-user machines are very spiky in their usage and, frankly, under-utilized 99 percent of the time, a smaller amount of resource is required on the server; in other words, the resources can be multiplexed. This is financially savvy for two reasons: (1) due to the multiplexing effect, you don't need to buy as much total resource capacity as you would if you were provisioning individual endpoints, each with sufficient capacity to support a Vista environment; and (2) buying in bulk for servers is, up to a certain point, less expensive than buying the same amount of capacity for individual end devices; in essence, you're paying wholesale rather than retail (so to speak) for hardware capacity. There is a limit to this wholesale-versus-retail tradeoff, though: once you start putting very large memory modules into servers, prices escalate steeply.
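A back-of-the-envelope calculation shows the multiplexing effect. The utilization and headroom figures below are assumptions for illustration, not vendor sizing guidance:

    # Rough VDI memory sizing (all figures assumed for illustration).
    desktops           = 10
    ram_per_desktop_gb = 4     # what each standalone Vista PC would need
    avg_active_share   = 0.25  # fraction of a desktop's memory hot at once
    headroom           = 1.5   # safety margin for usage spikes

    dedicated   = desktops * ram_per_desktop_gb
    multiplexed = dedicated * avg_active_share * headroom

    print(f"Provisioned per endpoint: {dedicated:.0f} GB in total")
    print(f"Multiplexed on one host:  {multiplexed:.0f} GB in total")
    # Provisioned per endpoint: 40 GB in total
    # Multiplexed on one host:  15 GB in total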

For those power users who can't (or won't) share a server, there's a different flavor of VDI available. You still host the user's machine in the data centre, but dedicate a blade server to it. That's a significant investment in hardware dedicated to a single user, but the design of blade systems still reduces the overall outlay: while memory scales linearly with each dedicated blade, economies of scale remain available for shared resources like power supplies, network connections, and cooling.

Of course, all three options still require some kind of interface equipment at the end-user location; after all, there has to be a screen to look at and a keyboard to type on. For the first option, the current hardware can be left in place. For the latter two options, an existing desktop can be used; however, a thin client is also a possibility. This presents the intriguing opportunity to implement VDI for current users on their existing desktop machines; when new users join the company (or an existing user's desktop needs to be replaced), they receive a thin client device instead. The cost differential between a fully scaled desktop and a thin client can be very large; I've heard $200, in single-unit quantities, quoted for thin clients.
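Here's a rough per-seat comparison using that $200 thin-client figure; the desktop and server prices are assumptions chosen purely for illustration:

    # Per-seat refresh cost, traditional vs. VDI (prices assumed,
    # except the $200 thin-client quote mentioned above).
    users_per_server  = 10
    desktop_price     = 800.0   # assumed fully scaled Vista-capable PC
    thin_client_price = 200.0   # single-unit quote from the article
    server_price      = 4000.0  # assumed server hosting 10 desktops

    vdi_per_seat = thin_client_price + server_price / users_per_server
    print(f"Traditional refresh: ${desktop_price:.0f} per seat")
    print(f"VDI refresh:         ${vdi_per_seat:.0f} per seat")
    # Traditional refresh: $800 per seat
    # VDI refresh:         $600 per seat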

Of course, there are other factors to be considered. The capacity of the network needs to be examined to see whether it can handle the display traffic between the data centre and the desktop locations. Also, the power and cooling load formerly dispersed across end-user locations now concentrates in the data centre, which must have the capacity to absorb it.
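On the network question, a quick feasibility estimate is easy to run. Per-user bandwidth for display protocols varies widely with workload, so treat the figures below as assumptions to be replaced with real measurements:

    # Back-of-the-envelope display-traffic estimate (figures assumed).
    users              = 200
    kbps_per_user      = 500   # assumed average display-protocol stream
    peak_concurrency   = 0.7   # fraction of users active at peak
    link_capacity_mbps = 100   # assumed link between offices and data centre

    peak_mbps = users * peak_concurrency * kbps_per_user / 1000
    print(f"Estimated peak load: {peak_mbps:.0f} Mbps on a "
          f"{link_capacity_mbps} Mbps link "
          f"({peak_mbps / link_capacity_mbps:.0%} utilization)")
    # Estimated peak load: 70 Mbps on a 100 Mbps link (70% utilization)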

On the other hand, client device reliability should go up significantly with centralized administration, and on-site visits by operations staff to end-user locations should drop as a result.

I don't know that I've seen any good case studies on how the numbers pan out for a real-world desktop virtualization implementation. Intuitively, however, desktop virtualization seems like it must be less expensive to run. It's often the case, though, that a system which made economic sense in its early phase no longer makes sense in a later one, and inertia keeps it in place because "it's the way it's always been done." Desktop computing made a ton of sense early on, when it delivered computing ability to end users at a fraction of the cost of mainframes, and it still makes sense as a way to let end users perform their own computing tasks.

It certainly has fostered innovation, because many, many apps make sense only in an end-user computing environment (the Web, anyone?). But that doesn't mean that the only way to achieve those benefits in the future is to plunk an expensive, extremely powerful computing device in front of the person interacting with it.

Smart CIOs will be looking at client virtualization with open eyes, particularly in this economic environment. Maybe those desktops could be stretched for two or three more years.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.
