The server virtualization dream: Part 1

Most of us wouldn’t dream of going on vacation with nothing more than the clothes on our back and a small handbag. Taking along a huge suitcase, however, is not only heavy to carry but also means waiting in the check-in line for some time.

Unfortunately, only in a dream world can a handbag be bigger on the inside than on the outside, to carry all of your vacation belongings. And in these same dreams, huge suitcases magically morph down to the size of handbags, saving holiday-makers the hassle of carrying and checking in huge, cumbersome luggage.

IT professionals dream of robust networking environments that exist in this same dynamically expanding and contracting dream world. They want their networking environments to be capable of processing weekly payroll, end-of-month commissions, and end-of-year accounting (A/R, A/P, and General Ledger “close-outs”), and at the same time be able to maintain their daily ERP, CRM, and email systems. Most servers, even in extreme conditions, rarely reach maximum processing power. In fact, in a typical workday environment, most servers (particularly Windows servers) rarely surpass a ten per cent utilization rate.

The Reality

Luckily, at least for IT professionals, the dream world of server “morphing”, or virtualization in a real-world setting, is becoming a reality.

Although most companies are not taking advantage of virtual server expansion and contraction capabilities today, it is possible to “borrow” CPU and/or memory capacity from other servers that are not currently being “taxed”, and then return that same capacity to its original “owners” in its original state. Imagine servers spoofed into thinking they have unlimited CPU and memory capacity, never again exceeding their processing and workload thresholds!

Engineers at Evolving Solutions, Inc., a Data Disaster Recovery, Storage Architecture, and Business Continuity solutions provider, predict that by the end of 2004 or early 2005, servers that auto-monitor and auto-adjust for Data On-Demand requirements will be appearing frequently in larger IT shops. Servers able to auto-adjust to continuously changing CPU and memory needs will become as widely accepted as the current “cascading servers” methodology. This is more than simply a foray into virtualization; it is a complete leap into “autonomic computing”.

Local Server Virtualization

Imagine employees accessing large files or applications such as Visio or AutoCAD from a local server. The processing power needed for multiple employees to open large files located on a single server can push CPUs and/or memory past pre-defined thresholds, which are typically set at 70 to 80 per cent. When servers exceed those thresholds, the lack of processing power drastically inhibits data and document retrieval speeds across your LANs and WANs. This often results in hard dollar costs, stemming from replacing smaller servers with larger ones or from clustering the existing servers, and soft dollar costs in the form of lost employee productivity. Grow this scenario into an Online Transaction Processing (OLTP) environment and watch hard dollars disappear the same way baseball caps fly from open convertibles.
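
To make the threshold idea concrete, here is a minimal monitoring sketch in Python. It assumes the 70 to 80 per cent range described above; the psutil library is real and widely available, but the threshold values, the server role, and the alert action are illustrative placeholders rather than part of any particular virtualization product.

# Minimal utilization-check sketch (illustrative only; not from any vendor product).
# Assumes the 70 to 80 per cent thresholds described above. The alert action is hypothetical.
import psutil

CPU_THRESHOLD = 80.0      # per cent; upper end of the range cited above
MEMORY_THRESHOLD = 80.0   # per cent

def over_threshold() -> bool:
    """Return True when this host exceeds either utilization threshold."""
    cpu = psutil.cpu_percent(interval=1)     # sample CPU utilization over one second
    mem = psutil.virtual_memory().percent    # current memory utilization
    return cpu > CPU_THRESHOLD or mem > MEMORY_THRESHOLD

if __name__ == "__main__":
    if over_threshold():
        # In a virtualized environment this is the point at which capacity
        # would be borrowed from an idle server; here we simply raise a flag.
        print("Utilization threshold exceeded - additional capacity needed")
    else:
        print("Utilization within normal limits")

In practice, a check like this would run continuously on every server, feeding the layer that decides when capacity should be shifted rather than simply printing a message.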

Take for example “Local Books”, a small fictional company that sells books written by local authors from their bookstore on Main Street. The first day they launched their online shopping venue, they received 30,000 hits and hundreds of attempted transactions. Because they had not effectively planned for this activity, they found their OLTP and backend database server(s) significantly taxed.

Wait cycles increased because the CPUs and/or memory were constantly operating beyond an 80 per cent utilization threshold. Spikes in wait times meant Web site visitors and online buyers were negatively affected. All of this happened while their SQL, File and Print, and Exchange servers ran idle at less than 10 per cent utilization.

Unfortunately, this type of scenario is typical within many IT shops. While they generally plan for system failure, they often forget to plan for success and system scalability. If “Local Books” had had a plan in place to handle additional on-demand ordering, their systems would have been ready for the drastic increase in online orders and would not have dropped or lost any transactions.

Had “Local Books” set up a virtualized server environment, using products like VMware and/or IBM’s Orchestrator, their OLTP server never would have reached the 70 to 80 per cent processing threshold. The server would have dynamically accessed available resources from the SQL, File and Print, and/or Exchange servers, temporarily borrowing processing power to complete and book order transactions during peak ordering periods, thus eliminating wait times. Once the capacity was no longer needed, the OLTP server would have politely returned it to the respective servers. The “Local Books” brand equity would have remained intact, and a hefty profit would have been made on the opening day of the online store.
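
The borrow-and-return behaviour described above can be sketched as a simple control loop. The following Python is purely illustrative: the server names echo the “Local Books” example, but the classes, thresholds, and capacity model are hypothetical simplifications, not VMware or IBM Orchestrator code.

# Illustrative borrow-and-return sketch; NOT VMware or IBM Orchestrator code.
# The server names echo the "Local Books" example; the capacity units,
# thresholds, and utilization model are hypothetical simplifications.
from dataclasses import dataclass

BUSY_THRESHOLD = 80.0   # per cent utilization that triggers borrowing
IDLE_THRESHOLD = 10.0   # per cent utilization below which a server may lend capacity

@dataclass
class Server:
    name: str
    cpu_units: int        # arbitrary units of processing capacity owned by this server
    utilization: float    # current per cent utilization

def borrow(oltp: Server, donors: list) -> dict:
    """Borrow one CPU unit from each idle donor while the OLTP server is over threshold."""
    loans = {}
    for donor in donors:
        if oltp.utilization <= BUSY_THRESHOLD:
            break
        if donor.utilization < IDLE_THRESHOLD and donor.cpu_units > 1:
            donor.cpu_units -= 1
            oltp.cpu_units += 1
            loans[donor.name] = loans.get(donor.name, 0) + 1
            # Simplified model: spreading the same load over more units lowers utilization.
            oltp.utilization *= (oltp.cpu_units - 1) / oltp.cpu_units
    return loans

def give_back(oltp: Server, donors: list, loans: dict) -> None:
    """Return every borrowed unit to its original owner once the peak has passed."""
    for donor in donors:
        units = loans.get(donor.name, 0)
        donor.cpu_units += units
        oltp.cpu_units -= units

oltp = Server("oltp", cpu_units=4, utilization=95.0)
donors = [Server("sql", 8, 6.0), Server("file-print", 4, 4.0), Server("exchange", 8, 8.0)]
loans = borrow(oltp, donors)          # peak ordering period: borrow spare capacity
print("borrowed:", loans, "oltp units now:", oltp.cpu_units)
give_back(oltp, donors, loans)        # demand subsides: return capacity to its owners
print("after return, oltp units:", oltp.cpu_units)

In a real deployment, the virtualization layer would make these decisions continuously and transparently; the sketch is only meant to show the shape of the cycle, in which spare capacity is taken from idle servers during a peak and handed back once demand subsides.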

To put it in automotive terms, a proper server virtualization environment would have allowed the “Local Books” OLTP server to virtually grow, or “morph”, from a two-seater to a four-seater, from a four-seater to a station wagon, and, if needed, from a station wagon to a more powerful truck. And when the extra capacity was no longer needed, the truck would simply shrink back down to a two-seater again.

Remote Server Virtualization

Assume “Local Books” grew to become “National Books”, but this time they had a plan for exponential growth. They implemented a virtualized server environment, reduced wait times, and as a result successfully processed more online orders than they could initially have fathomed. Now the “National Books” website receives millions of hits and processes tens of thousands of online transactions and book orders each day.

Without a virtualized hardware resource environment, each time order processing reached its capacity, it would either slow down process requests, create significant “time out” errors, or, worst of all, halt the National Books website altogether. The additional “unplanned” traffic on their servers could have led to data corruption, lost sales, and diminished credibility for the company brand.

But because “National Books” chose to implement a virtualized server environment, their primary applications could share resources with other (secondary) applications, such as Exchange with J.D. Edwards, SQL with Siebel, SAP with Tivoli, and so forth. Sales and online Web site transactions would be conducted without slowing down the network, resulting in increased per-transaction profitability and brand awareness.

What this means is that “National Books” would not have to add servers each time they ran a special promotion or released a new “Best Selling Author” title. As a result, they would save substantial dollars, because a virtualized server environment would enable them to increase their “on-demand” CPU and memory resources without spending additional hard dollars. The processing horsepower of “National Books” would be guaranteed no matter how large the demand.

Please make sure to check out next week’s CDN This Week for part two, which will tell you the first steps to take in server virtualization.

About the Writers

Jaime J. Gmach – President, Evolving Solutions

Jaime Gmach co-founded Evolving Solutions in January of 1996 after spending the previous ten years sharpening his entrepreneurial skills in various elements of the technology industry. He previously served in roles ranging from Customer Engineer and Director of Technical Services to Sales Manager and finally to President of Evolving Solutions. Jaime’s strong technical perspective comes from years of face-to-face interaction with clients to design and implement their desired business solutions.

Todd Holcomb — Director of Professional Services, Evolving Solutions

Todd has, for nearly 20 years, led emerging technology initiatives such as Server Virtualization at the enterprise level. He has acquired a deep understanding of mass storage (SAN, NAS, and CAS) environments from former employers including EMC, Sylvan Prometric, and IBM Global Services, and from three years running an IT start-up company specializing in data management and “on-the-road/on-the-fly” order processing.
