
The server virtualization dream, Part Two

Many IT professionals may wonder: if server virtualization is available today, why aren’t more IT shops taking advantage of this money-saving, resource-sharing solution? Because it is as new a concept now as hybrid vehicles were 10 years ago. Ten years from now, hybrid vehicles will no doubt be commonplace. However, many, if not most, of you don’t want to wait 10 or 20 years to virtualize your IT environment, so the following three steps are designed to get your company driving in the direction of autonomic computing.

Step 1 — Assess & Validate

Conduct an environmental assessment to define each department’s server processing needs. Deploy custom-configured resource and environmental auditing agents to poll all servers and identify current totals of CPU, memory, adaptors, file system capacity, and used and unallocated disk space (be sure to account for all archive file space, as it often takes up 30 to 40 per cent of all data storage, much of it in duplicate and triplicate form). During this same assessment, you would also identify CPU, memory, and adaptor usage peaks; read, write, and wait-cycle peaks; and all data that has not been accessed over extended periods of time.
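As a minimal sketch of what such an auditing agent might collect on each polled server, assuming Python with the third-party psutil library is available on the hosts; the snapshot fields and JSON output here are illustrative choices, not a standard format:

```python
import json
import time

import psutil  # third-party: pip install psutil


def collect_audit_snapshot(sample_seconds: float = 1.0) -> dict:
    """Gather a point-in-time resource snapshot for one server."""
    cpu_percent = psutil.cpu_percent(interval=sample_seconds)
    mem = psutil.virtual_memory()
    disk = psutil.disk_usage("/")
    return {
        "timestamp": time.time(),
        "cpu_percent": cpu_percent,
        "cpu_count": psutil.cpu_count(logical=True),
        "mem_total_bytes": mem.total,
        "mem_used_bytes": mem.used,
        "adaptors": list(psutil.net_if_addrs().keys()),
        "disk_total_bytes": disk.total,
        "disk_used_bytes": disk.used,
        "disk_free_bytes": disk.free,
    }


if __name__ == "__main__":
    # Poll once and print; a real agent would run on a schedule and
    # ship snapshots to a central collector so usage peaks and
    # long-idle data can be identified across the whole estate.
    print(json.dumps(collect_audit_snapshot(), indent=2))
```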

Step 2 — Rationalize and Critique

Critique your current server environment. Identify and consolidate processing-compatible applications onto single servers, or virtualize your existing multi-server environment to share processing attributes from a common pool. Only the second choice will spare you from purchasing a new server for every new application. As a result, you would increase utilization of your existing servers from a typical 10 to 20 per cent to a more effective and efficient 40 to 50 per cent. More importantly, you drastically decrease your “unexpected” outages while turning your one-to-one, limited-growth environment into a completely flexible and scalable solution, without throwing out your existing investment.

Identify all mission-critical servers. Leave those servers in a one-to-one relationship for your heavy-hitting applications such as SAP, PeopleSoft, Siebel, and large OLTP databases. Then consolidate your non-heavy-hitting applications (file and print, Exchange, SQL, etc.) and virtualize the remaining servers to form a common pool of hardware resources. Finally, configure the above-mentioned CPU, memory, and adaptor resource pool to be shared with the heavy-hitting servers and applications whenever it is needed.
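To make the split concrete, here is a rough sketch of how an inventory might be sorted into one-to-one keepers and pool candidates, with the candidates packed first-fit onto shared hosts by peak CPU demand. The 50 per cent budget, the sample workloads, and the packing heuristic are all assumptions for illustration, not a prescribed method:

```python
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    peak_cpu_percent: float   # peak CPU demand, as % of one host
    mission_critical: bool    # e.g. SAP, PeopleSoft, large OLTP


def plan_consolidation(workloads, host_cpu_budget=50.0):
    """Split workloads into one-to-one hosts and a shared pool.

    Pool candidates are packed first-fit onto virtual hosts,
    keeping each host at or below host_cpu_budget per cent.
    """
    one_to_one = [w for w in workloads if w.mission_critical]
    candidates = sorted(
        (w for w in workloads if not w.mission_critical),
        key=lambda w: w.peak_cpu_percent,
        reverse=True,
    )
    pool_hosts = []
    for w in candidates:
        for host in pool_hosts:
            used = sum(x.peak_cpu_percent for x in host)
            if used + w.peak_cpu_percent <= host_cpu_budget:
                host.append(w)
                break
        else:
            pool_hosts.append([w])
    return one_to_one, pool_hosts


workloads = [
    Workload("SAP", 70.0, True),
    Workload("file-print", 8.0, False),
    Workload("Exchange", 15.0, False),
    Workload("SQL-reporting", 20.0, False),
]
dedicated, pool = plan_consolidation(workloads)
print(len(dedicated), "dedicated hosts,", len(pool), "pooled hosts")
```

With figures like these, three lightly used servers collapse onto a single pooled host while SAP keeps dedicated hardware, which is how utilization climbs from the typical 10 to 20 per cent toward the 40 to 50 per cent target described above.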


Step 3 — Stop Investing

Look around. Imagine the amount of gas that would be saved if we all carpooled with at least one more person. Stop thinking the only solution is to buy another server; chances are you are not taxing the servers you already have. Start “carpooling” your data and available resources!

Tap into your existing hardware pool and reduce the number of servers you feel you have to buy simply to increase on-demand processing capacity. Odds are high that you don’t need to add a server to increase your CPU and/or memory horsepower. In fact, if your IT environment is typical, not only may you not need to add to your existing server pool, but chances are you are positioned to cascade many of your existing servers and reduce your related server budget for years to come . . . starting today.
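As a quick sanity check before any purchase, you can compare measured peak utilization against a target ceiling to see how much headroom the existing pool already has. This back-of-the-envelope sketch assumes the peak figures come from the Step 1 audit; the 50 per cent ceiling is an illustrative target, not a rule:

```python
def spare_capacity(peak_cpu_by_server, target_ceiling=50.0):
    """Return unused CPU headroom across the pool, expressed in
    'server equivalents' at the target utilization ceiling."""
    headroom = 0.0
    for peak in peak_cpu_by_server:
        headroom += max(0.0, target_ceiling - peak)
    return headroom / target_ceiling


# Example: five servers peaking at a typical 10-20 per cent utilization.
peaks = [12.0, 18.0, 10.0, 15.0, 20.0]
print(f"{spare_capacity(peaks):.1f} server-equivalents of headroom")
# -> 3.5 servers' worth of capacity before anyone buys new hardware.
```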


Autonomic Computing

In the very near future, many of today’s production-level servers will not only be virtualized, but will be configured for and capable of performing internal performance audits, or “automated health checks” (from I/O processing needs at the CPU and memory level to page and buffer-credit settings at the kernel level). They will automatically adjust and/or reconfigure themselves according to their immediate system needs, and will be able to virtually morph to meet almost all on-demand needs, all with either pre-designed human involvement (decision-making points, particularly when you are just starting your deployment) or, eventually, without any human intervention at all.

Virtualizing your servers will enable them to identify their own CPU, memory, and adaptor requirements. They will reach out to idle servers and borrow capacity in order to complete immediate tasks. Then, without human prompting, these virtualized servers will return the capacity when it is no longer needed.
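In rough terms, that borrow-and-return behaviour might look like the loop below. Everything here, the pool object, the high-water threshold, and the function names, is a hypothetical sketch of the idea, not any vendor’s actual API:

```python
import time


class CapacityPool:
    """Hypothetical shared pool of idle CPU capacity, in core units."""

    def __init__(self, idle_cores: int):
        self.idle_cores = idle_cores

    def borrow(self, cores: int) -> int:
        granted = min(cores, self.idle_cores)
        self.idle_cores -= granted
        return granted

    def release(self, cores: int) -> None:
        self.idle_cores += cores


def autonomic_loop(pool, measure_cpu_demand, local_cores=4, high_water=0.9):
    """Borrow cores from idle peers under load; return them when idle."""
    borrowed = 0
    while True:
        demand = measure_cpu_demand()      # cores currently needed
        capacity = local_cores + borrowed
        if demand > high_water * capacity:
            borrowed += pool.borrow(1)     # reach out to idle servers
        elif borrowed and demand < high_water * (capacity - 1):
            pool.release(1)                # give capacity back, unprompted
            borrowed -= 1
        time.sleep(5)                      # re-check on a fixed cadence
```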

The ultimate goal of server virtualization is autonomic computing: capacity on demand that provides an effective road map for managing your information systems . . . regardless of size, processing demands, resource needs, time of day or night, or human availability.

Autonomic computing may not be the solution to every problem from “soup to nuts”, but it certainly is a solution for most server environments from “coupe to trucks”.


About the writers:

Jaime J. Gmach – President, Evolving Solutions

Jaime co-founded Evolving Solutions in January 1996 after spending the previous ten years sharpening his entrepreneurial skills in various corners of the technology industry. He has served in roles ranging from Customer Engineer and Director of Technical Services to Sales Manager and, finally, President of Evolving Solutions.

Todd Holcomb — Director of Professional Services, Evolving Solutions

Todd has, for nearly 20 years, led emerging technology initiatives such as Server Virtualization at the enterprise level. He has worked for EMC, Sylvan Prometric, IBM Global Services, and spent three years running an IT start-up company specializing in data management and “on-the-road/on-the-fly” order processing.
