System delays and unresponsiveness are not merely inconvenient in the enterprise; they are extremely costly in terms of lost productivity and the help desk and IT time required for reactive maintenance.
Servers and workstations running the various Windows operating systems, including the latest, Windows Server 2003, are being deployed more than ever within the enterprise. However, an often-overlooked side effect of the Windows file system, file fragmentation, causes an overall degradation in system performance and reliability. Downtime and slowdowns are unacceptable, particularly when they can be easily remedied with automated defragmentation software.
This white paper covers the performance and potential reliability implications of file fragmentation, examines its associated costs, and investigates defragmentation as an alternative to unnecessary or premature hardware upgrades.
A fragmented disk on a Windows system costs an enterprise in more ways than lost performance.
Most Windows systems managers, as well as a growing number of users, know that fragmented files on disk cause an overall degradation in system performance. What is only now becoming better known, however, is that fragmentation can occur not only in the files and data on a drive, but also in the file system itself. This creates common reliability and stability issues that demand IT time and attention, including long or aborted boot times, slow or aborted backups, file corruption, system and program hangs, system freezes, and other system errors.
Not only can effective, routine use of defragmentation technology help resolve these issues; it can also produce performance gains comparable to those of costly system upgrades. Enterprises can further realize considerable reductions in IT total cost of ownership (TCO) by using an automated, networkable defragmenter.
Why does disk fragmentation occur?
Disk fragmentation begins as soon as the operating system itself and applications are loaded onto a computer. A basic explanation of how file fragmentation develops follows.
When a file is first created and saved, it is laid down on the hard disk in contiguous clusters. When the file is later read, the head in the disk drive moves directly from one cluster to another on a single track. The head stays in one place over that track and reads the file as the disk moves beneath it. As more files are written to the disk, they are also laid out in contiguous clusters.
As files are erased, their clusters are made available again as free space. Eventually, some newly created files become larger than the remaining contiguous free space.
These files are then broken up and randomly placed throughout the disk. As the file creation, editing, and deleting processes continue, fragmentation becomes more and more pronounced, exacting a progressively serious toll on system performance.
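The create-erase-create cycle described above can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical first-fit allocator on a tiny 14-cluster disk, not the actual allocation logic of NTFS or FAT; the file names and sizes are invented for illustration:

```python
# Hypothetical first-fit allocator: each list entry is one disk cluster,
# '.' means free.  This is a teaching sketch, not a real file system.
DISK_CLUSTERS = 14
disk = ['.'] * DISK_CLUSTERS

def allocate(disk, name, size):
    """Fill free clusters left to right, splitting the file if needed.

    Returns True if all `size` clusters were placed."""
    placed = 0
    for i, c in enumerate(disk):
        if c == '.' and placed < size:
            disk[i] = name
            placed += 1
    return placed == size

def delete(disk, name):
    """Erase a file: its clusters become free space again."""
    for i, c in enumerate(disk):
        if c == name:
            disk[i] = '.'

# Files A, B, and C are written one after another, each contiguous...
allocate(disk, 'A', 4)
allocate(disk, 'B', 4)
allocate(disk, 'C', 4)
# ...then B is erased, leaving a 4-cluster hole in the middle of the disk.
delete(disk, 'B')
# A new 6-cluster file D no longer fits any single contiguous free run,
# so it is split across the hole and the free clusters at the end:
allocate(disk, 'D', 6)
print(''.join(disk))  # AAAADDDDCCCCDD  (D occupies two separated runs)
```

The final cluster map shows file D broken into two pieces, exactly the "broken up and randomly placed" condition the text describes; on a real disk, repeated over thousands of files, this is how fragmentation compounds.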
How system performance and reliability suffer because of fragmentation
Without file fragmentation, large amounts of disk space would remain unutilized. Disk storage capacity is greatly expanded by allowing files to be split into smaller pieces that can be placed on whatever clusters are available. If the file fragments fall into largely contiguous clusters, there is minimal performance impact. But if fragments are placed in non-contiguous blocks, the result is a significant degradation in system performance and accessibility. Why? The disk's read/write head must jump from track to track to find all the pieces of the file and reassemble them into a single file. This produces disk latency and overall system slowdowns, which can in turn lead to common system reliability issues that demand help desk and troubleshooting resources to resolve.
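To make the cost of non-contiguous placement concrete, the sketch below counts the separate runs ("fragments") a file occupies in a cluster map; each run beyond the first implies at least one extra head seek when the file is read. The `fragment_count` helper and the sample cluster map are hypothetical, for illustration only:

```python
def fragment_count(disk, name):
    """Count the non-contiguous runs of clusters a file occupies.

    A contiguous file has 1 fragment; each additional fragment means
    the read/write head must seek to another part of the disk."""
    fragments = 0
    prev = None
    for c in disk:
        if c == name and prev != name:
            fragments += 1  # a new run of this file's clusters begins here
        prev = c
    return fragments

# Sample cluster map: file D is split into two runs, file A is contiguous.
disk = list('AAAADDDDCCCCDD')
print(fragment_count(disk, 'D'))  # 2 -> one extra seek to read D
print(fragment_count(disk, 'A'))  # 1 -> contiguous, no extra seeks
```

Defragmentation, in these terms, is simply the work of driving every file's fragment count back toward one.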
Although many companies acknowledge that file fragmentation is a fact of life on most modern distributed systems, few are aware of just how much it is costing the bottom line in terms of lost performance and downtime.
Some companies, unaware of the impact, are likely to attempt to resolve these situations with expensive acquisitions of higher-performance hardware.
However, a hardware upgrade only masks the problem temporarily; it is just a matter of time before fragmentation affects the new machines as well. Therefore, an enterprise can significantly decrease IT total cost of ownership (TCO) by instituting automatic defragmentation across the network, rather than relying exclusively on more costly hardware upgrades to keep systems stable and at optimum performance levels.
The hidden benefit of defragmentation – Forestalling unnecessary hardware upgrades
With fragmentation exerting such a severe toll on system performance, it’s quite likely that many organizations have initiated hardware upgrades unnecessarily. By using an enterprise defragmentation utility, it is possible to achieve performance gains that meet or exceed many hardware upgrades. From a cost standpoint alone, this is an attractive proposition.
Is there an alternative to installing defragmentation software? Yes, though it is a poor investment of time and resources. The user or system administrator would have to dump the entire contents of each disk onto a backup tape or spare disk and then reload the contents onto the disks. Although this does reduce fragmentation, unlike on earlier mainframes and minicomputers the reduction is not complete, and the method is time-consuming. The cost of an administrator's time alone would make this approach infeasible, not to mention the time during which users would be denied access to the system. Further, it is only a short-term fix, as disks will again become thoroughly fragmented within a relatively short period.
In part two, IDC explores manual versus network defragmentation and how it affects the bottom line.
Frederick W. Broussard is an analyst at IDC Corp. in Framingham, Mass.