One of the problems with today’s data volumes is, well, today’s data volumes. There’s so much of it, it’s virtually impossible to sort out what’s mission-critical, what’s important, and what’s merely there.
Without those distinctions, there are two choices: back up everything (if you can) and hope it can be restored in a reasonable amount of time if that ever becomes necessary, or take a stab in the dark and pray that, if something goes wrong, you can retrieve what you need to keep the business afloat.
Neither is a particularly viable option in case of disaster.
So, how can you make sure that you’re adequately protecting your company’s crown jewels? The first step is figuring out what they are. And that requires data classification.
Data classification is part of the information lifecycle management process. Once implemented, it lets you determine which data needs protecting, how strong those protections need to be, and how important the data is to your business. For example, intellectual property that differentiates your offerings from everyone else's is a major competitive advantage; if it's lost, the business could go under. A memo about vacation days, on the other hand, could be painted on the building's wall without significant impact, and would not be missed if it disappeared in a disk crash.
Common data classification schemes consist of three categories. The first, variously known as Confidential or Restricted, contains data whose loss would be catastrophic to the organization: personally identifiable information, credit card numbers, medical data, authentication credentials such as user names, passwords, and encryption keys, or the aforementioned intellectual property.
The second category (Sensitive, Internal Use Only, or Private) is of medium sensitivity. It consists of things like most email and most business data that isn't confidential.
The third category is Public: anything, such as a press release or marketing material, that has been deliberately released to the public.
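The three-tier scheme above amounts to a small policy table: a ranked set of labels, each with its own handling rules. A minimal sketch in Python follows; the level names come from the scheme described here, but the handling rules (encryption, backup priority) are illustrative assumptions, not a prescribed policy.

```python
from enum import IntEnum

class Classification(IntEnum):
    """Three-tier scheme; a higher value means more sensitive data."""
    PUBLIC = 1        # press releases, marketing material
    INTERNAL = 2      # most email and routine business data
    CONFIDENTIAL = 3  # PII, credentials, keys, intellectual property

# Example handling policy keyed by classification (rules are assumptions).
HANDLING = {
    Classification.PUBLIC:       {"encrypt_at_rest": False, "backup_priority": "low"},
    Classification.INTERNAL:     {"encrypt_at_rest": True,  "backup_priority": "medium"},
    Classification.CONFIDENTIAL: {"encrypt_at_rest": True,  "backup_priority": "high"},
}

def handling_for(level: Classification) -> dict:
    """Look up the handling rules for a given classification level."""
    return HANDLING[level]
```

Because the levels are an `IntEnum`, they compare naturally, so a backup or DLP tool can ask "is this at least Internal?" with a simple `>=` check.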
If a simple classification isn't enough, there are more complex schemes as well, such as the one detailed in FIPS PUB 199 from the U.S. National Institute of Standards and Technology (NIST), which interweaves data confidentiality, integrity, and availability.
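FIPS PUB 199 expresses a security category as a potential-impact level (low, moderate, or high) for each of the three security objectives: confidentiality, integrity, and availability. A common roll-up, the "high-water mark" used in the companion standard FIPS 200, takes the maximum of the three. A minimal sketch, with the example asset being a hypothetical one:

```python
from enum import IntEnum

class Impact(IntEnum):
    """FIPS 199 potential-impact levels."""
    LOW = 1
    MODERATE = 2
    HIGH = 3

def security_category(confidentiality: Impact, integrity: Impact,
                      availability: Impact) -> dict:
    """FIPS 199-style category: one impact level per security objective."""
    return {"confidentiality": confidentiality,
            "integrity": integrity,
            "availability": availability}

def high_water_mark(category: dict) -> Impact:
    """Overall impact as the maximum objective impact (FIPS 200 convention)."""
    return max(category.values())

# Hypothetical example: customer payment records might rate high for
# confidentiality and integrity but only moderate for availability.
records = security_category(Impact.HIGH, Impact.HIGH, Impact.MODERATE)
```

The point of the richer scheme is that two assets with the same confidentiality rating can still demand very different integrity or availability protections.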
The next thing you need to know is who owns the data, who else has access to it, and what each user can do with it.
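That ownership-and-access information is, in effect, an access-control list per data asset. The sketch below shows the shape of such a record; the field names, paths, and user names are illustrative assumptions, not any product's schema.

```python
# Illustrative record of ownership and per-user permissions for one asset.
asset = {
    "path": "/finance/q3-forecast.xlsx",
    "owner": "cfo",
    "acl": {
        "cfo":     {"read", "write", "share"},
        "analyst": {"read", "write"},
        "auditor": {"read"},
    },
}

def can(user: str, action: str, asset: dict) -> bool:
    """True if the user's ACL entry grants the requested action."""
    return action in asset["acl"].get(user, set())
```

Knowing not just who owns the data but exactly what each user can do with it is what makes it possible to spot over-broad access before a disaster exposes it.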
Manual data classification would be time-consuming to the point of impossibility for many organizations, so tools have been developed that handle the bulk of the task. For example, Microsoft's Data Classification Toolkit is a free download designed to help an organization identify, classify, and protect data on its file servers. HP offers the Atalla Information Protection and Control Suite, which, once set up, classifies data automatically at the point of creation and ensures the classification stays attached to the data as it moves.
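At their core, such tools scan content against rules and attach a label. The toy scanner below gives the flavor of that rule-based approach; the patterns and labels are illustrative only and are not how any particular vendor product works.

```python
import re

# Ordered rules: first match wins. Patterns and labels are illustrative.
RULES = [
    # Card-like 16-digit numbers, optionally separated by spaces or dashes.
    ("Confidential", re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")),
    # Apparent credentials.
    ("Confidential", re.compile(r"password\s*[:=]", re.IGNORECASE)),
    # Documents explicitly marked for internal circulation.
    ("Internal",     re.compile(r"\binternal use only\b", re.IGNORECASE)),
]

def classify(text: str) -> str:
    """Return the label of the first matching rule, defaulting to Public."""
    for label, pattern in RULES:
        if pattern.search(text):
            return label
    return "Public"
```

Real products go well beyond regexes (metadata, document fingerprinting, machine learning), but the principle of classifying at the point of creation is the same.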
Data classification is a lot of work, often more than internal staff can cope with. But it's too important to neglect: if it's beyond internal personnel's capabilities, it's well worth engaging experts such as the Rogers business continuity specialists to make sure the right data is protected in the right way, keeping you in business should disaster strike.