When ‘IT’ stands for Incredible Terror – 20 mistakes you need to avoid

Back in 2004, InfoWorld’s then-CTO Chad Dickerson polled the best and brightest to reveal 20 IT mistakes that were surefire recipes for cost overruns, missed deadlines, and in some cases, lost jobs.
A lot has changed in the past four years, but one thing hasn’t: IT’s capacity to fall prey to misguided practices, given the complexity of the responsibilities involved. So in the spirit of “forewarned is forearmed,” we bring you 20 brand-new mistakes that today’s IT managers would do well to avoid. As before, the names have been changed to protect the guilty, but the lessons learned are plain to see.

  1. Overzealous password policies

A clear and consistently enforced password policy is essential for any network. What good is a firewall when an attacker only needs to type “password” to get in?
But strict password security cuts both ways. If your password requirements are too complex and draconian, or if users are forced to change their passwords too often, your policy can have the opposite of its intended effect. Users pushed to the limit of remembering passwords end up writing them down: in a drawer, on a Post-It, or on a piece of tape stuck to their laptop’s keyboard. Don’t undermine the ultimate aim of your password policy by insisting on unrealistic requirements.

Besides, passwords are so 2004. If you want strict access control today, think multifactor authentication.
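For illustration, here’s a minimal Python sketch of a saner check: it rewards long passphrases and screens out known-bad strings instead of piling on complexity rules and forced rotation. The length minimum and the blocklist are hypothetical, not a recommendation for any particular environment.

```python
# Illustrative only: favour length and a blocklist over byzantine
# complexity rules. The minimum length and blocklist are hypothetical.
COMMON_PASSWORDS = {"password", "123456", "letmein", "qwerty"}

def is_acceptable(password, min_length=12):
    """Return (ok, reason): long passphrases pass, known-bad strings fail."""
    if len(password) < min_length:
        return False, f"must be at least {min_length} characters"
    if password.lower() in COMMON_PASSWORDS:
        return False, "appears on the common-password blocklist"
    return True, "ok"

if __name__ == "__main__":
    for candidate in ("password", "P@ss1!", "correct horse battery staple"):
        print(candidate, "->", is_acceptable(candidate))
```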

  2. Mismanaging the datacenter

Sys admins aren’t exactly known for their neatness, but in the datacenter, order is essential. Spaghetti cabling, mislabeled racks, and orphaned equipment can all cause big problems. Careless provisioning can easily lead an admin to reconfigure the wrong server or reformat the wrong volume, so keep things tidy (and always double-check your log-ins).

Good systems housekeeping also means getting production servers off engineers’ desks and out of their hiding places in the basement. Managing those assets is IT’s job, and it should shoulder the burden with diligence and gusto. Make sure your CFO understands the importance of maintaining a datacenter that’s large and well-equipped enough to grow with the business without turning into a jungle.

  3. Losing control over critical IT assets

Senior management has a request: “The marketing team needs to run ad-hoc SQL queries against the production database.” It’s simple enough to implement, so you grudgingly make it happen and move on. Next thing you know, poorly formed queries are bringing the server to its knees before every Thursday marketing meeting. Your next assignment? “Fix the performance issue.”

Backseat drivers are a hazard; handing over the keys to someone who can’t drive can be fatal. The experience and judgment of IT management plays a crucial role in all decisions related to IT assets. Don’t abdicate that responsibility out of a desire to avoid confrontation. A bad idea is a bad idea, even if business managers don’t realize it.
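If the queries really must happen, there’s a middle ground between saying no and handing over the keys: give the marketing team a read-only connection, ideally against a reporting copy rather than the production database. Here’s a minimal sketch using Python’s built-in sqlite3 as a stand-in for whatever database you actually run; the file name, table, and query are hypothetical.

```python
# Sketch: hand out read-only access to a reporting copy, not production.
# sqlite3 stands in for the real database; the names below are hypothetical.
import sqlite3

# mode=ro opens the file read-only, so a stray UPDATE or DELETE simply fails.
conn = sqlite3.connect("file:reporting_copy.db?mode=ro", uri=True)
try:
    for row in conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region"):
        print(row)
finally:
    conn.close()
```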

  4. Treating “legacy” as a dirty word

Eager young techies may hate the idea that mission-critical processes are still running on systems their grandparents’ age, but there’s often good reason for IT to value age over beauty. Screen-scraping isn’t as sexy as SOA, but an older system that runs reliably is less risky than a brand-new unknown.

Modernizing legacy systems can be expensive, too. For example, the State of California expects to spend US$177 million ($204 million CAD) on a revamped payroll system. And according to one IDC study, annual maintenance costs for new software projects typically run into the millions. In these days of tightened IT budgets, don’t be in too much of a hurry to make your “dinosaurs” extinct before their time.

  5. Ignoring the human element of security

Today’s network admins have access to a dizzying array of security tools. But as hacker Kevin Mitnick is fond of saying, the weakest link in any network is its people. The most fortified network is still vulnerable if users can be tricked into undermining its security, for example by giving away passwords or other confidential data over the phone.

For this reason, user education should be the cornerstone of your site security policy. Make users aware of potential social engineering attacks, the risks involved, and how to respond. Furthermore, encourage them to report suspected violations immediately. In this era of phishing and identity theft, security is a responsibility that every employee must share.

  6. Creating indispensable employees

As comforting as it may be to know that a single employee understands your systems inside and out, it’s never in a company’s best interests to let IT workers become truly indispensable. Take, for example, former City of San Francisco employee Terry Childs, who was eventually jailed for refusing to reveal key network passwords that only he knew.

Employees who are too valuable in specific roles can also be passed over for career advancement and miss out on fresh opportunities. Rather than building specialized superstars, encourage collaboration and train your staff to work across a variety of teams and projects. A multitalented, diverse IT workforce will not only be happier, it will be better for business, too.

  7. Raising issues instead of offering solutions

Are your warnings of critical vulnerabilities falling on deaf ears? Identifying security risks and potential points of failure is an important part of IT management, but the job doesn’t end there. Problems with no apparent solutions will only make senior management defensive and dismissive. Before reporting an issue, formulate a concrete plan of action to address it, then present both at the same time.

To win support for your plan, always explain your concerns in terms of business risk, and have figures available to support your case. You should be able to say not just what it will cost to fix the problem, but also what it could cost if it doesn’t get fixed.
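One simple way to produce those figures is a back-of-envelope annualized-loss calculation. The sketch below uses invented numbers purely to show the arithmetic; substitute your own estimates.

```python
# Back-of-envelope risk figures; every number here is invented.
single_loss_expectancy = 250_000   # estimated cost of one incident, in dollars
incidents_per_year = 0.3           # how often you expect it to happen
annualized_loss = single_loss_expectancy * incidents_per_year
cost_of_fix = 40_000               # one-time cost of the proposed remediation

print(f"Expected annual loss if unfixed: ${annualized_loss:,.0f}")
print(f"One-time cost of the fix:        ${cost_of_fix:,.0f}")
```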

  8. Logging in as root

One of the oldest rookie mistakes is still alive and well in 2008. Techs who habitually log in to the administrator or “root” account for minor tasks risk wiping out valuable data or even entire systems by accident, and yet the habit persists.

Fortunately, modern operating systems, including Mac OS X, Ubuntu, and Windows Vista, have taken steps to curb this practice by shipping with the highest-level privileges disabled by default. Instead of running as root all the time, techs must enter an administrative password each time they need to perform a major systems maintenance task. It may be a hassle, but it’s just good practice. It’s high time that every IT worker took the hint.
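For home-grown admin scripts on Unix-like systems, the same habit is easy to enforce in code. Here’s a hypothetical sketch of a maintenance script that declines elevated rights it doesn’t need:

```python
# Hypothetical maintenance script that refuses unnecessary root privileges.
import os
import sys

REQUIRES_ROOT = False  # set True only for tasks that genuinely need it

def main():
    running_as_root = (os.geteuid() == 0)
    if running_as_root and not REQUIRES_ROOT:
        sys.exit("Refusing to run as root: this task does not need elevated rights.")
    if REQUIRES_ROOT and not running_as_root:
        sys.exit("This task needs elevated rights: re-run it with sudo.")
    print("Running routine maintenance with least privilege...")

if __name__ == "__main__":
    main()
```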

  9. Teetering on the bleeding edge

With public beta programs now commonplace, the temptation to rely on cutting-edge tools in production systems can be huge. Resist it. Enterprise IT should be about finding solutions, not keeping up with the Joneses. It’s OK to be an early adopter on your desktop, but the datacenter is no place to gamble.

Instead, take a measured approach. Keep abreast of the latest developments, but don’t deploy new tools for production use until you’ve given them a thorough road test. Experiment with pilot projects at the departmental level. Also, make sure outside support is available. You don’t want to be left on your own when the latest and greatest turns out to be not ready for prime time.

  10. Reinventing the wheel

There’s no better way to ensure IT agility than to take charge of your own software needs. But too often, companies employ software developers only to squander their talents on the wrong projects.
You wouldn’t write your own Web browser or relational database. Why, then, do so many companies waste energy building custom CRM apps or content management systems, when countless high-quality products already exist to fill those needs?

In-house software development should be limited to projects that confer competitive advantage. Functions that aren’t unique to your business are best handled with off-the-shelf software. Failing that, start with an open source project and tweak it to meet your requirements. Redundant development projects only distract from genuine business objectives.

  11. Losing track of mobile users

Networked tools make it easy to push security updates, run nightly backups, and even manage software installation for users across an entire organization, provided, of course, that their PCs are connected to the corporate LAN. But what about users who spend most of their time off-site?

Mobility and telecommuting have changed the game for systems management, network security, and business continuity. Laptops that lack current security patches are a prime vector for malware. Files that are never backed up can mean countless hours of lost productivity. And what will happen to your sensitive data in the event of theft? Automated IT policies offer no reassurance if road warriors can slip through the cracks.

  12. Falling into the compliance money-pit

When it comes to complying with Sarbanes-Oxley, HIPAA, and other regulations, too many companies fall back on the Band-Aid method. But throwing money at nebulous compliance objectives only drains funds that might otherwise be used for more tangible projects. While a critical regulatory deadline may necessitate a quick compliance fix in some cases, overall it’s best to take a holistic approach.

When planning your compliance strategy, think in terms of global policies and procedures, rather than point solutions targeted at specific audits. Aim to eliminate redundant procedures and manual record-keeping, and focus on ways to automate the compliance process on an ongoing basis. To do otherwise is just throwing good money after bad.

  13. Underestimating the importance of scale

You may think you’ve planned for scalability, but chances are, your systems are rife with hidden trouble areas that will haunt you as your business grows. First and foremost, be mindful of process interdependencies. A system is only as robust as its least reliable component. In particular, any process that requires human intervention will be a bottleneck for every automated process that depends on it, no matter how much hardware you throw at the task.
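To see why, run the numbers: if one manual approval step can clear only a dozen items an hour, that’s the ceiling for the whole pipeline, no matter how fast the automated stages are. A quick sketch with made-up figures:

```python
# Why the manual step caps throughput; all figures are hypothetical.
automated_capacity_per_hour = 10_000  # what the servers could process
manual_approvals_per_hour = 12        # what one human reviewer can clear

effective_throughput = min(automated_capacity_per_hour, manual_approvals_per_hour)
print(f"End-to-end throughput: {effective_throughput} items/hour")
# Doubling the hardware doubles automated_capacity_per_hour,
# but effective_throughput stays pinned at the manual step.
```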

Also, cutting corners today is a sure recipe for headaches tomorrow. As tempting as it may be to piggyback a departmental database onto an underutilized Web server or let an open workstation double as networked storage, resist. Today’s minor project could easily become tomorrow’s mission-critical resource, leaving you with the unenviable task of separating the conjoined twins.

  14. Mismanaging your SaaS strategy

Salesforce.com proved that SaaS (software as a service) has real legs in enterprise computing. When compared to traditional desktop software, the on-demand model offers customers a low barrier to entry and virtually no maintenance costs. Little wonder, then, that a growing number of software vendors have begun offering hosted products in numerous software categories. If you haven’t at least considered SaaS options, you’re doing your business a disservice.

Too much SaaS, on the other hand, can become problematic. Hosted services don’t interoperate as well as desktop software, and the level of customization offered by SaaS vendors varies. Remember, SaaS is just a business model; it isn’t really a bargain if the software itself is immature.

  15. Not profiling your code

Relative performance is a perennial debate among programmers. Does code written for one language or platform run as well as equivalent code written for another?
Here, software development dovetails with carpentry, as it’s often the poor craftsman who blames his tools. For every application that suffers due to an underlying flaw in the language, countless others are rife with poorly designed algorithms, inefficient storage calls, and other programmer-created speed bumps.

Locating these trouble spots is the goal of code profiling, and that’s what makes it so essential. Until you’ve identified the slowest portions of your code, any attempt to optimize it is just guesswork. And who knows? Maybe the problem isn’t your fault after all.
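Fortunately, profiling is cheap to try on most platforms. Here’s a minimal sketch using Python’s built-in cProfile; the function being measured is a toy stand-in for your own hot path:

```python
# Minimal profiling sketch with Python's built-in cProfile and pstats.
import cProfile
import pstats

def slow_lookup(items, targets):
    # O(n) scan per lookup: a typical programmer-created speed bump
    return [items.index(t) for t in targets]

def run():
    items = list(range(50_000))
    targets = list(range(0, 50_000, 7))
    slow_lookup(items, targets)

profiler = cProfile.Profile()
profiler.runcall(run)
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```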

  16. Failing to virtualize

If you aren’t taking advantage of virtualization, you’re only making things harder on yourself. Virtual machines were a key selling point of early mainframe computers, but today similar capabilities are available on industry-standard hardware and operating systems, often at no additional cost.

Stacking multiple VMs onto a single physical machine drives up system utilization, giving you a greater return on your hardware investments. Virtualization also allows you to easily provision and de-provision new systems, and to create secure sandbox environments for testing new software and OS configurations.
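The utilization math behind that claim is easy to sketch; the figures below are illustrative assumptions, not benchmarks:

```python
# Back-of-envelope consolidation math with illustrative numbers.
import math

physical_servers = 40
average_utilization = 0.08   # many pre-virtualization servers idle near 8% busy
target_utilization = 0.60    # a conservative ceiling per virtualization host

total_load = physical_servers * average_utilization   # in "server-equivalents"
hosts_needed = math.ceil(total_load / target_utilization)

print(f"Roughly {hosts_needed} hosts could carry what {physical_servers} servers do today.")
```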

Some vendors may tell you that their products can’t be installed in a virtualized environment. If that’s the case, tell them bye-bye. This is one technology that’s too good to pass up.

  17. Putting too much faith in one vendor

It’s easy to see why some companies keep going back to the same vendor again and again to fulfill all manner of IT needs. Large IT vendors love to offer integrated solutions, and a support contract that promises “one throat to choke” will always be appealing to overworked admins. If that contract has you relying on immature products that are outside your vendor’s core expertise, however, you could be the one who ends up gasping for breath.

Rarely is every entry in an enterprise IT product line created equal, and getting roped into a subpar solution is a mistake that can have long-term repercussions. While giving preferential consideration to existing vendor partners makes good business sense, remember that there’s nothing wrong with politely declining when the best-of-breed solution lies elsewhere.

  18. Plowing ahead with plagued projects

Not every IT initiative will succeed. Learn to recognize signs of trouble and act decisively. A project can stumble for a thousand different reasons, but continuing to invest in a failed initiative will only compound your missteps.

For example, the Federal Bureau of Investigation wasted four years and over $100 million ($115 million CAD) on its Virtual Case File (VCF) electronic record-keeping system, despite repeated warnings from insiders that the project was dangerously off-track. When the FBI finally pulled the plug in 2005, VCF was still nowhere close to completion.

Don’t let this be you. Have an exit strategy ready for each project, and make sure you can put it in motion before a false start turns into a genuine IT disaster.

  19. Not planning for peak power

Sustainable IT isn’t just about saving the planet. It’s also good resource planning. When energy costs spiral out of control, they threaten business agility and limit growth. Don’t wait until your datacenter reaches capacity to start looking for ways to reduce your overall power consumption.

From CPUs to storage devices, memory to monitors, energy efficiency should be a key consideration for all new hardware purchases. And don’t limit your search to hardware alone; software solutions such as virtualization and SaaS can help consolidate servers and shrink your energy footprint even further. The result will be not just a more sustainable planet, but a more sustainable enterprise.

  20. Setting unrealistic project timetables

When planning IT projects, sometimes your own confidence and enthusiasm can be your undoing. An early, optimistic time estimate can easily morph into a hard deliverable while your back is turned. For that reason, always leave ample time to complete project goals, even if they seem simple from the outset. It’s always better to overdeliver than to overcommit.

Flexibility will often be the key to project success. Make sure to identify potential risk areas long before the deadlines are set in stone, particularly if you’re working with outside vendors. By setting expectations at a realistic level throughout the project lifecycle, you can avoid the trap of being forced to ship buggy or incomplete features as deadlines loom.
