Could you shoulder the weight of Titan?

VeriSign Inc., the company responsible for managing the .com and .net domains on the Internet, has announced a three-year project to increase the capacity of its worldwide infrastructure by an order of magnitude.
Aiming to boost both capacity and security, VeriSign’s $100-million-plus Project Titan will increase bandwidth to handle growing demand, further distribute the infrastructure to make it less vulnerable to attack, and improve monitoring. VeriSign will also implement new security protocols to better protect Internet traffic.

The project comes shortly after hackers attacked some of the main servers that form the backbone of the Internet, including those operated by VeriSign.

A global infrastructure with bandwidth of 20 gigabits per second, able to handle 400 billion Domain Name System (DNS) queries every day, dwarfs most businesses’ IT infrastructures, and VeriSign’s plan to expand those figures tenfold – to more than 200 Gbps and four trillion DNS queries daily – by 2010 borders on mind-boggling.

Many details of Project Titan are specific to the DNS infrastructure VeriSign operates, and other businesses would not have the same needs, said Tom Slodichak, chief security officer at WhiteHat Inc., a Burlington, Ont., security firm. But the general principles apply to any organization.

The DNS servers that VeriSign and others operate translate the Web addresses typed into browsers into the Internet Protocol (IP) addresses needed to connect those browsers to Web sites. VeriSign has more than 20 such servers scattered around the world today to distribute the workload and ensure that no request must travel too far. Within three years, VeriSign plans to add about 70 more in other locations.
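For readers who want to see that translation step in action, here is a minimal sketch in Python that relies only on the standard library. It simply asks the operating system’s resolver, which in turn depends on the DNS hierarchy described above; the domain name is only a placeholder, not anything tied to VeriSign’s systems.

```python
import socket

def resolve(hostname):
    """Return the IPv4 addresses the system resolver finds for a hostname."""
    # getaddrinfo follows the chain the article describes: the name typed into a
    # browser is handed to a resolver, which queries DNS servers for IP addresses.
    results = socket.getaddrinfo(hostname, 80, family=socket.AF_INET, type=socket.SOCK_STREAM)
    return sorted({entry[4][0] for entry in results})

if __name__ == "__main__":
    print(resolve("example.com"))  # placeholder domain; prints the address(es) it maps to
```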

Besides increasing the number of queries the system can handle, this will lead to better traffic management, reduce latency and make VeriSign’s infrastructure less vulnerable to attacks in specific regions. “I would say it’s a significant improvement, even though what we have today is pretty impressive,” said Ken Silva, VeriSign’s chief security officer.

Today, Silva said, VeriSign has five data centres – which deal with work such as domain name registration and disaster recovery – and seven network operations centres for monitoring its worldwide network. The company will add a new data and network operations centre in Delaware and another somewhere in Europe, Silva said.

“That’s a question of redundancy, and redundancy is always nice,” Slodichak said.

London, Ont.-based Info-Tech Research Group recommends that every enterprise have at least one DNS server offsite, possibly with a hosted DNS provider, said Jayanth Angl, research analyst at Info-Tech. “In the event of a failure at the main site, the online presence of the enterprise can still be maintained.”
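As a rough, hypothetical illustration of that advice, the sketch below lists a domain’s nameservers and the /24 networks their addresses fall in, a crude way to spot whether every nameserver sits in the same place. It assumes the third-party dnspython package is installed, and the domain is a placeholder.

```python
import dns.resolver  # third-party dnspython package; assumed to be installed

def nameserver_networks(domain):
    """Map each NS record of a domain to the /24 network its A record falls in."""
    networks = {}
    for record in dns.resolver.resolve(domain, "NS"):
        host = str(record.target).rstrip(".")
        address = list(dns.resolver.resolve(host, "A"))[0].to_text()
        networks[host] = ".".join(address.split(".")[:3]) + ".0/24"  # crude grouping by /24
    return networks

if __name__ == "__main__":
    nets = nameserver_networks("example.com")  # placeholder domain
    for host, net in sorted(nets.items()):
        print(f"{host:35} {net}")
    if len(set(nets.values())) < 2:
        print("All nameservers appear to sit in one network; consider an offsite or hosted secondary.")
```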

One data centre is not enough
And secondary data centres are growing more common. “Most Canadian enterprises should be at least thinking of having the two sites these days,” said Peter Cresswell, national practice manager at Bell ITC Solutions – “which many of them do.”

VeriSign will also improve network monitoring with “a holistic monitoring system that not only monitors the health of the individual servers, it also uses those servers to monitor the rest of the network,” Silva said.
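VeriSign’s system is proprietary, so the following is only a minimal sketch of the general idea of servers watching one another: each node probes its peers’ DNS port over TCP and reports any it cannot reach. The peer hostnames are placeholders, not real servers.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Placeholder peer list; in practice each node would carry the addresses
# of the other nodes it is expected to watch.
PEERS = ["ns1.example.net", "ns2.example.net", "ns3.example.net"]

def probe(host, port=53, timeout=2.0):
    """Return (host, ok): ok is True if the peer accepts a TCP connection on its DNS port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return host, True
    except OSError:
        return host, False

def sweep(peers=PEERS):
    """Probe all peers in parallel and return the ones that look unreachable."""
    with ThreadPoolExecutor(max_workers=len(peers)) as pool:
        results = dict(pool.map(probe, peers))
    return [host for host, ok in sorted(results.items()) if not ok]

if __name__ == "__main__":
    down = sweep()
    print("unreachable peers:", ", ".join(down) if down else "none")
```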

The improved monitoring is meant to spot network problems early and address security threats, which are increasing all the time. Although the two Internet root servers VeriSign operates weren’t among those hit in an attack on root servers in early February, Silva said, VeriSign is already well prepared for an attack like that one. But the threat continues to grow, he said.

The monitoring system will be quite specialized because of VeriSign’s unique needs, Silva said. But, Cresswell added, many businesses aren’t paying enough attention to monitoring their networks for intrusions and other problems today. “The tools have been there for many years,” he said. “It’s time to use them.”

VeriSign says bandwidth demands on its infrastructure are growing so fast that by 2010 they will be 10,000 times what they were in 2000. To keep up with that growth, the company plans to multiply its capacity by 10 in the next three years, to more than 200 gigabits per second.
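A quick back-of-envelope calculation puts those figures in perspective; the short Python snippet below works out the year-over-year growth rates implied by the numbers quoted above.

```python
# Implied growth rates from the figures quoted in the article.
demand_growth_over_decade = 10_000        # demand in 2010 vs. 2000, per VeriSign
capacity_growth_over_three_years = 10     # planned capacity increase by 2010

annual_demand_growth = demand_growth_over_decade ** (1 / 10)
annual_capacity_growth = capacity_growth_over_three_years ** (1 / 3)

print(f"demand grows roughly {annual_demand_growth:.2f}x per year")          # about 2.51x
print(f"planned capacity grows roughly {annual_capacity_growth:.2f}x per year")  # about 2.15x
```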

Silva sums up what ordinary businesses can learn from Project Titan this way: “Don’t wait for something bad to happen to your infrastructure. Anticipate what might happen and prepare for it.”

Grant Buckler is a freelance journalist specializing in information technology, telecommunications, energy and clean tech.
