Researcher gets a chokehold on security

Most technology companies like to say they work on solutions. Matthew Williamson prefers to think of it as creating “positive dilemmas.”

That, according to the senior research scientist with HP Labs in Bristol, U.K., is how his CEO described his mandate, though few IT managers would see anything positive about his area of specialty. Williamson works on viruses, and has developed a concept called “virus throttling” which he said could ease the impact suffered by large enterprises.

Virus throttling is not yet a product, and as a member of the R&D side, Williamson himself said he is unaware of its future. “We generate intellectual property, but the decisions about where it goes and the company’s strategy are not really in our domain,” he said. “It gives you a lot of freedom . . . It’s very valuable having a research organization that’s not directly tied (to the product lines).”

Williamson took time out from a presentation at last week’s Virus Bulletin conference to discuss virus throttling with ITBusiness.ca.

ITBusiness.ca: What inspired this line of research?

Matthew Williamson: We started about two years ago, around the time of the Code Red and Nimda outbreaks, which infected a lot of machines and caused a lot of congestion in lots of organizations. These viruses were so much faster than anything we’d seen before, and so the researchers’ way of thinking about that was, “What can we do to slow them down?” From that has come this technology that we call virus throttling, which is just thinking about anti-virus software in a slightly different way.

Most anti-virus software is selfish — it tries to prevent the machine with the software on it from being infected by a virus. This approach is more altruistic. It says, if I get a virus, I’m not going to give it to anyone else. It turns out it’s a lot easier to restrict the propagation of a virus — to prevent an infected machine from infecting anyone else — than it is to spot an incoming virus.

ITB: How does it work exactly?

MW: Fundamentally, for a virus to spread it’s got to go to different places, to different machines. To spread quickly, it has to do that at a high rate. Viruses like Nimda make around 400 attempts per second to find other machines. With viruses like Slammer, the one that came in February, it was more like 800 to 1,000 times a second. There are exceptions for some protocols and applications, but it’s generally the case that the connections our machines make to new machines happen at a much lower rate. We tend to make connections to the same machine again — you always check your e-mail from the same machine, and so on. The rate of connections to different machines is more like one connection a second. So a virus throttle is a rate limit of one new connection a second that you impose on the machine.
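The mechanism Williamson describes — letting connections to familiar machines pass freely while limiting connections to new machines to about one per second — can be sketched roughly as follows. This is an illustrative toy in Python, not HP’s implementation; the working-set size, alert threshold, and all names are assumptions, and releasing queued connections over time is omitted for brevity.

```python
from collections import deque

class VirusThrottle:
    """Toy sketch of a virus throttle: connections to recently contacted
    hosts pass immediately; connections to new hosts are allowed at a
    fixed rate (one per second here), otherwise queued. A fast-growing
    queue is the signature of virus-like behaviour."""

    def __init__(self, rate_per_sec=1.0, working_set_size=5, alert_threshold=10):
        self.interval = 1.0 / rate_per_sec
        self.working_set = deque(maxlen=working_set_size)  # recently contacted hosts
        self.delay_queue = deque()   # pending connections to new hosts
        self.alert_threshold = alert_threshold
        self.next_release = 0.0      # logical time when the next new host may go out

    def request(self, host, now):
        """Classify a connection attempt at logical time `now`:
        returns 'pass', 'queued', or 'alert'."""
        if host in self.working_set:
            return "pass"            # repeat destination: never throttled
        if now >= self.next_release and not self.delay_queue:
            self.next_release = now + self.interval
            self.working_set.append(host)
            return "pass"            # new host, but within the allowed rate
        self.delay_queue.append(host)
        if len(self.delay_queue) >= self.alert_threshold:
            return "alert"           # queue growth suggests a spreading virus
        return "queued"
```

Normal traffic (mostly repeat hosts, with an occasional new one) passes through unimpeded, while a Nimda-style burst of hundreds of distinct destinations per second immediately backs up in the delay queue and raises an alert.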

ITB: This would be done at the individual user level or the IT manager level?

MW: It’s best done as close to the machine as possible. You can do it on the host machine or you could do it in the switching network.

ITB: And this is software you would install in order to monitor this and then give you an alert when the connection rate has reached a certain level?

MW: Yes, you’d get an alert saying, “Unknown virus trying to spread on protocol X from process Y,” and asking what you want to do.

ITB: How is that better than what traditional anti-virus vendors offer?

MW: As I understand it, they apply policies, but as far as I’m aware they don’t apply policies that rate-limit the overall behaviour of the machine. Often people in security think in very binary ways. This is sort of different: it allows through the right sort of traffic but impedes and stops the wrong sort of traffic.

ITB: How do you experiment with viruses in a lab-type situation?

MW: I’m a voracious data-hoarder. I’ll collect data on interesting artifacts or networks or issues with viruses from wherever I can get it. We get some of that information internally. For stuff we can’t create internally, we have a secure area, which is kind of a room with a bunch of machines, where we can run viruses with restricted access and no network connectivity. So we can experiment safely.

ITB: Since Code Red we’ve seen a lot more blended threats. How has that changed the approach taken in your research?

MW: We’ve done tests on most new viruses as they come up and we’ve looked at other protocols as well. I’m presenting a paper at a conference in December on a version for e-mail which prevents machines infected with e-mail viruses from spreading the viruses further. That would be implemented at the outgoing mail server for a particular client. And we’ve been exploring the philosophical basis behind this approach. It’s not binary. Instead of saying, “What can we do to stop a problem?” it’s saying, “What can we do to make a problem less bad, to mitigate it, to contain it?” We’ve got a lot of technologies that try to prevent the major problems and outages we get in our computers, and to make them more resilient by containing problems automatically and then leaving the cleanup and final decisions to a human.

The problem is a hard one; my background’s in artificial intelligence, and it’s very difficult to let computers make decisions about which things they should turn on or turn off. You don’t want some anti-virus system to turn off your payroll system.

ITB: What kind of artificial intelligence work did you do?

MW: I built a robot arm and got it to do various tasks with distributed control.

ITB: What is the most important thing you’ve learned about viruses this year?

MW: That some of them are incredibly ingenious — not to underestimate in any way the ingenuity of the virus writer.
