I can’t stop thinking about my experience last month when I had to reload Windows XP for a friend.
It makes me think we need to reconsider how we in the security world have failed the consumer. Should it really be necessary for a consumer to be a security expert to safely use a computer?
That seems to be our message. “You should have known not to click that link,” we say. “Why would you trust that that e-mail actually came from your mother?” We get disgusted that users keep falling for old tricks. But what are we doing to actually help these people?
We should start by better understanding the misconceptions about e-mail and Web site safety that pervade the user base. For example:
If an e-mail looks authentic, it is safe.
We seem to believe that every user should be as jaded as we are. After all, spam, phishing attacks and all the rest have been around for years, right? We techies aren’t surprised when an attack arrives looking like legitimate e-mail. Why do users not suspect everything the way we do? We expect deception in cyberspace to be as common as it is in nature.
But we shouldn’t forget that a suspicious nature, so beneficial in our profession, isn’t necessarily helpful to the work our users do. Instead, we are surprised — indeed, amused — when our well-intended users respond to an attack.
And yet, really, when one of my family members last week got caught up in the wave of fake Amazon cancellation notices, what did he do that was so wrong? He responded to a message that looked legit to him, especially since he had just placed an order with Amazon.
This e-mail came from someone I know, so I know it’s safe.
Again, as security pros, we are aghast that anyone could not be aware that spammers and other e-mail attackers have for years been sending out their attacks with forged “From:” addresses. We understand the mechanisms that allow the bad guys to pose as our friends. And even my friends and family members who are not very computer savvy realize that I would not contact them trying to sell black-market Viagra.
But sometimes the sender and the message converge, as they did for the family member who had just placed an order on Amazon, and so they respond. “How could that e-mail not be from Aunt Lucy?” they want to know. I always tell them that e-mail messages, like postal envelopes, can be trivially made to contain any “From:” address the sender chooses to use.
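That postal-envelope analogy is easy to demonstrate. Here is a minimal sketch in Python (the names and addresses are made up for illustration) showing that the “From:” line of an e-mail is just text the sender fills in; plain SMTP never verifies it.

```python
from email.message import EmailMessage

# Build a message claiming to be from "Aunt Lucy". The From: header is
# simply a string chosen by the sender -- nothing in basic SMTP checks
# that it matches the actual sending account. (Addresses are fictional.)
msg = EmailMessage()
msg["From"] = "Aunt Lucy <aunt.lucy@example.com>"
msg["To"] = "victim@example.com"
msg["Subject"] = "Your Amazon order"
msg.set_content("Please click here to confirm your cancellation...")

# The forged sender appears exactly as written.
print(msg["From"])
```

Handing this message to any mail relay would deliver it with “Aunt Lucy” in the From line, which is why the header alone proves nothing about who really sent it.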
If a friend on Facebook or Twitter posts a link, it’s safe.
As Facebook and Twitter have risen in popularity, a community of attackers has come along for the ride. These are the same people who have been sending spams and scams over the years, and they are now targeting our social networks. By using various Web application attacks like cross-site scripting (XSS), messages can be posted that look like they are from your friends. They are not legitimate, but they look like they are. (Are you seeing a trend here? Deception works.)
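The core of an XSS attack is a site reflecting user-supplied text into a page without neutralizing it. A toy sketch of that failure mode, with a hypothetical payload and a deliberately simplified “template,” looks like this:

```python
import html

def render_comment(comment: str, escape: bool) -> str:
    # A toy page template that reflects user input back into HTML.
    # Real sites are far more complex; this only shows the difference
    # escaping makes.
    body = html.escape(comment) if escape else comment
    return f"<p>Your friend says: {body}</p>"

# Hypothetical attack string posing as a friendly comment.
payload = '<script>stealCookies()</script>'

unsafe = render_comment(payload, escape=False)
safe = render_comment(payload, escape=True)

print("<script>" in unsafe)  # True  -- a browser would execute this
print("<script>" in safe)    # False -- rendered as harmless text
```

When the unescaped version is served, the victim’s browser runs the attacker’s script in the context of the trusted site, which is exactly how a message can appear to come from a friend.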
If I merely view a message, without clicking on any attachments or links, I’m safe.
Sometimes even naïve users know that a message doesn’t pass the “sniff” test, but they figure there’s no harm in viewing it just to be sure. Well, there are several ways an attacker can send e-mail attacks without requiring the victim to click on a link (including HTML IMG or IFRAME tags, combined with a reflected XSS on a vulnerable site). Many of these techniques can be just as dangerous as clicking on an executable file attachment. Again I have to ask: why do we expect users to know this?
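The IMG-tag case is worth making concrete: a mail client that renders HTML will fetch every embedded image URL the moment the message is displayed, with no click from the user. The sketch below (the URL is made up for illustration) just walks an HTML e-mail body and lists the URLs a rendering client would silently request.

```python
from html.parser import HTMLParser

# An HTML e-mail body containing a remote image. A client that renders
# HTML fetches the src URL as soon as the message is viewed -- no click
# required. (The attacker URL here is fictional.)
EMAIL_BODY = (
    '<html><body><p>Hi there!</p>'
    '<img src="http://attacker.example/track?id=42">'
    '</body></html>'
)

class ImgFinder(HTMLParser):
    """Collects the src attributes of all <img> tags."""
    def __init__(self):
        super().__init__()
        self.fetched = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.fetched.extend(v for k, v in attrs if k == "src")

finder = ImgFinder()
finder.feed(EMAIL_BODY)
print(finder.fetched)  # URLs requested just by displaying the message
```

That automatic request is enough to confirm to a spammer that the address is live, and if the URL carries an XSS payload aimed at a vulnerable site, merely viewing the message can set the attack in motion.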
If I go to the URL, but don’t do anything while I’m there, I’m OK.
Taken one step further, what’s the danger in following a URL if you don’t actually do anything on the offending site? The answer is plenty. But are users at fault for making the assumption?
If my browser displays the locked padlock, then the site is secure.
We’ve been telling people to use SSL for years, right? Are we now saying that an SSL-encrypted site isn’t necessarily safe or secure? Of course we are. Think about the message we send users when we tell them things like that.
None of these user actions is unreasonable, from their perspective. We techies need to recognize that even our most well-intended users will do these things from time to time without knowing any better. They’ll do them not because they are stupid, but because they just don’t share the security techie knowledge that we have. And they shouldn’t need to!
This is where the security industry has failed. For years, we have tried to ward off the types of attacks I’ve described simply by warning our users not to do “stupid” things like clicking on links. We smack our foreheads when they do that, and blame them for being so ignorant. We act shocked and amazed that they haven’t read the articles about phishing, malware and XSS.
OK, I know we’ve done more than that. We’ve also forced our users to install antivirus software, firewalls, malware/spyware detectors and so on. But let’s face it, those products — especially the ones that rely on signature detection — haven’t done much to help stop new waves of attacks over the years.
Don’t get me wrong — I’m not claiming there’s some simple solution to this mess. The problems are pervasive, and they’re not going to be easily solved. But the status quo is broken. Our systems — from their operating system cores and through the e-mail clients, Web browsers, etc. — need to help our users do things securely. They need to be resilient to users doing what users do, and not get sick and die every time a user “misbehaves.” By and large, these things have not been adequately anticipated in our mainstream systems. I’ll delve deeper into that here in the coming months.
With more than 20 years in the information security field, Kenneth van Wyk has worked at Carnegie Mellon University’s CERT/CC, the U.S. Department of Defense, Para-Protect and others. He has published two books on information security and is working on a third. He is the president and principal consultant at KRvW Associates LLC in Alexandria, Va.