From the day the world learned about the infamous OpenSSL crypto vulnerability, the Heartbleed fix has been available, and the OpenSSL project's terse recommendation has been to apply the patch or recompile the code without the heartbeat feature. Easy, right?
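For the record, the affected releases are OpenSSL 1.0.1 through 1.0.1f; 1.0.1g shipped the fix, and the older 1.0.0 and 0.9.8 branches never contained the heartbeat code. As a rough first pass, a version-banner check might look like the sketch below — with the caveat that distributions often backport the fix without bumping the version string, so a banner alone can flag patched systems as vulnerable:

```python
import re

# OpenSSL 1.0.1 through 1.0.1f are affected by CVE-2014-0160;
# 1.0.1g is fixed, and the 1.0.0 / 0.9.8 branches never had the bug.
# NOTE: distro packages often backport the fix without changing the
# version string, so treat a match as "needs a closer look", not proof.
VULNERABLE_BANNER = re.compile(r"OpenSSL 1\.0\.1(?:[a-f])?(?:\s|$)")

def looks_vulnerable(banner: str) -> bool:
    """Return True if the version banner falls in the affected range."""
    return bool(VULNERABLE_BANNER.search(banner))
```

Running this against the output of `openssl version` (or a server banner) is only a triage step; actually probing the heartbeat behavior is the definitive test.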

Banks, major sites, associations and other organizations have issued comforting statements indicating that they were unaffected or that, if they were, they have already fixed the problem.

So is that all there is to it? In temporarily shutting down, did the Canada Revenue Agency overestimate the complexity of the job? Did Yahoo exercise an overabundance of caution by taking most of last week to remedy what turned out to be a quick fix for so many companies?

Could there be a risk that is more difficult to mitigate than by the simple application of a patch?

If the vulnerability lets an attacker inspect a server's memory contents, what else can possibly exist in that memory other than the keys to encrypted communications and the access credentials of site users?

Wait, credentials as in usernames and passwords? Is it possible that when you dump the contents of a server’s memory to retrieve literally anything and everything, you might gain access to administrative credentials and remote access methods that would give you a back-door into the server? Because, hey, those things are in memory too!
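The mechanics are simple enough to sketch. The heartbeat handler echoed back as many bytes as the request claimed to contain, without checking the claim against the actual payload, so a short request with an inflated length field was answered with adjacent memory. A toy model of the flaw — not the real C code, just an illustration over a plain byte buffer with made-up secrets:

```python
# Toy model of the Heartbleed over-read. The "process memory" here is a
# plain byte string; in the real bug it was the server's heap, which is
# where session keys and user credentials happen to live.
MEMORY = b"HEARTBEAT:hi|secret_admin_password|PRIVATE_KEY_BYTES|..."

def heartbeat_reply(payload: bytes, claimed_len: int) -> bytes:
    """Echo `claimed_len` bytes starting at the payload, trusting the
    caller's length field -- exactly the mistake behind CVE-2014-0160."""
    start = MEMORY.index(payload)
    return MEMORY[start:start + claimed_len]  # no bounds check on claimed_len

honest = heartbeat_reply(b"hi", 2)   # echoes just the payload
greedy = heartbeat_reply(b"hi", 40)  # echoes the payload plus what follows it
```

An honest request gets its two bytes back; the greedy one gets those two bytes plus whatever happened to sit after them in memory.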

Just thinking out loud here, depending on network segmentation, you may be able to see and control a lot more than just the compromised server. And naturally, the common practice is to go in and cover your tracks by applying the OpenSSL patch yourself. It’s not only the polite thing to do, but it saves the IT department precious time! Thanks to you, overworked techies don’t even have to worry about the server because you went through the trouble of updating the software yourself.

So instead of rushing to check off every server as immune to CVE-2014-0160 and tweeting dismissive updates about how the bug failed to affect them, companies may opt to instead install the vulnerable OpenSSL release and examine it in a controlled environment, if only to get a better grasp of the disaster they have just averted.

This is pure speculation of course, but given all the trouble companies had replicating early claims before confirming that the worst-case scenario is actually possible, there is no substitute for doing the testing yourself.

It’s unlikely that most companies would be outright vulnerable to a server hijacking because it would be fairly easy to prevent and catch, but in this case more than in any other, it’s important to be thorough. And to maximize productivity, IT pros should be looking for three things in particular:

1. Sites that reported the vulnerable version of OpenSSL but didn’t exhibit the flaw. There have been many of these and in most cases, they have been attributed to the strength of layered controls and “defense-in-depth.”

2. Servers that have been patched, though no one quite remembers who did it or when. In many cases, missing documentation is just poor project management or a hasty response to the problem. But could the server have been patched without authorization?

3. Audit logs that show unreconciled use of administrative accounts. Check logs on all servers and interfaces that could accept those credentials, and be sure to investigate the possibility that their access privileges were used to create new accounts, or to commandeer existing ones to install tools or alter configurations.
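For the third check, even a crude pass over the logs beats nothing. A minimal sketch follows; the log format, the field layout, and the list of privileged accounts are all assumptions you would replace with your own:

```python
# Flag administrative-account activity that no change record accounts for.
# Hypothetical log format: "timestamp user action..." per line; adjust the
# parsing and the account list to match your environment.
ADMIN_ACCOUNTS = {"root", "admin", "backup_svc"}  # assumed names

def unreconciled_admin_use(log_lines, approved_events):
    """Return log lines where an admin account acted without a matching
    (timestamp, user) entry in the change-management record."""
    suspicious = []
    for line in log_lines:
        timestamp, user, action = line.split(maxsplit=2)
        if user in ADMIN_ACCOUNTS and (timestamp, user) not in approved_events:
            suspicious.append(line)
    return suspicious
```

Pay particular attention to account-creation and package-installation actions around the disclosure date: a patch nobody scheduled is exactly the tell described in point 2.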

Again, as of right now this is purely theoretical and there is no evidence of the possibility of hijacking servers in this way, but it’s worth exploring.

As always, I recommend getting proper advice and using independent audits and assessments. When a vulnerability is lurking within your systems, there is no better alternative than discovering it yourself… and fast. It demonstrates competence, accountability and respect.
