It’s been described as the most severe threat to online security since the start of widescale internet usage. It’s so serious it has its own website, and so effective that the NSA reportedly used it to gather critical intelligence. Present in roughly two-thirds of all websites, it allows attackers to eavesdrop on communications, steal data and impersonate services and users.
This isn’t a Hollywood script. This is CVE-2014-0160, better known as Heartbleed, and it’s still a very real threat to server security.
Heartbleed is a vulnerability in OpenSSL that allows the theft of information normally protected by SSL/TLS encryption. The vulnerability is the result of improper input validation in the implementation of the TLS heartbeat extension, adopted into widespread use with the release of OpenSSL version 1.0.1 on 14th March 2012.
The theft of Canadian Social Insurance Numbers and the compromise of Mumsnet accounts are just two of the high-profile incidents attributed to CVE-2014-0160.
The bug was patched on the day of disclosure (7th April 2014), with major online players such as Facebook, Google and Yahoo quick to reassure users of the fix. Google is said to have started work on a patch in late March, suggesting it knew about the issue before the disclosure.
If it ain’t (heart) broke…
Whether you’re Google or a small website, an insecure system is a scary proposition for any system administrator. Data, user accounts and private communications are all at risk, but implementing a quick fix isn’t always advisable, as former Opera software developer Yngve Pettersen discovered.
His research, highlighted on The Register, revealed that at least 2,500 website administrators had made their previously secure sites vulnerable to Heartbleed by upgrading an unaffected server to a newer, but not yet officially patched, version of OpenSSL. He dubbed these servers ‘Heartbroken’.
For an administrator the pressure to do something is strong, but consider the potentially severe financial and security costs of rushing a fix. Pettersen believes the total cost of fixing servers that have been upgraded to insecure OpenSSL versions could exceed £7m.
A more considered approach
If you’re taking on a bug like CVE-2014-0160 you’ll need more than a fly swatter – you’ll need a plan. Patching a large environment or reacting quickly to a zero-day exploit can be very difficult, with requirements and conditions changing quickly. In our experience there are a number of things that can be done to help get the job done quickly and accurately.
The 5-step process
1. Get a complete and thorough understanding of the issue
This understanding needs to be thorough enough to identify all of the conditions that cause a particular server to be impacted by the issue in question.
2. Create a complete and up-to-date list of supported servers
The more automated and regular the data collections are, the better your list will be. This list MUST contain all of the relevant information required to decide whether or not a server is impacted by the issue.
If we take Heartbleed as an example, the description of the vulnerability was contained within CVE-2014-0160, which basically said “all versions of OpenSSL between and including 1.0.1 and 1.0.1f were vulnerable to a remote crafted attack which could access sensitive information on the server”. This means, for this case, our list of supported servers needs to include information about whether OpenSSL is installed and, if so, which version.
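That version test is easy to get wrong by hand, so it’s worth encoding it once. Here’s a minimal sketch (illustrative only, not any particular tool’s code) of a check for the vulnerable range 1.0.1 to 1.0.1f:

```python
def is_heartbleed_vulnerable(version: str) -> bool:
    """Return True for OpenSSL versions 1.0.1 through 1.0.1f inclusive."""
    if not version.startswith("1.0.1"):
        return False
    suffix = version[len("1.0.1"):]
    # 1.0.1 itself and the letter releases a-f are vulnerable;
    # 1.0.1g (the fixed release) and later are not.
    return suffix == "" or (len(suffix) == 1 and "a" <= suffix <= "f")

print(is_heartbleed_vulnerable("1.0.1f"))  # True
print(is_heartbleed_vulnerable("1.0.1g"))  # False
print(is_heartbleed_vulnerable("0.9.8"))   # False
```

A check like this can then be run against the version field in your collected server data.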
3. Compile an up-to-date list of all of the impacted servers
The complete list of supported servers that you’ve just created now needs to be reduced to include ONLY the impacted hosts – this is the list of servers that will be resolved by the SAs. If we again take Heartbleed as an example, any server running an OpenSSL version outside the range 1.0.1 to 1.0.1f was not impacted and did not need to appear on the patching list.
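Reducing the full inventory to the impacted hosts is a simple filter over the collected data. A hedged sketch, with hypothetical host names and field names:

```python
# Hypothetical inventory rows as collected in step 2; the field
# names ("host", "openssl") are illustrative, not from any tool.
inventory = [
    {"host": "web01", "openssl": "1.0.1e"},
    {"host": "web02", "openssl": "1.0.1g"},
    {"host": "db01",  "openssl": None},      # OpenSSL not installed
    {"host": "app01", "openssl": "0.9.8y"},
]

# The vulnerable versions per CVE-2014-0160: 1.0.1 through 1.0.1f.
VULNERABLE = {"1.0.1"} | {"1.0.1" + c for c in "abcdef"}

impacted = [row["host"] for row in inventory
            if row["openssl"] in VULNERABLE]
print(impacted)  # ['web01']
```

Only the hosts in `impacted` go onto the patching list; everything else can be left alone (avoiding the ‘Heartbroken’ mistake above).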
If you’re using our product I-insight, the Issue Tracker application module will allow you to define the issue-causing criteria (in the form of a script) and then scan all of the collected machine data to identify servers where the criteria are met. Issue Tracker will also track remediation progress.
4. Have a detailed execution plan
The execution plan needs to detail exactly what is required to solve the issue. If needed, it should be OS-specific and contain information about where the fix is kept and how it should be applied to each server.
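One way to keep such a plan unambiguous is to capture it as data rather than prose. The sketch below is an assumption about how this might look; the package-manager commands shown are typical for Heartbleed-era Debian/Ubuntu and RHEL/CentOS systems, but should be verified against your specific releases:

```python
# OS-specific execution plan as data: each entry lists the ordered
# steps an administrator (or automation) should run on that platform.
EXECUTION_PLAN = {
    "debian/ubuntu": [
        "apt-get update",
        "apt-get install --only-upgrade openssl libssl1.0.0",
        "service apache2 restart",   # restart anything linked to libssl
    ],
    "rhel/centos": [
        "yum update openssl",
        "systemctl restart httpd",
    ],
}

for os_name, steps in EXECUTION_PLAN.items():
    print(os_name, "->", "; ".join(steps))
```

A plan in this form can be reviewed, version-controlled and fed straight into whatever tooling executes it.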
5. Understand how to test for the correct resolution of the issue
This bit is key: you must fully understand how to check that the issue has been correctly and fully resolved. This test can be done manually or, preferably, in an automated way. It is important to make sure that the issue stays fully resolved on all servers, so ideally you should re-run the tests regularly to confirm compliance.
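The recurring compliance check can be sketched as re-applying the issue-causing criteria to each fresh data collection. Again, the data shapes and host names below are illustrative assumptions:

```python
# Re-apply the issue criteria after each data collection and flag
# any host that has regressed (e.g. via a file restore, a bare-metal
# restore, or a newly built server).
def find_regressions(latest_collection, criteria):
    """Return sorted hosts where the issue-causing criteria hold."""
    return sorted(host for host, facts in latest_collection.items()
                  if criteria(facts))

# Illustrative collection data: host -> collected facts.
collection = {
    "web01": {"openssl": "1.0.1g"},   # patched
    "web03": {"openssl": "1.0.1c"},   # restored from an old image
}

# The Heartbleed criteria: OpenSSL 1.0.1 through 1.0.1f installed.
heartbleed = lambda facts: facts.get("openssl", "") in (
    ["1.0.1"] + ["1.0.1" + c for c in "abcdef"])

print(find_regressions(collection, heartbleed))  # ['web03']
```

Running this after every collection turns the one-off fix into an ongoing compliance check.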
It is often time-consuming and resource-intensive to complete this process fully, but ultimately resolving the issue or issues in one pass will save time in the long run. In our experience it is also very important to make sure that all servers are checked for all issue-causing conditions on a regular basis. This will make sure that the environment hasn’t regressed due to file restores, bare-metal restores or even new servers being introduced.
If you’re using I-insight, the Issue Tracker module will make sure that the issue is not reintroduced into the environment by comparing the issue-causing criteria against all collected machine data every time a collection is run.
Good, now repeat 300,000 times
Not you personally, but all of the administrators for Heartbleed-affected servers – according to research by Errata Security, over 300,000 servers are still believed to be vulnerable to the bug. With that in mind, you might want to check if your favourite websites have been affected - Mashable has a list.
Please leave your comments below; it would be interesting to see your feedback.