LXer Feature: 28-Aug-2014
When the Heartbleed bug made it out into the world a few months ago, I had a chance to talk to Richard Kenner of AdaCore about it. I learned a few things too; here is our interview.
Q: In doing some research, it seems that some of the hoopla surrounding Heartbleed came from the fact that Cloudflare announced they had fixed it, but only for their customers. Is that correct?
A: No, not at all. Once the existence of the bug was disclosed, the fix was absolutely trivial to anybody with technical knowledge because the code in question was Open Source. Anybody who wanted to fix it could very easily do so. This is very different from the later bug in Microsoft software that, even though the details were well known, only Microsoft could fix because the bug was in proprietary code that only Microsoft could change.
Q: The bug as I understand it makes it possible to read not only the contents of a secured message, like credit-card transactions over HTTPS, but also the primary and secondary SSL keys themselves. This data could then, in theory, be used as a skeleton key to bypass secure servers without leaving a trace that a site had been hacked. Have I missed anything?
A: That's not an accurate description of the bug. Heartbleed allowed an attacker to cause an affected server to send back the contents of large amounts of unallocated memory. That data could be anything, from complete junk to a list of usernames and passwords. The problem for an attacker trying to exploit the bug is that there's absolutely no way to know what it was that he or she got. So to be clear, it's not the transaction itself that can be read, but the possible remnants of a transaction.
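For the technically curious, here is a minimal C sketch of the pattern behind the bug. This is not the actual OpenSSL code; the names and structure here are hypothetical, but the mistake is the same one: the reply is sized from a length field the peer supplies, never checked against how many bytes actually arrived.

    #include <string.h>

    /* Hypothetical, simplified heartbeat record: the peer claims a
     * payload length, then supplies the payload bytes. */
    struct heartbeat {
        unsigned short claimed_len;
        unsigned char  payload[65535];
    };

    /* VULNERABLE: trusts claimed_len, so whatever happens to sit in
     * memory past the real payload gets copied into the reply. */
    size_t build_reply(const struct heartbeat *req, size_t bytes_received,
                       unsigned char *reply)
    {
        (void)bytes_received;            /* never consulted -- the bug */
        memcpy(reply, req->payload, req->claimed_len);
        return req->claimed_len;
    }

    /* FIXED: drop any request whose claimed length exceeds the bytes
     * that actually arrived on the wire. */
    size_t build_reply_fixed(const struct heartbeat *req, size_t bytes_received,
                             unsigned char *reply)
    {
        size_t header = sizeof req->claimed_len;
        if (bytes_received < header ||
            req->claimed_len > bytes_received - header)
            return 0;                    /* silently discard */
        memcpy(reply, req->payload, req->claimed_len);
        return req->claimed_len;
    }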
Are you referring to the private key for the certificate? If so, it was originally believed that there was no way to obtain that key with this bug. That's true in the narrow sense that the key itself is never sent, but you can obtain one of the primes from which the key was generated and, from that, trivially compute the key itself.
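To see why leaking one prime is enough, here is a sketch using the GMP bignum library, with textbook toy values (a real modulus is 1024 bits or more, but the arithmetic is identical; compile with cc and -lgmp):

    #include <stdio.h>
    #include <gmp.h>

    /* Given the public modulus n, public exponent e, and one leaked
     * prime p, recover the RSA private exponent d. Toy textbook
     * values: n = 3233 = 61 * 53, e = 17. */
    int main(void)
    {
        mpz_t n, e, p, q, p1, q1, phi, d;
        mpz_inits(n, e, p, q, p1, q1, phi, d, NULL);

        mpz_set_ui(n, 3233);
        mpz_set_ui(e, 17);
        mpz_set_ui(p, 61);        /* the prime the attacker fished out */

        mpz_divexact(q, n, p);    /* q = n / p: one exact division */
        mpz_sub_ui(p1, p, 1);
        mpz_sub_ui(q1, q, 1);
        mpz_mul(phi, p1, q1);     /* phi(n) = (p - 1)(q - 1) */
        mpz_invert(d, e, phi);    /* d = e^-1 mod phi(n) */

        gmp_printf("recovered private exponent d = %Zd\n", d);  /* 2753 */

        mpz_clears(n, e, p, q, p1, q1, phi, d, NULL);
        return 0;
    }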
But getting the private key is not, in any way, a "skeleton key". All it means is that *if* (and that's a *big* "if") you can intercept traffic between a user and the relevant site, you could decrypt it. But getting that encrypted traffic is far from trivial: for example, you would have to be lucky enough to be eavesdropping on an unsecured WiFi network where some user just happened to be accessing a web site whose private key you had obtained. If you're the NSA, which can collect lots of traffic, it's a different thing, of course, but that's not something the average attacker can do.
Q: Our own Carla Schroder put together a short list of testers for the bug (http://lxer.com/module/newswire/view/200791/index.html). What do you think the dangers were, or still are, for the average person on the Internet?
A: The biggest danger is that a lot of major sites requested that all users change their passwords. Many people in that situation may then choose insecure passwords, use the same password for multiple sites, or be forced to store passwords in an insecure place in order to remember them. All of those weaken the user's security. The chances that the bug directly affected anyone are astronomically small.
The reason this bug is so low-risk as a practical matter is that most bugs are of the form where, if an attacker does X, he'll accomplish Y. Y might be gaining root access to a machine, obtaining some password, or similar. With Heartbleed, if you do X, you have no idea what you'll get. You may get a reply that contains a password, but you have no way of knowing that it's a password or what account it's for, unless you're lucky enough to get both an account name and a password and are able to recognize them in the returned data. That makes this bug very difficult to exploit "in the wild". Note that the few known exploits were very narrow.
On the other hand, the very *bad* thing about this bug is that there's simply no way to know for sure what somebody got, if anything, because there's no log that will tell you. So, even though you know that there's less than a one-in-a-billion chance an attacker got anybody's password, in this litigious era you have to tell all of your users to change their passwords. Even though the chances that anybody got a private key are even smaller, you still may feel the need to revoke your certificate and get a new one, which put a tremendous strain on the PKI, since it was never designed to handle revocation on such a large scale.
Q: Some reports suggested that if OpenSSL had used the GPL for its license instead of the OSI license, the whole thing could have been avoided through better peer review.
A: No, that's complete nonsense! Anybody who wants to can look at any Open Source software, no matter which particular license it uses. I've had occasion to look at OpenSSL in the past (unrelated to security issues) and it didn't matter to me what the license was. The development policies of software (in terms of review process) are almost completely unrelated to the particular license chosen, as long as it's an Open Source license. (I say "almost" because there are some people who have philosophical issues with some particular license, but, in my opinion, most of those people are more interested in philosophy than coding, so they don't really affect the number of peers available to review something.)
Q: Do you think that the forks of the OpenSSL project, LibreSSL and Google's BoringSSL, will make the difference their creators envision, or just spread the development thinner than it needs to be?
A: Most likely both, to some extent, in the short term. There have historically been a lot of forks of Open Source projects for reasons such as this: various groups feel they have a better idea, either technically or in terms of development process, or both. Most often, either one of those forks "wins" and the others die out, or a re-merge occurs, or some combination of the two. After all, that sort of process is how OpenSSL itself got started.
Q: Do you think that too much, or not enough has been made of the issue in general?
A: Both, in different ways. Too much, in that the chances that this bug actually resulted in any real theft of information are far smaller (by many orders of magnitude) than most coverage portrayed. But not enough, in that there's been little focus on the lessons learned from this bug, of which there are two:
(1) It was the fact that this was Open Source that allowed the bug to be discovered and fixed so quickly. For similar bugs in proprietary software, we're completely at the mercy of the company that developed the software.
For example, there was a time when it was believed that Microsoft would not fix a serious security bug in Windows XP because they'd recently stopped maintaining it. This bug points out the importance of using Open Source software in any context where security matters, and not enough has been made of that issue.
(2) This is now the second serious recent security issue (the first being the notorious bad "goto" in Apple's SSL code) where even the most rudimentary use of static analysis tools would have prevented the bug. There have been articles pointing this out, but they haven't received sufficient attention, in my opinion. This also points to the desirability of using formal methods on security software to be able to mathematically prove that it has the desired security properties (it's important to note that this does *not* mean it has no bugs: it just means it has no bugs of certain types) and, again, too little has been said about this, in my opinion.
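As a reminder of how simple that Apple bug was, here is a condensed sketch of the pattern (not the verbatim Apple source; the helper functions are hypothetical stand-ins). The duplicated goto is unconditional, so the final check becomes dead code, which is exactly what an unreachable-code warning (e.g. clang's -Wunreachable-code) or any basic static analyzer flags:

    #include <stdio.h>

    /* Hypothetical stand-ins for the real hashing/verification steps. */
    static int hash_update(void)  { return 0; }
    static int final_verify(void) { return 1; /* signature is BAD */ }

    static int verify_signed_params(void)
    {
        int err;

        if ((err = hash_update()) != 0)
            goto fail;
            goto fail;  /* duplicated line: always taken, with err == 0 */

        /* Dead code from here on: the bad signature is never checked. */
        if ((err = final_verify()) != 0)
            goto fail;

    fail:
        return err;     /* returns 0 ("verified") despite the bad signature */
    }

    int main(void)
    {
        printf("verify result: %d\n", verify_signed_params()); /* prints 0 */
        return 0;
    }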