Monday, 21 April 2014

YubiKey Side Channel Attacks?

Overview

Some time ago, I wrote about the YubiKey, and I've also mentioned how the YubiKey is vulnerable to a simple MitM attack: simply phish the user. This is nothing complicated, and it's hard to defend against this sort of thing.

Given the paper "Cache-timing attacks on AES" by Bernstein (2005) and, more recently, "Fine grain Cross-VM Attacks on Xen and VMware are possible!" by Apecechea et al. (2014), we now have to begin asking the question: what threats does this pose beyond OpenSSL, PolarSSL and libgcrypt?

This is not an actual attack by any means; it is speculation about what might be an interesting and fruitful avenue of research.

YubiKey

As described in my previous post, the YubiKey's OTP mechanism is relatively simple: take one AES block (128 bits) containing a counter and a secret Id, encrypt it (inside the YubiKey "dongle") with a secret key shared between the dongle and the server, and have the dongle "type" it into the host machine to be sent to the server for checking.

The server decrypts the OTP it receives and checks the various internal values, e.g. the secret Id and the counter. If all's well, access is granted.
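To make the moving parts concrete, here is a rough sketch of the plaintext block and the server-side checks. The field names and sizes are from my memory of Yubico's documentation, so treat them as illustrative rather than authoritative.

/* Rough sketch of the 16-byte plaintext a YubiKey encrypts into an OTP.
 * Field names and sizes are illustrative, not authoritative. */
#include <stdint.h>

struct yubikey_otp_plain {
    uint8_t  private_id[6];   /* the secret Id shared with the server   */
    uint16_t use_counter;     /* non-volatile, bumped on power-up       */
    uint8_t  timestamp[3];    /* free-running timer since power-up      */
    uint8_t  session_counter; /* bumped for each OTP within a session   */
    uint16_t rnd;             /* a dash of randomness                   */
    uint16_t crc;             /* CRC16 over the preceding bytes         */
};

/* Server side, conceptually:
 *   1. AES-128-decrypt the received 16-byte block with the shared key.
 *   2. Check the CRC and the private Id.
 *   3. Check the counters have strictly increased since the last
 *      accepted OTP (this is what makes it "one-time").
 */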

The Attack

Apecechea's attack relies on co-locating a VM with the target, profiling the hardware architecture, then sending chosen plaintexts to the target to be encrypted under an unknown key.

I am wondering whether, given a VM which verifies YubiKey OTPs, one could co-locate a couple of malicious VMs, profile the server hardware, then send lots of OTPs to the target VM to be validated (i.e. decrypted).

If this exposed the YubiKey's secret key, one could then decrypt any previous OTP from that YubiKey, extract the secret Id and other values, and simply forge valid OTPs.

Mitigations

AES-NI

According to Apecechea's paper, AES-NI aware implementations were much more difficult to attack. This could well be a saving grace in many situations, not just the YubiKey situation.
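As an aside, checking whether the CPU even advertises AES-NI is easy enough; whether your crypto library actually uses it is a separate question. A minimal sketch for x86 with GCC/Clang:

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1, ECX bit 25 is the AES-NI flag. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 25)))
        puts("AES-NI advertised");
    else
        puts("AES-NI not advertised (or not exposed to this guest)");
    return 0;
}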

YubiKey HSM

I am not deeply familiar with the YubiKey hardware security module (HSM). I know that it has undergone review, but I don't think that review extended to side-channel or physical attacks.

If the HSM is a constant-time verifier of YubiKey OTPs, then this is a winner: any organisation using a YubiKey HSM is automatically protected.

Append a MAC

Appending a message authentication code to the OTP (though I'm going to guess that AES-CMAC is a bad idea here) would allow forgeries to be rejected outright, without any decryption of the OTP.
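A minimal sketch of the verification order I mean, using OpenSSL's HMAC() and CRYPTO_memcmp(); the key names, lengths and tag size are made up for illustration:

#include <stddef.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/crypto.h>

/* Returns 1 only if the transmitted tag matches HMAC-SHA256 over the OTP.
 * Forgeries are rejected here, before any AES decryption is attempted. */
int otp_mac_ok(const unsigned char *otp, size_t otp_len,
               const unsigned char tag[32],
               const unsigned char *mac_key, int mac_key_len)
{
    unsigned char expect[EVP_MAX_MD_SIZE];
    unsigned int expect_len = 0;

    if (!HMAC(EVP_sha256(), mac_key, mac_key_len,
              otp, otp_len, expect, &expect_len))
        return 0;

    /* Constant-time comparison, so the check itself leaks nothing. */
    return expect_len == 32 && CRYPTO_memcmp(expect, tag, 32) == 0;
}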

This does not stop attackers from trying to collect as many OTPs with valid MACs as possible and using them as part of their attack, but it would take much longer, since the paper calls for 2^26 to 2^28 attempts.

It would also make the typed OTP much longer, which means more "typing" for the device and more complexity in it. The newer YubiKeys have OATH capabilities, so they should already have HMAC hardware, which may make this more feasible.

Using a MAC may obviate the need for an OTP anyways, so what the hey.

Rate Limiting

Simply limiting the number of attempts you allow per second, as a general rule, would alleviate the problem, as the attacker does need a fairly large number of attempts (though by no means a prohibitive one). Making each OTP verification take 1 second would make the attack take 2^26 seconds (roughly 2 years) per YubiKey the attacker wished to break.
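Something as dumb as the following would do; a minimal, in-memory, single-process sketch, where the key slot is assumed to have been looked up from the OTP's public id elsewhere:

#include <time.h>

#define MAX_KEYS 1024

static time_t last_attempt[MAX_KEYS];

/* Allow at most one verification attempt per key per second.
 * Returns 1 if the attempt may proceed, 0 if the caller should back off. */
int allow_attempt(int key_slot)
{
    time_t now = time(NULL);

    if (key_slot < 0 || key_slot >= MAX_KEYS)
        return 0;
    if (now - last_attempt[key_slot] < 1)
        return 0;
    last_attempt[key_slot] = now;
    return 1;
}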

This obviously is not a great way to do things, but it's better than nothing, and combined with suitable key rotation (say, once a year), you'd be quite safe. But remember: attacks only get better.

Hardened AES

This would be super, but it would seem that new side-channel attacks on AES just won't stop. Be they power, timing, or cache-based, we're not fighting a winning battle here.

In much the same way as patching OpenSSL against BEAST led to Lucky 13, I worry that we're going to end up chasing our tail with this one, since the performance trade-off is so great.

Constant Time Encryption

Something like Salsa20 should be constant-time when implemented correctly; it is designed to have no such leaks.

Salsa20 is also more amenable to an embedded environment than AES, and offers similar security guarantees.

If I were the YubiKey folks, this is where I'd be focusing my attention and looking to migrate, assuming I wanted to keep the current usage pattern of the YubiKey (plug it in, have it "type" a password). Salsa20 is a faster, simpler algorithm; it is designed to be strong against side-channel attacks; and the reference implementation is public domain and makes no use of malloc, so you can get it into an embedded system easily.
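As a sketch of what I mean, using the NaCl/libsodium-style API (crypto_stream_salsa20_xor); the nonce layout is my own invention, not anything Yubico does:

#include <sodium.h>

/* "Encrypting" the 16-byte OTP block is an XOR against Salsa20 keystream,
 * and the keystream generation is constant time by design. The nonce must
 * never repeat under the same key, so something like the use counter would
 * have to go into it. */
int make_otp(unsigned char out[16],
             const unsigned char plain[16],   /* counters, private id, CRC */
             const unsigned char nonce[crypto_stream_salsa20_NONCEBYTES],
             const unsigned char key[crypto_stream_salsa20_KEYBYTES])
{
    return crypto_stream_salsa20_xor(out, plain, 16, nonce, key);
}

One caveat: a stream cipher is malleable in a way an AES block isn't, so the format's internal CRC would probably want upgrading to a real MAC, and nonce reuse has to be ruled out. Solvable problems, but problems nonetheless.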

Seriously, Salsa20 is where it's at.

Conclusion

Looking at the YubiKey, since verification makes use of AES, it makes sense to ask whether it is safe against Bernstein's cache-timing attacks, and, on the assumption that it is not, what measures can be taken.

Overall, I would say that a good implementation of rate limiting should be used now, whilst looking to migrate away from AES to a constant-time cipher; my strong preference would be Salsa20.


Tuesday, 15 April 2014

A New Spectator Sport

You know when you see firefighters at a car crash, trying to rescue someone? Yeah, the folks at OpenBSD are undertaking something similar right now.

Highlights 2014-04-14 13:45

The next gotofail?


"correct cases of code occuring directly after goto/break/returnok miod@ guenther@"

I have no indication that this would lead to a genuine vulnerability, but Christ, the code smell.

The Entropy is 2014-04-14 13:43:13 



"So the OpenSSL codebase does "get the time, add it as a random seed"in a bunch of places inside the TLS engine, to try to keep entropy high.I wonder if their moto is "If you can't solve a problem, at least tryto do it badly".ok miod"

Apparently OpenSSL is adding the time to its entropy pool? But attackers can easily guess the time that your server thinks it is, so is this actually breaking things, or is the entropy pool sufficiently resistant to these kinds of boo-boos?
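For the curious, the pattern being complained about looks roughly like this; illustrative only, not the actual OpenSSL code:

#include <time.h>
#include <openssl/rand.h>

static void sprinkle_time(void)
{
    time_t t = time(NULL);

    /* The last argument is the claimed entropy estimate. Mixing in a
     * guessable value shouldn't hurt a well-designed pool, but it does
     * nothing useful either, and crediting it as entropy certainly would. */
    RAND_add(&t, sizeof(t), 0.0);
}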

Highlights 2014-04-15 12:04

FOR GLORY

"Send the rotIBM stream cipher (ebcdic) to Valhalla to party for eternity with the bearded ones... some API's that nobody should be using will dissapear with this commit."

I love this guy's sense of humor. I didn't even know there was a "rotIBM" stream cipher. I'm sure it must've undergone a rigorous cryptanalysis, and IBM have never gotten in bed with nefarious government agencies. Oh, and pigs fly.

Header

"strncpy(d, s, strlen(s)) is a special kind of stupid. even when it's right,it looks wrong. replace with auditable code and eliminate many strlen calls to improve efficiency. (wait, did somebody say FASTER?) ok beck"


I think the commit message says it all.
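For anyone wondering why that pattern is a special kind of stupid: the bound comes from the source, not the destination, so it provides zero overflow protection, and strncpy only NUL-terminates when it hits the bound early, so the copy ends up unterminated. A quick sketch (strlcpy is the OpenBSD idiom; on Linux it needs libbsd):

#include <string.h>

void bad(char *d, const char *s)
{
    /* Copies exactly strlen(s) bytes and never appends a '\0':
     * it is memcpy with extra steps, and d may be left unterminated. */
    strncpy(d, s, strlen(s));
}

void better(char *d, size_t dsize, const char *s)
{
    /* Bounded by the destination and always NUL-terminated when dsize > 0. */
    strlcpy(d, s, dsize);
}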

 

Wednesday, 9 April 2014

My Heart Bleeds for OpenSSL

Overview

Today has been eventful, to say the least. My colleague and I patched several services, re-issued keys for them, and started the ball rolling on revocation. It took all sodding day.

The progenitor of all this has provoked a lot of hubris and a lot of internet opinions.

Internet Opinions

Basically, a lot of people have said a lot of things, and I'd like to address one or two of them in something more than a pithy tweet.

Dump OpenSSL

I have seen this almost everywhere I've looked: the call to arms to fling out OpenSSL, or plain rewrite it. I get it; I understand that OpenSSL is not the healthiest code base. I often use it as an example of an extremely unhealthy one.

However, just because your friend is ill doesn't mean you take them out behind the chemical sheds and shoot them.

OpenSSL has a long and "rich" history. In fact, it's got so much history, that some of it has become useful! Rewriting is almost always a mistake.

Depending on how deep we go, we could end up throwing everything away: the CA infrastructure, the cipher suite flexibility, and the vast quantities of knowledge that surround the OpenSSL ecosystem and allow thousands of developers to work more securely than would otherwise be possible.

I want to cover the cipher suite flexibility quickly, just to point out how much of a fantastic idea it is. The client and server tell each other which cipher suites they prefer, then agree on the strongest one they both support.

This has massive benefits. For instance, when BEAST et al. came along and effectively broke the CBC-based cipher suites for a while, we could jump to RC4 for a bit. When that was broken, we all hauled ass back to AES. This gives incredible flexibility when dealing with many attacks, simply by being able to route around the affected parts.
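In OpenSSL terms, that operational flexibility is often just a one-line cipher string change rather than a code change; the strings below are examples, not recommendations:

#include <openssl/ssl.h>

/* circa 2011: push RC4 ahead of the CBC suites while BEAST was hot */
void prefer_rc4(SSL_CTX *ctx)
{
    SSL_CTX_set_cipher_list(ctx, "RC4-SHA:HIGH:!aNULL:!MD5");
}

/* circa 2013: drop RC4 entirely and lean on AES again */
void drop_rc4(SSL_CTX *ctx)
{
    SSL_CTX_set_cipher_list(ctx, "HIGH:!RC4:!aNULL:!MD5");
}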

Secondly, last I checked, OpenSSL still has support for DES and RC4, which allows for graceful handling of legacy clients. This is very important when you have a very large, heterogeneous installed base.

If we lost cipher suite flexibility, we'd open ourselves up to so much more Bad Stuff.

The CA infrastructure I'm less happy with, but that's a post for another day; frankly, I'd be surprised if anyone seriously improved on it for the average Joe in the next 2-3 years. I've not looked into it much, but NameCoin, or a NameCoin-like protocol, looks promising from afar.

Then there's suddenly retraining literally tens of thousands of developers, system administrators, and so on. I'm sure that will be a cheap, quick and widely accepted move. It could happen, but OpenSSL is quite entrenched; I would see the move happening over several years, with whatever "comes next" supporting developers as they move from one platform to the other. In fact, it could be TLS 1.3 (I think 1.2 is the latest at the time of writing; correct me in the comments below if I'm wrong) as implemented in *drum roll* OpenSSL.

Throw Money at the Problem

I do not think OpenSSL's developers are monkeys. If you look at the project, it's effectively dying in a lot of respects. It needs more than money: it needs a huge injection of resources, people, contributions and, more than anything else, enthusiasm.

It's kinda sad that one of the most critical pieces of software in most people's lives these days has about as much appeal to developers as a punch right in the face.

This is mostly because the OpenSSL project is much more organic than most other projects out there -- it's grown.

I have looked at the OpenSSL code, I've tried to ... do ... things to it. It's a big and intimidating code base, and I really did not want to break anything. So much so that I abandoned my idea. I looked into the abyss, and the abyss looked into me.

An injection of cash and some good C developers working on the project as their day job would make a huge dent in this. Their first job could be making the code amenable to testing, and testing the result. Their second could be adding newer, safer cipher suites, and so on. But beyond that, OpenSSL is not Java or C#. There's no benevolent banker running the show; there is only the dedication and hard work of the volunteers.

Once the project has had a good cleanup (and probably a few bug fixes!), an audit of some sort would definitely be nice, but I think it would be of limited value. OpenSSL is a constantly evolving project. It is not like the code that runs a jet engine, written once and run on well-known hardware; it's changing almost daily.

The only way to get that level of assurance would be to build an automated verification suite, or something similar. I'm looking towards Frama-C, though something with more power may be required. That way, the assurances would be quite hard to break, in much the same way that introducing regression bugs is made harder by decent unit testing.
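To give a flavour of what I mean by automated verification, here is a toy function with a Frama-C/ACSL contract; it has nothing to do with OpenSSL's actual code, it just shows the shape of the thing:

#include <stddef.h>

/*@ requires n > 0;
    requires \valid_read(a + (0 .. n-1));
    assigns  \nothing;
    ensures  \forall integer i; 0 <= i < n ==> \result >= a[i];
*/
int max_of(const int *a, size_t n)
{
    int m = a[0];
    size_t i;

    /*@ loop invariant 1 <= i <= n;
        loop invariant \forall integer j; 0 <= j < i ==> m >= a[j];
        loop assigns i, m;
    */
    for (i = 1; i < n; i++)
        if (a[i] > m)
            m = a[i];
    return m;
}

The point is that a tool can then discharge those annotations mechanically, which is exactly the kind of regression-proofing I'm after.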

Conclusion

I feel for the OpenSSL developers, all two of them, I really do. They have come under a huge amount of fire recently. I would not be surprised if they just downed tools tomorrow and went to live in a cabin in the woods, out of sheer frustration and upset. When their code works, they get no thanks; when it breaks, they never hear the end of it.

Unfortunately, for some projects it just becomes time to stop adding features, go and address the platform problems, and run a comb through the code to straighten things out. This is often made harder by the lack of a large test suite.

We did a similar thing with our product at my day job recently. We stopped work on features for nearly 2 years to drag the system out of the dark ages, add in a real security system, upgrade the underlying technology, really improve our test suite, etc. It's painful, slow and incredibly frustrating at the time. But when you're finished, continuing to work with your product is so much easier. It stops the "Let's rewrite!" urge quite well.

Edit, 2014-04-09

There is some discussion over on Hacker News (Hello!). I'd like to address a couple of the points.

SSL/TLS /= OpenSSL

I completely agree. The reason this comes across as a point that needs to be made is that, in writing the post in the middle of the night, immediately before going to bed and without hashing the ideas out properly, I managed to put issues that are solely OpenSSL's fault right next to issues that are the spec's fault.

The reason I talk about the potential for losing things like cipher suite flexibility is that a new library would likely implement something AES- and RSA-based with SHA-2 or SHA-3, and leave the rest for a rainy day. We need more cipher suites at our fingertips than that (and good, safe defaults).

Beyond that, unfortunately, attacks like Lucky Thirteen were basically earlier attacks revived, using timing information to extract data. This sort of attack is only possible because of the MAC-then-Encrypt construction that SSL/TLS uses, which is the spec's fault; the fact that the timing leak existed is an issue with OpenSSL. I know the bug is effectively intractable, but I'm also aware that OpenSSL has fixed it to a satisfactory level.

You can't spell

I ager.

Seriously though, while I do make an effort, it was the middle of the night and for some reason my spell checker was not red-lining. Opening up the post this morning gave me an opportunity to go through and fix these issues ;-)

OpenSSL isn't formally verified!?

No, neither is any part of your browser, your kernel, your hardware, the image rendering libraries that your browser uses, the web servers you talk to, or basically any other part of the system you use.

The closest to formally verified in your day-to-day life that you're going to get may well be the brakes on your car, or the control systems on a jet engine. I shit you not.

We live on fragile hopes and dreams.

Monoculture Leads to Mass Vulnerability

I think that this is a real problem. However, this is crypto code. I know, I know, it's still a systems problem in lots of ways, but I can't avoid the feeling of "don't roll your own."

Asking normal developers to choose between several competing libraries with different interfaces, vulnerabilities, performance characteristics, etc. would likely result in carnage. We know that developers, when asked to cobble together cipher primitives themselves, generally make a mess of it; why would they be in a position to make those kinds of decisions on a larger scale?

I think our best bet is to ensure that OpenSSL is a rock-solid piece of software, with things like automated verification, extensive testing and independent audits. Unfortunately, that's extremely expensive (though don't forget, rewriting is likely more expensive).

Blogger is Awful

Yes. I am always surprised that my blog doesn't render in a browser with no JS.