There exist many implementations of TLS. Unfortunately, alongside the proliferation of implementations, there is a proliferation of serious defects. These defects aren't limited to the design of the protocol; they are endemic: the cipher choices, the ways in which the primitives are combined, everything.
I want to lay out a roadmap to a situation where we can have an implementation of TLS that we can be confident in. Given the amount of commerce currently conducted over TLS, and the fact that other, sometimes life-and-death, projects pull in TLS libraries (don't roll your own!) and place their trust in them, I'd say that this is a noble goal.
Let's imagine what would make for a "strong" TLS 1.3, and an implementation to match.
We start with primitive cipher and MAC selection. We must ensure that well-studied ciphers are chosen. No export ciphers, no ciphers from The Past (3DES), and no ciphers with known major weaknesses (RC4). Key sizes must be at least 128 bits. If we're looking to defend against quantum computers, key sizes must be at least 256 bits, since Grover's algorithm effectively halves the strength of a symmetric key. A similar line of reasoning should be applied to the asymmetric and authentication primitives.
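In code, this "allow-list, never block-list" attitude might look something like the following sketch (the suite identifiers and names are purely illustrative):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative allow-list: only suites that have been positively vetted
     * appear here; anything not on the list is rejected by default. */
    static const uint16_t allowed_suites[] = {
        0x1301, /* e.g. an AES-128-GCM suite */
        0x1302, /* e.g. an AES-256-GCM suite */
        0x1303, /* e.g. a ChaCha20-Poly1305 suite */
    };

    static bool suite_is_allowed(uint16_t suite)
    {
        for (size_t i = 0; i < sizeof allowed_suites / sizeof allowed_suites[0]; i++)
            if (allowed_suites[i] == suite)
                return true;
        return false;
    }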
Further, the specification should aim to be as small, simple, self-contained and obvious as possible to facilitate review. It should also shy away from constructions that programmers are known to have difficulty using correctly, for instance CBC mode.
Basic protocol selections should ensure that a known-good combination of primitives is used. For example, encrypt-then-MAC should be chosen. We head off chosen-ciphertext attacks by rejecting forged or modified ciphertexts before even attempting to decrypt them. This should be a guiding principle: avoid doing anything with data that is not authenticated.
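A minimal sketch of that ordering is below, assuming hypothetical hmac_sha256() and decrypt_record() helpers elsewhere in the library; the points of interest are the constant-time tag comparison and the fact that decryption is never reached for a forged record.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical helpers, assumed to exist elsewhere in the library. */
    void hmac_sha256(const uint8_t key[32], const uint8_t *msg, size_t len,
                     uint8_t out[32]);
    int  decrypt_record(const uint8_t key[16], const uint8_t *ct, size_t ct_len,
                        uint8_t *pt_out);

    /* Constant-time tag comparison: no early exit, no data-dependent branches. */
    static int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }

    /* Encrypt-then-MAC receive path: authenticate the ciphertext first, and
     * only hand it to the decryption machinery once the tag checks out. */
    static int record_receive(const uint8_t mac_key[32], const uint8_t enc_key[16],
                              const uint8_t *ct, size_t ct_len,
                              const uint8_t tag[32], uint8_t *pt_out)
    {
        uint8_t expected[32];
        hmac_sha256(mac_key, ct, ct_len, expected);
        if (!ct_equal(expected, tag, 32))
            return -1;               /* reject before any decryption happens */
        return decrypt_record(enc_key, ct, ct_len, pt_out);
    }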
The mandatory cipher suite in TLS 1.2 is TLS_RSA_WITH_AES_128_CBC_SHA. While this can be a strong cipher suite, it's not guaranteed. It's quite easy to balls-up AES-CBC, to introduce padding oracles, and the like. I would argue that the default should be based around AES-GCM, as this provides authenticated encryption without so much as lifting a finger. Even the choice of MAC in the current default is looking wobbly. HMAC-SHA1 is by no means gone, but people are migrating away from it to better MAC constructions. I would also recommend exploiting the parallelism of current-generation hardware by allowing a PMAC.
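For a flavour of what the AEAD route looks like, here is a rough sketch of one-shot AES-128-GCM sealing using OpenSSL's EVP interface; error handling is collapsed for brevity, so treat it as an illustration of "the tag comes with the encryption", not production code.

    #include <openssl/evp.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Seal pt under AES-128-GCM with a 12-byte IV; writes the ciphertext to ct
     * and a 16-byte authentication tag to tag. Returns the ciphertext length,
     * or -1 on error. */
    int gcm_seal(const uint8_t key[16], const uint8_t iv[12],
                 const uint8_t *aad, int aad_len,
                 const uint8_t *pt, int pt_len,
                 uint8_t *ct, uint8_t tag[16])
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len = 0, ct_len = 0, ok;
        if (!ctx)
            return -1;

        ok = EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv)
          && EVP_EncryptUpdate(ctx, NULL, &len, aad, aad_len)   /* additional authenticated data */
          && EVP_EncryptUpdate(ctx, ct, &ct_len, pt, pt_len)    /* encrypt the payload */
          && EVP_EncryptFinal_ex(ctx, ct + ct_len, &len)
          && EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag);

        EVP_CIPHER_CTX_free(ctx);
        return ok ? ct_len + len : -1;
    }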
There should also be allowances for, and maybe even a preference towards, well-studied ciphers designed to avoid side channels, such as Salsa20 and ChaCha20.
There should be no equivalent of the current "null" ciphers. What a terrible idea.
I like that the current RFC specifies the maximum record size. That means that the memory usage per connection can be bounded, and the potential impact of any denial-of-service attack better understood before deployment.
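As a sketch, that bound lets every connection carry a fixed-size buffer, so worst-case memory is just a multiple of the connection count (the expansion allowance below is illustrative; the exact figure depends on the protocol version and cipher suite).

    #include <stdint.h>

    #define MAX_RECORD_PLAINTEXT  (1u << 14)  /* 16 KiB record limit from the spec */
    #define MAX_RECORD_EXPANSION  2048u       /* illustrative bound: header, tag, padding */

    /* Per-connection state with a statically known worst-case footprint:
     * N connections cost at most N * sizeof(struct tls_connection). */
    struct tls_connection {
        uint8_t  record_buf[MAX_RECORD_PLAINTEXT + MAX_RECORD_EXPANSION];
        uint32_t record_len;
        /* ... handshake and cipher state ... */
    };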
For the implementation, I am not fussy. Upgrading an existing implementation is completely fine by me. However, it should be formally verified. SPARK Ada may be a good choice here, but I have nothing against ACSL and C. The latter "could" be applied to many existing projects, and so may be more applicable. There are also more C programmers than there are Ada programmers.
Personally, I think SPARK Ada would be a fantastic choice, but the license scares me. Unfortunately, ACSL has its own host of problems, primarily that the proof annotations are not "as good" as SPARK's, owing to C's much more relaxed language semantics.
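To give a flavour of the ACSL approach, here is a toy annotated function in the style you would feed to Frama-C's WP plugin: the contract states exactly which memory may be written and what must hold afterwards, and the tool generates the corresponding proof obligations.

    #include <stddef.h>
    #include <stdint.h>

    /*@ requires \valid(buf + (0 .. len - 1));
        assigns  buf[0 .. len - 1];
        ensures  \forall integer i; 0 <= i < len ==> buf[i] == 0;
     */
    void wipe(uint8_t *buf, size_t len)
    {
        /*@ loop invariant 0 <= i <= len;
            loop invariant \forall integer j; 0 <= j < i ==> buf[j] == 0;
            loop assigns   i, buf[0 .. len - 1];
            loop variant   len - i;
         */
        for (size_t i = 0; i < len; i++)
            buf[i] = 0;
    }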
Any implementation which is formally verified should be kept as short and readable as reasonably possible, to facilitate formal review. The task of any reviewers would be to determine whether any side channels exist and, if so, what remediation could be taken. Further, the reviewers should generally be checking that the right proofs have been proven, that is, that the specification of the program is in line with the "new" TLS specification.
Some responsibilities should be shirked where it makes good sense. For instance, randomness shouldn't be "hoped for" by seeding from the PID or the server boot time. Use the OS-provided randomness, and if performance is paramount, use a CSPRNG periodically reseeded from true randomness (on Linux, this is supposed to be provided by /dev/random).
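A minimal sketch of "just ask the OS", assuming a Linux recent enough to offer the getrandom(2) system call (exposed via <sys/random.h> in glibc 2.25 and later):

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/random.h>

    /* Fill buf with kernel-provided randomness. Returns 0 on success and -1 on
     * failure; a failure must be treated as fatal, never papered over with the
     * PID or the clock. */
    static int os_random(uint8_t *buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t n = getrandom(buf + got, len - got, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;        /* interrupted by a signal: retry */
                return -1;
            }
            got += (size_t)n;
        }
        return 0;
    }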
The implementation should, as far as possible, aim to meet the specification, as well as provide hard bounds on the time and memory usage of each operation. This should help with avoiding denial-of-service attacks.
Deployment would be the toughest part. One could suppose that a few extra cipher suites and a TLS 1.2 mode of operation could be added in to ease deployment, but this would seriously burden any reviewers and implementors. The existing libraries could implement the new TLS specification without too much hassle, or even go through their existing code-bases and try to move to a formally-verified model, then add in the new version. Once a sufficient number of servers started accepting the new TLS, a movement in earnest to a formally verified implementation could begin (unless this route was taken from the start by an existing library).
Any implementation that wants major uptake would need to ensure that it has a suitable license and get itself out there. Current projects could advertise their use of formal methods, and the safety this brings over other libraries, to try to win market share.
We had the opportunity to break backwards compatibility once and fix many of these issues. We did not, and we've been severely bitten for it. We should really go for it now, before the next Heartbleed. Then we can go back to the fun stuff, rather than forever looking over our shoulders for the next OpenSSL CVE.