A couple of weeks ago I wrote an article about some issues with the Internet’s Public Key Infrastructure. In particular, I was looking at what happens if you want to “unsay” a public key certificate and proclaim to the rest of the Internet that henceforth this certificate should no longer be trusted. In other words, I was looking at approaches to certificate revocation. Revocation is challenging in many respects, not least because some browsers and platforms simply do not check the revocation status of a certificate at all, which makes the resulting trust in public key certificates uncomfortably unconditional.

I’ve had a number of conversations on this topic since posting that article, and I thought I would collect my own opinions of how we managed to create this rather odd situation where a system designed to instil trust and integrity in the digital environment has evidently failed in that endeavour.

I should admit at the outset that I have a pretty low opinion of the webPKI, where all of us are essentially forced to trust what is, for me, a festering mess of inconsistent behaviours and poor operational practices that fail to provide even the palliative veneer of universal trust, let alone a robust, secure and trustable framework.

You’ve been warned: this is a strongly opinionated opinion piece!


We need a secure and trustable infrastructure. We need to be able to provide assurance that the service we are contacting is genuine, that the transaction is secured from eavesdroppers and that we leave no useful traces behind us. Why has our public key certificate system failed the Internet so badly?

Is cryptography letting us down?

It doesn’t appear to be the case. The underpinnings of public/private key cryptography are relatively robust, provided, of course, that we choose key lengths and algorithms that are computationally infeasible to break.

This form of cryptography is a feat worthy of any magical trick: the algorithm is published, one of the two keys is published, and an attacker can even be given material that was encrypted with the associated private key, yet computing that private key remains practically infeasible. It’s not that the task is theoretically impossible, but it is intended to be practically impossible: the effort of exhaustively checking every possible candidate value is intentionally impractical with today’s compute power, and even with the compute power we can envisage in the coming years.
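The asymmetry described above can be seen in a toy RSA sketch. The primes below are deliberately tiny so the arithmetic is visible; with real keys the modulus is 2048 bits or more, and recovering the private exponent would require factoring it, which is the practically infeasible step.

```python
# Toy RSA, purely illustrative: n and e are the published public key,
# yet deriving the private exponent d requires knowing phi, which in
# turn requires factoring n back into p and q.
p, q = 61, 53              # secret primes (tiny here; enormous in practice)
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120, computable only if you can factor n
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: 2753 (Python 3.8+ modular inverse)

message = 65
ciphertext = pow(message, e, n)     # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)   # only the private key holder can decrypt
assert recovered == message
```

Publishing `n`, `e`, and even many ciphertexts gives an attacker nothing better than the factoring problem, which is exactly the “magical trick” the prose describes.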

This bar of impracticality is getting higher because of continually increasing computational capability, and with the looming prospect of quantum computing. It’s already a four-year-old document, but the US NSA report published in January 2016 (NSA Suite and Quantum Computing FAQ) proposes that a secure system with an anticipated 20-year secure lifetime should use RSA with key lengths of 3,072 bits or larger, and Elliptic Curve Cryptography using ECDSA with NIST P-384.

Let’s assume that we can keep ahead of this escalation in computing capability and continue to ensure that in our crypto systems the task of the attacker is orders of magnitude harder than the task of the user. So it’s not the crypto itself that is failing us. Cryptography is the foundation of this secure framework, but it also relies on many other components.

It’s these other related aspects of the PKI infrastructure that are experiencing problems and issues. Here are a few:

  • We are often incapable of keeping a secret. Anyone who learns your private key can impersonate you and nobody else can tell the difference.
  • The relationship or authority that a public key certificate is supposed to attest might be subverted and the wrong party might be certified by a certification authority. And we’ve seen instances where trusted Certification Authorities have been hacked, compromised or coerced to issue certificates to the wrong party under false pretences.
  • The architecture of the distributed trust system used by the Internet’s PKI makes the system itself only as trustable as the worst performing Certification Authority (CA). It doesn’t matter how good your CA might be, if every user can be duped by a falsely issued certificate from a corrupted CA, then the damage has been done.
  • For many years these domain name certificates were orders of magnitude more expensive than domain name registrations, yet the system was continually undermined by poor operational practices with a result that these expensive instruments of trust were in fact untrustable.
  • Trust Anchors are distributed “blind” by browser vendors, and they are axioms of trust: anything certified by a trust anchor is automatically valid. When a CA fails to maintain robust procedures and issues certificates under compromised conditions, we as users and end consumers of the security framework are exposed without any conscious buy-in: it just happens as a function of the existence of that CA’s trust anchor in our system software. We either have to be experts to know how to flush these out, or rely on others to help us update.
  • Not all trusted CAs operate as diligently as our implicit trust in each and every one of their actions requires of them. There was the episode of fake certificates for several Google domains being issued by an Egyptian intermediary using CNNIC as the trusted CA in order to eavesdrop on Egyptian citizens. There was the issue of Symantec issuing certificates that had never been requested. And of course there was the hacking of DigiNotar’s CA back in 2011.

Each time these incidents occur we castigate the errant CA. Sometimes we eject them from the trusted CA set, in the belief that these actions will fully restore our collective trust in this obviously corrupted framework. But there is trust and there is credulity. We’ve all been herded into the credulity pen.

We’ve seen two styles of response to these structural problems with the Internet’s PKI. One is to try and fix these problems while leaving the basic design of the system in place. The other is to run away and try something completely different.

Let’s Fix this Mess!

The fix crew have come up with many ideas over the years. Much of the work has concerned CA ‘pinning’. The problem is that the client does not know which particular CA issued the authentic certificate. If any of the other trusted CAs has been coerced or fooled into issuing a false certificate, then the user would be none the wiser when presented with this fake certificate. A trusted CA has issued this certificate: good enough, so let’s proceed! With around one hundred generally trusted CAs out there, this represents an uncomfortably large attack surface. You don’t have to knock them all off to launch an attack. Just one. Any one. This vulnerability has proved to be a tough problem to solve in a robust manner.

The PKI structure we use requires us to implicitly trust each CA’s actions all of the time, for every CA in the trust collection. That’s a lot of trust, and as we’ve already noted, that trust is violated on a seemingly regular basis. So perhaps what we would like to do is refine this trust. What the fixers want is to allow the certificate subject to state, in a secure manner, which CA has certified them. That way an attacker who successfully subverts a CA can only forge certificates that appear to have been issued by that subverted CA. Obviously this doesn’t solve the problem of errant CAs, but it limits the scope of damage from everyone to a smaller subset. This approach is termed pinning. The various pinning solutions proposed so far rely on an initial leap of faith in the form of “trust on first use”.
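The “trust on first use” mechanic can be sketched in a few lines. This is a minimal illustration, not any browser’s actual implementation: the pin store and the certificate bytes are stand-ins, and a real client would record a fingerprint of the DER certificate (or its public key) taken from the TLS handshake.

```python
import hashlib

# hostname -> SHA-256 fingerprint recorded on first contact
pin_store = {}

def check_pin(hostname: str, cert_der: bytes) -> bool:
    """Accept any certificate on first use; afterwards require the same one."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    if hostname not in pin_store:
        pin_store[hostname] = fingerprint   # the initial leap of faith
        return True
    return pin_store[hostname] == fingerprint

# First contact: the pin is recorded and the connection is accepted.
assert check_pin("example.com", b"original-certificate")
# The same certificate later: still accepted.
assert check_pin("example.com", b"original-certificate")
# A different certificate, e.g. forged via some other trusted CA: rejected.
assert not check_pin("example.com", b"forged-certificate")
```

The sketch makes the trade-off plain: after the first contact, a forged certificate from *any* CA is rejected, but nothing protects the client if the very first connection is the one being attacked.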