Thursday, September 15, 2011

An untrustworthy CA: DigiNotar

In August 2010 I first wrote about certificate authorities, or CAs.
The only reason rogue CAs haven't flourished is that only a few CAs at the top of the certification chain matter. Verisign, for instance, will not risk its reputation by certifying anything other than the "real" Macy's. It also will decertify any intermediate CA (that is, any CA whose own identity is verified by Verisign) that, by certifying bogus identities, abuses the trust placed in it.

However, other CAs can and do operate at the top, and it's not clear which of them can be trusted. Some of them might have an incentive to issue bogus certifications, on behalf of criminal organizations, for instance. Others might just be sloppy.
In December 2010 I wrote another entry about CAs, specifically, untrustworthy ones.
In theory, a MITM [man-in-the-middle] attack should be exceedingly difficult against an SSL (or rather, TLS) connection. However, if you can subvert the CA so it's willing to issue fraudulent certificates for you, you have solved 90% of the MITM problem. Your CA can issue a certificate attesting that you are amazon.com, or Bill Gates, or whoever you need to be; the end user will verify that your CA "confirms" the certificate belongs to Amazon or Bill Gates or whomever. Neither the end user nor Amazon (or Bill Gates, or whoever) will detect your successful MITM attack.
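The single point of failure described in that quote can be illustrated with a toy trust model. This is not real X.509 (real certificates use asymmetric signatures, not MACs), and every CA name and key below is made up, but it captures the essential flaw: a browser accepts a certificate signed by *any* CA in its root store, so one subverted CA can vouch for any name.

```python
import hmac, hashlib

# Toy model: a CA "signs" a certificate by MACing the subject name
# with its secret key. All names and keys here are hypothetical.
TRUSTED_CA_KEYS = {                 # the browser's root store
    "HonestCA": b"honest-secret",
    "SloppyCA": b"sloppy-secret",   # compromised, but still trusted
}

def issue(ca_name, ca_key, subject):
    """A CA issues a certificate binding a subject name to its signature."""
    sig = hmac.new(ca_key, subject.encode(), hashlib.sha256).hexdigest()
    return {"subject": subject, "issuer": ca_name, "sig": sig}

def browser_accepts(cert):
    """A client accepts any cert signed by ANY CA in its root store."""
    key = TRUSTED_CA_KEYS.get(cert["issuer"])
    if key is None:
        return False
    expected = hmac.new(key, cert["subject"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["sig"])

real = issue("HonestCA", b"honest-secret", "amazon.com")
# An attacker who subverts SloppyCA mints a cert for the same name:
forged = issue("SloppyCA", b"sloppy-secret", "amazon.com")

print(browser_accepts(real))    # True
print(browser_accepts(forged))  # True -- the MITM succeeds
```

Both certificates verify, and nothing in the model tells the client which one is genuine. That is the 90% of the MITM problem the quote refers to.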
At the end of August 2011, we learned that a Dutch CA, DigiNotar, issued over 200 bogus certificates to unknown parties thought to be connected to the Iranian government.

According to the New York Times, DigiNotar was not corrupt, but rather, sloppy.
DigiNotar, which is owned by an Illinois company called Vasco Data Security International, did not make the attack particularly difficult, according to a report by Fox-IT, a security company that was commissioned by the Dutch government to investigate. The company’s critical servers contained malicious software that should have been spotted by antivirus tools, the report said, and the servers related to certificates were all protected by just one weak password. DigiNotar did not respond to requests for comment last week.
DigiNotar cares so little about security and trustworthiness, it cannot even enforce proper security procedures internally. Thanks for playing, DigiNotar. You can get out of the business now.

The thing is, it's all but certain DigiNotar is not alone in its cavalier attitude toward the business of identity certification. The only reason we found the problem is that one of the bogus certificates was for Google, and Google's own browser, Chrome, happens to ship with hashes of Google's legitimate public keys baked into its binary. Of course the public key in the fraudulently issued certificate didn't match. Whether it was ignorance of Chrome's public key pinning, overreach on the attackers' part in trying to spoof Google's identity, or both, the discovery of the fraud was a lucky break. We can't count on such lucky breaks.
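The pinning check Chrome performed is conceptually simple: hash the public key the server presents and compare it against a hard-coded set of known-good hashes, ignoring what any CA says. A minimal sketch (the key bytes below are made up, standing in for real encoded public keys):

```python
import hashlib

# Hypothetical stand-in for Google's genuine public key bytes; in
# Chrome the pinned hashes are compiled into the browser binary.
genuine_key = b"google-public-key-bytes"
PINS = {hashlib.sha256(genuine_key).hexdigest()}

def pin_check(presented_key: bytes) -> bool:
    """Accept only if the presented key hashes to a pinned value,
    regardless of which CA signed the certificate."""
    return hashlib.sha256(presented_key).hexdigest() in PINS

print(pin_check(genuine_key))                 # True
print(pin_check(b"key-from-bogus-cert"))      # False -> raise the alarm
```

Note that the check bypasses the CA entirely: even a certificate with a perfectly valid signature chain fails if its key isn't pinned, which is exactly how the DigiNotar forgery was caught.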

The X.509 model relies too heavily on a single point of failure: the CA. The way to get around that is to require multiple inputs into the mechanism that decides whether another party is who or what it says it is. When a bank wants to determine if a caller is who he or she claims to be (i.e., a genuine customer), the bank asks a series of questions (I imagine, though I don't know for certain, that it's a randomly chosen set of three or more from a much larger number of known facts about the person) only the correct person should be able to answer. (As we know from the high incidence of identity theft, this system isn't perfect. However, it's the best practical solution we have in the real world.)
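The bank's procedure, as I've guessed at it above, can be sketched as a multi-input check: draw a random subset of known facts and require the caller to answer all of them. Every fact and parameter below is hypothetical, but the structure shows why multiple independent inputs beat a single one.

```python
import random

# Hypothetical facts the bank has on file for one customer.
KNOWN_FACTS = {
    "mother's maiden name": "Smith",
    "first car": "Civic",
    "city of birth": "Omaha",
    "high school mascot": "Tigers",
    "favorite teacher": "Mr. Lee",
}

def challenge(answers, k=3):
    """Pick k random questions; the caller must answer every one
    correctly. A fraudster who knows only one fact usually fails."""
    picked = random.sample(sorted(KNOWN_FACTS), k)
    return all(answers.get(q) == KNOWN_FACTS[q] for q in picked)

print(challenge(dict(KNOWN_FACTS)))  # True: the real customer passes
print(challenge({}))                 # False: no answers, no access
```

The random selection matters: an attacker can't prepare for one fixed question, and each additional required answer multiplies the difficulty of impersonation.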

There are subtleties and hidden assumptions in this identity-checking model. First, the bank's human phone operator is trained to notice hesitation or other oddities in the responses that might indicate the person isn't who he or she claims to be, even if the answers are nominally correct. That sense of "something's not quite right" could trigger more identity-checking.

Second, identity-checking over the phone tends to be one-way: it's the call recipient who is trying to verify the identity of the caller. That's because we regard the phone system as infallible in its operation. When you dial the number for your bank, you don't wonder whether you'll reach someone else: you know you'll reach the bank. While it's certainly possible to attack the phone system in a way that would allow your call to the bank to be redirected, the difficulty of doing so is so great that we assume it just doesn't and won't happen. (Governments have the ability, and might have the desire, to redirect calls, but even for them it would be a tall order to do so for a large financial institution.)

No such trusted infrastructure exists within the Internet. However, there is one characteristic of Internet communications that can be used to help detect false identity: most of the parties communicating on the Internet have no interest in furthering third-party fraud. That is, while I might be interested in portraying myself as someone out of my fantasies in a chat room or on Facebook, I have no interest in helping Iran (or the U.S., or a cracking ring) pretend that it is Google. I would therefore have no objection to sharing how I verified Google's identity, that is, which certificate "the party claiming to be Google" presented to me.

Such a crowdsourcing approach lets you build up a history of (putatively) successful verifications of a given party's identity, giving you increasing confidence that a certificate you're presented with is genuine. It also lets you detect a certificate that differs from the one everyone else has seen in the past. The new certificate might or might not be fraudulent, but at least you know you got something different and can act. This is as close as the Web gets to the bank phone operator's ability to notice that "something's not quite right". And at the moment, this is our best hope for fixing our overreliance on the honesty and efficacy of CAs.
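A Perspectives-style check boils down to comparing the certificate fingerprint you just received against the fingerprints other observers have reported for the same site. The data and the 90% quorum threshold below are invented for illustration; the real Perspectives system queries network notaries, but the decision logic is the same shape.

```python
from collections import Counter

# Hypothetical observation history: how many observers have seen
# each certificate fingerprint for "google.com".
history = Counter({
    "aa:11": 412,   # seen by nearly everyone, for months
    "bb:22": 1,     # seen once, just now
})

def assess(fingerprint, history, quorum=0.9):
    """Flag a certificate that (almost) nobody else has seen."""
    total = sum(history.values())
    share = history[fingerprint] / total if total else 0.0
    return "likely genuine" if share >= quorum else "anomalous -- investigate"

print(assess("aa:11", history))  # likely genuine
print(assess("bb:22", history))  # anomalous -- investigate
```

A MITM attacker serving a forged certificate to one victim can't easily make that forgery appear in everyone else's history, which is what makes the anomaly detectable.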

The Electronic Frontier Foundation has a commentary piece from which I obtained much of the information in this post. Bruce Schneier's brief 1 September 2011 blog entry provided links. I also recommend reading the commentary to that post if you want to understand more about why this is such a difficult problem to solve given the Internet's current design. Finally, the crowdsourcing approach to verifying identity wasn't my idea: it's mentioned in the EFF piece, which in turn references the Perspectives Project. Check out Perspectives' home page for a simple explanation of the problem and how crowdsourcing would work within the Internet's current design.
