[personal profile] ciphergoth
Update: Anonymous comments must be signed! I've made a couple of exceptions to this policy, but I may stop unscreening comments that don't have any kind of name at the bottom.

My current plan to change the world involves writing a manifesto for a proposed mailing list to work out crypto standards that actually work and stand a chance of getting widely adopted in the open source world. This is essentially version 0.1.5 of that rant, and may contain some inaccuracies or overstatements; I look forward to your comments and corrections.

Currently there are four crypto standards that see any real use in open source land; in order of deployment, they are:
  • SSL/TLS
  • SSH
  • OpenPGP/GnuPG
  • IPSec
These are the best examples of good practice that we cite when we're trying to encourage people to use standards rather than making it up themselves, and yet all of them fail to be any good as a practical, convenient basis on which people writing open source software can make their software more secure through cryptography. All of them suffer from three problems, in order of increasing severity:
  • They were all designed long ago, in three cases initially by people who were not cryptographers, and they are difficult to adapt to new knowledge in the crypto world about how to build good secure software. As a result, deprecated constructions for which there are no good security reductions are common. They are also generally far less efficient than they need to be, which would be a very minor problem if it didn't put people off using them.
  • In every case protocols and file formats introduce far more complexity than is needed to get the job done, and often this shows up as complexity for the users and administrators trying to make them work, as well as unnecessary opportunities to make them insecure through misconfiguration.
  • But by far the worst of all is the parlous state of PKI. This of course is something I've ranted about before:
    • SSL's dependence on the disaster that is X.509 makes it insecure, painful for clients, and imposes the ridiculous Verisign Tax on servers, as well as making it very unattractive as a platform for new software development.
    • SSH occasionally shows you a dialog saying "you haven't connected to this server before, are you sure?" I'm sure someone's going to tell me they actually check the fingerprints before connecting, but let me assure you, you are practically alone in this. I can't even share this information across all the machines I log in from, even if I use ssh-agent. The situation for authenticating clients to servers is slightly better, but still involves copying private keys about by hand if you want the most convenience out of it. It makes you copy whole public keys rather than something shorter and more convenient like OpenPGP fingerprints. It certainly doesn't make use of the basic fact that keys can sign assertions about other keys to make life more convenient.
    • OpenPGP's authentication is based on the PGP Web of Trust, which is all about binding keys to real names using things like passports. As I've argued before, this is a poor match for what people actually want keys to do; it's a very poor match for authenticating anything other than a person.
    • IPSec is also tied to the X.509 disaster. It is also so complex and hard to set up that AFAICT most IPSec installations don't use public key cryptography at all.
Perhaps the key management problems in all these applications can be pinned down to one thing: they were all designed and deployed before Zooko's triangle was articulated with sufficient clarity to understand the options available.

It's worth noting one other infuriating consequence of the PKI problems these applications display: none of them really talk to each other. You can buy an X.509 certificate that will do for both your SSL and IPSec applications, if you're really rich; these certificates will cost you far more than a normal SSL certificate, and for no better reason than that they are more useful and so Verisign and their friends are going to ream you harder for them. Apart from that, each application is an island that will not help you get the others set up at all.

I've left out WEP/WPA basically because it's barely even trying. It should never have existed, and wouldn't have if IPSec had been any good.

I'm now in the position of wanting to make crypto recommendations for the next generation of the Monotone revision control system. I wish I had a better idea what to tell them. They need transport-level crypto for server-to-server connections, but I hesitate to recommend SSL because the poison that is X.509 is hard to remove and it makes all the libraries for using SSL ugly and hard to use. They need to sign things, but I don't want to recommend OpenPGP: it's hard to talk to and the Web of Trust is a truly terrible fit for their problem; on top of which, OpenPGP has no systematic way to assert the type of what you're signing. They need a way for one key to make assertions about another, and we're going to invent all that from scratch because nothing out there is even remotely suitable.

Monotone has re-invented all the crypto for everything it does, and may be about to again. And in doing so, it's repeating what many, many open source applications have done before, in incompatible and (always) broken ways, because the existing standards demand too much of them and give back too little in return. As a result, crypto goes unused in practically all the circumstances where it would be useful, and in the rare case that it is used it is at best inconvenient and unnecessarily insecure.

I don't believe that things are better in the closed source world either; in fact they are probably worse. I just care more about what happens in the open source world.

We can do better than this. Let's use what we've learned in the thirty-odd years there's been a public crypto community to do something better. Let's leave the vendors out, with their gratuitous complexity and incompatibility as commercial interests screw up the standards process, and write our own standards that we'll actually feel like working with. We can make useful software without their support, and it seems in this instance that their support is worse than useless.

A good starting point is SPKI. SPKI has a very nice, clean syntax that's easy to work with in any programming language, very straightforward semantics, and supports constructions that anticipate the central ideas behind petnames and Zooko's Triangle. Unfortunately SPKI seems to be abandoned today; the feeling when I last looked at it was that despite their inadequacies, the victory of PKIX and X.509 was now inevitable and resistance was futile.
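To give a flavour of why that syntax is so pleasant: canonical s-expressions are just length-prefixed atoms and parenthesised lists, so a complete encoder and decoder fits in a few lines. Here's a minimal Python sketch; the example certificate fields are invented for illustration, and real SPKI atoms can also carry display hints, which I've left out:

```python
# Minimal encoder/decoder for canonical s-expressions (the SPKI wire
# format): atoms are length-prefixed byte strings, lists are
# parenthesised. A sketch, not a complete implementation.

def encode(expr):
    """Encode nested bytes/lists as a canonical s-expression."""
    if isinstance(expr, bytes):
        return str(len(expr)).encode() + b":" + expr
    return b"(" + b"".join(encode(e) for e in expr) + b")"

def decode(data, pos=0):
    """Decode one s-expression starting at pos; return (value, next_pos)."""
    if data[pos:pos+1] == b"(":
        items, pos = [], pos + 1
        while data[pos:pos+1] != b")":
            item, pos = decode(data, pos)
            items.append(item)
        return items, pos + 1
    colon = data.index(b":", pos)
    length = int(data[pos:colon])
    start = colon + 1
    return data[start:start+length], start + length

# Toy certificate body binding a name to a key hash (fields invented)
cert = [b"cert",
        [b"issuer", [b"hash", b"sha1", b"\x01\x02..."]],
        [b"subject", [b"name", b"alice"]]]
blob = encode(cert)
assert decode(blob)[0] == cert
```

Compare that with what it takes to parse ASN.1/DER correctly, and the appeal for implementers in any language should be obvious.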

Well, it turns out that X.509 was so bad that no amount of industry support could turn it into the universal standard for key management applications. There are places it will simply never be able to go, and those places are in fact the vast majority of real crypto applications. On top of which, there is a limit to how far a standard can go when hardly anyone understands how to apply it.

It's time we brought back SPKI. But more than that, it's time we adapted it for the times it finds itself in; take out the parts that complicate it unnecessarily or slow its adoption, extend it to do more than just PKI, and specify how it can talk to the existing broken cryptographic applications in as useful a way as possible. Once we've built a working petnames system to serve as a workable PKI, my current feeling is that we should start with no lesser a goal than replacing all of the standards listed above.

Does anyone else think this sounds like a good idea? What other way forward is there?

Date: 2007-02-18 09:43 pm (UTC)
From: [identity profile] skx.livejournal.com
Yes. Good idea.

(Just be glad I didn't rant about the quality of gpg's source code...)

One thing you mention in passing is SSH fingerprint checking. I'm not sure if you're aware of this, but it is possible to check these via "magic" DNS entries. See RFC 4255 for details.
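For the curious: an RFC 4255 SSHFP record holds a digest of the host's public key blob - the same base64 blob that appears in known_hosts. A rough Python sketch of the comparison, assuming a SHA-1, type-1 record of the kind `ssh-keygen -r hostname` emits:

```python
# Sketch: derive the SSHFP-style fingerprint from a known_hosts line.
# RFC 4255 type-1 fingerprints are SHA-1 over the raw public key blob.
import base64, hashlib

def sshfp_sha1(known_hosts_line: str) -> str:
    host, keytype, b64 = known_hosts_line.split()[:3]
    blob = base64.b64decode(b64)           # the SSH wire-format key blob
    return hashlib.sha1(blob).hexdigest()  # compare against SSHFP RDATA
```

If the digest matches the host's SSHFP record - and you trust the DNS answer, on which see below - the prompt can be skipped.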

Date: 2007-02-18 09:57 pm (UTC)
From: [identity profile] xeger.livejournal.com
I think I'm missing the problem(s) that you're trying to solve, although I've got a pretty good handle on what you don't like ;)

From the various things you list, it sounds like you're attempting to address:

1) Transport layer security
2) Authentication
3) Authorization
4) Identity

I think there's also a side-order of:

5) Manageability
6) Usability
(a) for programmers
(b) for users

Is that a reasonable summary?

Date: 2007-02-19 12:03 am (UTC)
From: [identity profile] kitty-goth.livejournal.com
I'm not quite sure what you're saying here.

I don't understand why the current mechanism, whereby you check with the administrator of a remote system that the fingerprint you see when you connect to what purports to be her machine matches the one she sees on her system, would be improved by putting it into the DNS.

How does that help?

Date: 2007-02-19 04:07 am (UTC)
From: [identity profile] meta.ath0.com (from livejournal.com)
Don't forget S/MIME, which combines the problems of PGP with the problems of SSL.

Date: 2007-02-19 05:49 am (UTC)
From: [identity profile] ciphergoth.livejournal.com
It helps if you have DNSSEC, but of course nobody has DNSSEC.

Date: 2007-02-19 06:49 am (UTC)
From: [identity profile] allonymist.livejournal.com
(Hi, sjmurdoch sent me.)

If I were architecting a next-generation system to transform and rationalize crypto, I'd definitely look closely at SPKI; s-exp is so much nicer to work with than asn.1 that it's not even funny, and unifying things like certificates for OpenPGP and TLS and friends would be a real win.

If, on the other hand, I were working on something like Monotone, I'd run screaming at the suggestion that we needed to replace TLS and OpenPGP in order to get the job done: yes, working with OpenSSL's X.509 functions (or NSS's, or whoever's) is a bit of a pain, but rebuilding a transport-layer encryption library from scratch sounds like a massive mission detour. Probably, I'd use TLS for the transport, OpenPGP-plus-some-decoration for data signing and encryption, and jury-rig some kind of certificate format (maybe SPKI, who knows) to manage the subkeys I was using for each.

What other way forward is there?

Let me riff on your idea. To do the most good for the field in the short term, I'd start with a simple but powerful tool that could expand into the various protocols and slowly replace the badness of them. Take the "separate domains" problem you talk about above: it isn't hard to jury-rig cross-certification between the various certificate/fingerprint formats now, but so far as I know there is no popular standard way to certify your keys with one another, or unify them. Most pragmatically, I'd like people who trust my OpenPGP key to also be able to trust my OTR key and my SSH keys and my self-generated X.509 certs. I could OpenPGP-sign a document with all these keys in it, but people would need to handle it manually. (There are probably standards to handle these keys, but as far as I can tell, they aren't implemented in any way useful to me.) Once you've got a good cross-certification format, and you get it supported in a few applications (probably by add-on tools at first), you can start promoting it as a general-purpose certificate format for new protocols, and hopefully have it become an alternative format for existing tools.

At least, that's how I'd take over the PKI world this week: from below, starting as a convenience tool for hacker types; and attacking the domains that OpenPGP and X.509 certs handle the worst. Competing with existing standards is a mug's game; writing a tool that's useful from day one is how successful open-source software happens.
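To make the cross-certification idea concrete, here's a Python sketch of the kind of statement I mean - every field name here is invented, and settling the exact format would be the real work:

```python
# Sketch of a cross-certification statement: one key asserts that a set
# of other keys (identified by fingerprint) belong to the same holder.
# The structure and field names are invented for illustration only.
import json, time

def same_holder_statement(openpgp_fpr, ssh_fpr, x509_fpr):
    return json.dumps({
        "holder": "alice@example.org",   # hypothetical identifier
        "same-holder-keys": {
            "openpgp": openpgp_fpr,
            "ssh": ssh_fpr,
            "x509": x509_fpr,
        },
        "issued": int(time.time()),
    }, sort_keys=True)

# The serialized statement would then be signed with, say, the OpenPGP
# key (gpg --clearsign), so anyone who already trusts that key gains a
# machine-checkable reason to trust the others.
```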

Date: 2007-02-19 07:30 am (UTC)
From: [identity profile] ciphergoth.livejournal.com
Your second para sounds about right, though the jury-rigged certificate format probably won't be SPKI - it'll probably be something more directly based on how Monotone works. It's not my plan to drag Monotone down with massive mission creep. On the other hand, the Monotone developers may simply find those tools so hateful they'll refuse to work with them.

If they do go for OpenPGP and SSL, that doesn't mean I can't pursue this branch too. I like your cross-certification idea - I still want the eventual result to be that all those standards wither and die to be replaced by something workable, but there's a good discussion to be had about how to get there from here.

Date: 2007-02-19 10:35 am (UTC)
From: [identity profile] kitty-goth.livejournal.com
Um. So isn't this just the old joke - "well, if you want to get there, I wouldn't start from here" - given that we are in fact starting from widely deployed 'Starsky and Hutch'-era protocols?

Date: 2007-02-19 11:23 am (UTC)
From: [identity profile] ciphergoth.livejournal.com
I don't know if DNSSEC is over-complicated or badly designed; I haven't really looked into it. Unlike the others, DNSSEC could work at least in theory because it knows what edge of Zooko's triangle it's trying to live on.

However:

(*) DNSSEC would only ever work if everyone who got a domain got a DNSSEC delegation as a matter of course. That's directly against the commercial interests of Verisign, who sell SSL certificates and now seem to control the domain system.

(*) The DNSSEC designers made some bad choices: they wanted all subdomains to be securely enumerable from the root domain, so that you could get secure assurance of a negative answer. People really, really don't like that (the toy sketch after this list shows why). They should have allowed negative answer signing to be delegated to an ephemeral key that lived on the DNS server itself and wasn't empowered to sign much else.

(*) It only covers one edge of Zooko's triangle in any case; I want to leave the world where we all try and live on that one edge.
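Here's the toy sketch promised above. Because each signed negative answer names the two real entries that bracket the query, anyone can walk the whole zone with a handful of queries; this simulates the responses locally rather than doing real DNS:

```python
# Toy simulation of NSEC-style zone walking. nsec_for() plays the role
# of a signed negative answer: it reveals the two real names that
# bracket the query (wrapping around at the ends of the zone).
def nsec_for(zone_names, query):
    names = sorted(zone_names)
    prev = max((n for n in names if n <= query), default=names[-1])
    nxt = min((n for n in names if n > query), default=names[0])
    return prev, nxt

zone = {"example.", "alpha.example.", "mail.example.", "www.example."}
found, cursor = set(), "!"           # start below every legal name
while True:
    prev, nxt = nsec_for(zone, cursor)
    found.update({prev, nxt})
    if nxt <= cursor:                # wrapped around: enumeration done
        break
    cursor = nxt + "!"               # probe just past the newest name
print(sorted(found))                 # every name in the zone
```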

Date: 2007-02-19 11:27 am (UTC)
From: [identity profile] keirf.livejournal.com
Back in yonderyears I had to test the SSL functionality of a client-server product, and went on a couple of X509 training courses. It all seemed like overcomplicated bollocks to me at the time. I'm glad to see I wasn't totally wrong.

Date: 2007-02-19 12:34 pm (UTC)
From: [identity profile] pavlos.livejournal.com
As a lay user, I don't immediately see what's wrong with SSL itself. It seems to get the job done simply and right when it comes to HTTPS or some other protocol like IMAP over SSL, if your provider supports it. What bugs me, again as a lay user, is the lack of support for SSL and the existence of apparently pointless standards where SSL would do.

  • There's a huge number of web pages with more or less private content, like LJ, that don't go over SSL by default. Why not?
  • Only a few mail providers support client connections with IMAP or SMTP over SSL. Why not all of them?
  • Email transport is not encrypted. Surely that's ridiculous as there is always a DNS and an SMTP(?) server at the receiving end, which could hopefully be convinced to negotiate keys for their known users.
  • We seem to have a tool called SSH that I think is not the same as telnet over SSL. I only have a hazy understanding of what it is but it seems unnecessarily different. Why?
  • We have a VPN tool called IPsec, which again seems to be a different thing than SSL. Why?

So, I would naively expect there to be one thing that establishes an authenticated and encrypted point-to-point connection, and I'd expect all the other connection-based tools to operate over it. I'd expect the creation of multiple VPNs over SSL to be a straightforward matter for user processes to arrange. I'd also expect to be able to make some DNS query that can provide a public key for joe@domain.tld (but maybe this is part of what you suggest). I wouldn't expect OpenPGP to be on the same bandwagon necessarily, as it's not connection-based. On the other hand, it's interesting to consider whether the same binary message format could serve both connection-based and file-based communications.

I'm not sure what all this has to do with your aims. I vaguely understand that the main thing you want to fix is having a working standard for a trust network, and a working instance of that standard that isn't Verisign. However, I don't understand to what extent the Verisign problem could be fixed by bringing out an instance rather than a standard, i.e. a new open certificate authority (which would act like Verisign and have users manually accept its root certificates). I also don't understand how important the residual requirements would be if you take the view that host names get certified, and then the hosts can certify anything in their domains.

Date: 2007-02-19 07:48 pm (UTC)
From: [identity profile] ciphergoth.livejournal.com
I'm guessing that SSL isn't used in the applications you name often for performance reasons. That's a shame - with better crypto the performance hit could be reduced to very little. For example, SSL session caching is unbelievably clumsy and painful. With better crypto you could build a session cache that lived entirely on the client side; the cached sessions would then last for days rather than hours, and so the server workload for supporting crypto would be hugely reduced.
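A sketch of what I mean by a client-side session cache - my illustration, not anything SSL specifies, using Fernet from the Python `cryptography` package as a stand-in authenticated cipher. The server encrypts its own session state under a key only it holds and hands the blob to the client, so the server stores nothing per session:

```python
# Sketch: the server wraps session state in an opaque, authenticated
# blob that the client stores and presents on reconnection.
from cryptography.fernet import Fernet

ticket_key = Fernet(Fernet.generate_key())  # long-lived, server-only key

def issue_ticket(session_state: bytes) -> bytes:
    """Server -> client: an opaque, self-contained session ticket."""
    return ticket_key.encrypt(session_state)

def resume(ticket: bytes) -> bytes:
    """Client -> server: recover state; raises if forged or tampered."""
    return ticket_key.decrypt(ticket)
```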

SSH is distinct from SSL mainly for historical reasons. However, SSL doesn't support password-based authentication. SSH does, but as far as I know doesn't support things like session caching at all. In addition, SSH doesn't support modern password protocols like SRP, which is just ridiculous given their security benefits. You are right that a single protocol should subsume both roles.

As you say, ideally packet-level crypto a la IPSec would replace both; the trouble with this is that unlike transport-layer crypto, packet-layer crypto requires operating system support and so is much harder to get deployed. You'd need some real cleverness to make it all seamless, too: when you looked up the URL you'd have to get information about what sort of VPN to set up in order to get to the content, and there are some tricky issues with address space. I'm increasingly starting to think that secure networking is naturally source routed, and that's a change to the way people usually think about network programming. I'd be interested to know if there are any other inherent reasons to prefer transport-layer security.

I would expect the wire formats for secure connections and secure files to differ somewhat but have a lot in common.

Getting a public key for joe@domain.tld is a job for DNSSEC or similar, as discussed above. Given such a mechanism I'd certainly expect it to extend to everything you want a public key for, including OpenPGP-like applications; as I say, though, this is the edge of Zooko's triangle that interests me least at the moment. The authorities who are in a position to make things better in this space have strong short-term commercial motivation to profit from how crap they are. And I don't believe we could fix it by stepping up to replace Verisign, either, though it's been tried; ultimately we would put ourselves in a position where we were competing in the same space and with the same commercial pressures, but without their massive market dominance. The secure solution will be to empower users to cut out the middleman and directly make assertions about trust on keys.

Date: 2007-02-19 10:11 pm (UTC)
From: [identity profile] fizzyboot.livejournal.com
I guess if one is defining a standard (or merely deciding which standard to use) the first thing to do is be clear about what problem one is trying to solve.

In the simplest case, crypto is used when Alice and Bob want to communicate without Eve knowing what they are saying. And public key crypto solves that problem.

The only difficulty is when A and B don't already know each other's public keys but wish to communicate. A solution is for A to send B a message saying "here's my public key, I want to talk, what's yours?", B to reply, and then for them to talk. Of course this key exchange could be automated.

This solution, however, has a problem: it is vulnerable to a man-in-the-middle attack. Efforts to solve this problem include X.509 and the PGP web-of-trust.
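That naive exchange is easy to state in code. Here's a toy version using X25519 from the Python `cryptography` package (my illustration): both sides derive the same secret, but nothing in it stops Eve from substituting her own keys in each direction, which is exactly the gap X.509 and the web of trust try to close:

```python
# Toy unauthenticated key exchange: fine against Eve the passive
# eavesdropper, helpless against Eve the woman-in-the-middle.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()

# "Here's my public key, what's yours?" - sent over the open network,
# where a man in the middle could swap in keys of his own.
alice_pub, bob_pub = alice.public_key(), bob.public_key()

assert alice.exchange(bob_pub) == bob.exchange(alice_pub)  # shared secret
```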

I now have an admission to make: I don't understand the web of trust. I've read the GnuPG documentation and it all seems very complicated. I could possibly understand it if I really made an effort, but my brain tends to recoil at things that appear to be overly complex. Maybe I am just too stupid or lazy to understand it; however, I know more about crypto than the average PC user, so if I think it's too difficult, what's the average user to think? I suspect many would simply shrug their shoulders and give up.

Which brings me to another issue. People like their computers to be secure, but they also like to be able to get their work done, and for nearly everyone, getting stuff done is a higher priority than computer security. Therefore, if the user perceives a security system as being too complex or effortful, they are likely to bypass it. Hence a user might write their password on a post-it note attached to their screen.

This suggests to me that any good security system will be as nearly transparent as possible to the user, or it won't get used. Also, it should be as simple to understand as possible, because the harder it is to understand, the more likely it is that the user will set it up incorrectly in a way that makes it insecure.

Anyway that's just some random meanderings from me. If/when you set up this mailing list, please let me know, I'd like to be on it.

Date: 2007-02-19 10:20 pm (UTC)
From: [personal profile] henry_the_cow
What about Kerberos, and more recently Shibboleth? Shibboleth is being deployed throughout the higher education sector.

In the open-source grid world, GSI has made some headway, but that's usually built on top of X.509 PKI.

I suppose that WS-Security / SAML / XACML etc. are just layers above these basic protocols?

Date: 2007-02-20 08:38 am (UTC)
From: [identity profile] ciphergoth.livejournal.com
Actually the most important applications of crypto are to do with authentication rather than secrecy. In that context the "man in the middle" attack consists of saying "hey, I'm Bob, you believe me don't you?"

I think I understand the Web of Trust, I just think it's not much use. I really don't understand X.509, and I'm not entirely sure anyone else does either.

Date: 2007-02-20 09:00 am (UTC)
From: [identity profile] ciphergoth.livejournal.com
Shibboleth looks interesting from a brief glance. Like Kerberos, though, it's mostly useful for authentication within a single large institution, which isn't so interesting for applications like Monotone, Jabber and so forth. Again, it knows roughly where it lives on Zooko's triangle.

I've looked a little at WS-* and the whole thing looks like a nightmare. Which style of XML canonicalization shall we use? These standards are incredibly complex, and they give you all the rope you need to shoot yourself in the foot and more. To me they are great examples of everything that's wrong with vendor-driven standards.

Date: 2007-02-20 03:45 pm (UTC)
From: [personal profile] reddragdiva
The Web of Trust is not something cared about by anyone who doesn't (a) have a specific application it fits or (b) have an obsession with it.

Date: 2007-02-20 07:46 pm (UTC)
From: (Anonymous)
That's funny... just this weekend I met a guy working on getting Shibboleth and OpenID talking to each other here at UCSD... as it happened he was ranting about Shibboleth being an overcomplex overengineered piece of junk :-).

-- Nathaniel

Date: 2007-02-20 09:06 pm (UTC)
From: (Anonymous)
There are many reasons to prefer transport-layer to network-layer security. As you have mentioned, network-layer solutions need to be implemented in the operating system kernel making them particularly inconvenient to deploy. Also, IPsec (which for all practical purposes is the only network-layer protocol we have) has been widely criticized (http://www.schneier.com/paper-ipsec.html) for being exceptionally complex and this fact hinders in depth security evaluations. However, I think that the most important argument against network-layer security is that it violates basic networking stack architecture principles. When you are doing security management at the network layer it usually means that you lose all the reliability and reassembly features provided by the transport layer. To be able to make security decisions (like authentication, authorization, etc.) you need to re-implement many TCP features that allow you to assemble network packets at the network layer, thus breaking the purpose behind the separation of functionality into layers.

--
Patroklos Argyroudis
http://ntrg.cs.tcd.ie/~argp/

Date: 2007-02-20 10:06 pm (UTC)
From: [identity profile] skx.livejournal.com
What I was trying to say was that most people ignore the host key verification step since every single time they establish a connection to a new host they are prompted with an indecipherable key.

In the most common case everything is OK.

The time these prompts are really useful is when a host key has changed without you expecting it - but I figure that because people are conditioned to accept these prompts on new connections, a significant number of people will just "OK" a changed host key.

If, additionally, key fingerprints are stored in DNS then the typical case would be:

a) User connects to new host.
b) DNS says everything is OK
c) User is not prompted.

The only times the user would be prompted would be a) if the key changes, or b) if the DNS data is incorrect - and hopefully by that point most people would be unused to this kind of prompt, and things would be simpler.

It's not a huge win if you don't control DNS - and in that case you might be at the kind of company where databases of server fingerprints are automatically distributed (with cfengine etc.) - but it is a simple check to make.

Date: 2007-02-20 10:39 pm (UTC)
From: [identity profile] ciphergoth.livejournal.com
But without DNSSEC, is there any reason to trust the public key you get out of the DNS more than the key the host itself reports when neither is securely authenticated? Why can't an attacker who can pretend to be the remote host pretend to be the remote DNS server too?

Date: 2007-02-20 10:43 pm (UTC)
From: [identity profile] skx.livejournal.com
Indeed without DNSSEC the extension is pointless.

Date: 2007-02-20 10:51 pm (UTC)
From: [identity profile] ciphergoth.livejournal.com
I agree about IPSec as you can see from my post - in fact Ferguson and Schneier's paper on it was one of the things in my mind when I wrote it. It isn't the only network-layer protocol we have - look at the number of "SSL VPNs" out there - and it was on my list of protocols that needed to be replaced.

I'd nonetheless like to see if we could get away with just replacing IKE, and leaving something like IPSec with most options removed in place.

I'm still not wholly convinced of the inherent merits of transport-layer security. I can imagine a scenario in which you can learn about the sender, and thus make authorization decisions, just by looking at the IP address of the other party and determining from it that they are a particular party who is communicating via a VPN. In practice, though, transport-layer security can offer such great convenience of integration that it's probably not worth trying to be "pure" about this and insisting on one network security protocol to rule them all.

Date: 2007-02-21 10:14 pm (UTC)
From: [personal profile] bob
There's a huge number of web pages with more or less private content, like LJ, that don't go over SSL by default. Why not?

The main reason, I think, would be IP address exhaustion: each SSL domain needs its own IP, so you can't use virtual hosts.

Date: 2007-02-22 09:04 am (UTC)
From: [identity profile] pavlos.livejournal.com
Interesting. I understand from this and Paul's reply that there would be a lot more possibilities if the two communicating systems could create new, addressable virtual networks at will.