unfs3 is pretty good, but it will probably never be as fast as a kernelspace NFS server; FunFS, on the other hand, is already faster than any NFS implementation, simply because its protocol is optimized for minimum latency and real caching, not cheeseball NFS-style caching.
On the other hand, the unfs3 server is already fully functional in a 54k binary; FunFS is still not fully functional, and it's much bigger (if you count its WvStreams dependency). Statelessness (i.e. NFS) obviously gives you several advantages, simplicity being a major one, but I wonder whether those advantages are worth the compromises (i.e. stupid in-kernel servers vs. bad performance) we have to make.
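To make the statelessness point concrete, here's a toy sketch (this is illustrative Python, not unfs3's actual code, and `make_handle`/`stateless_read` are names I made up): a stateless server encodes everything it needs into the opaque handle the client holds, so every request is self-contained and there's no open-file table to lose if the server restarts.

```python
import os

def make_handle(path):
    """Encode a file's identity into an opaque handle, NFS-style:
    (device, inode) survives server restarts, unlike a file descriptor."""
    st = os.stat(path)
    return (st.st_dev, st.st_ino)

def stateless_read(handle, offset, count, root):
    """Every request carries handle + offset + count, so the server
    keeps no per-client state between calls. A real server maps
    (dev, ino) back to a file directly; here we cheat by scanning one
    known directory, purely for illustration."""
    dev, ino = handle
    for name in os.listdir(root):
        path = os.path.join(root, name)
        st = os.stat(path)
        if (st.st_dev, st.st_ino) == (dev, ino):
            with open(path, "rb") as f:
                f.seek(offset)
                return f.read(count)
    raise OSError("stale file handle")  # ESTALE, in NFS terms
```

The simplicity win is visible even in a toy: there is no open(), no close(), and nothing to clean up when a client vanishes. The cost is equally visible: the server has to re-resolve the handle on every single request, which is exactly the kind of per-request overhead a stateful, cache-friendly protocol like FunFS's avoids.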
The 80/20 Rule
pphaneuf pointed me at a comparison of successful and unsuccessful technologies based on various attributes. The supposed conclusion, although the survey is grossly statistically invalid, was that projects developed using the 80/20 rule are generally more successful (Tim Bray).
This reminded me of some other discussions of the 80/20 rule, notably Bloatware and the 80/20 Myth (Joel Spolsky), which (judging from the title) is obviously of the opposite opinion. And there's a related opinion regarding Java and its flood of APIs.
So that got me thinking: how valid is the 80/20 rule? When NITI started, it was just a small number of people, even fewer developers, and a brilliant (IMHO) idea - and we built and sold a few boxes, but we never suddenly started swimming in money. Ever since then, we've been adding developers and doing more work, but less and less of the work has been brilliant ideas. More and more of it has been normal, day-to-day stuff. And with each release, incorporating more normal, day-to-day stuff, we're able to sell more and more software; exponentially more. The 80/20 rule got us started, but it's the 99% rule (soon to become the 99.9% rule, if we get too popular) that gets you big.
This actually explains a few things. OpenSource is described, in Tim Bray's first article above, as tending to "just copy what works", regardless of whether it's very 80/20 or not. But this isn't the whole story. I think OpenSource is "bloatware", as described in the second article, from Joel Spolsky. It followed 80/20 (Tim Bray's version) to get started, but now anybody can extend it to get whatever they want - and people do. Linux now gets used everywhere you can imagine, precisely because someone was able to easily change it to do what they wanted, not because it already did a lot of stuff or was particularly good at any one thing.
The alternative approach is something like Windows, which isn't so easily extended (though it's not impossible) but is already near-complete - it can do more stuff out-of-the-box than Linux can, and so it's popular for different reasons. Unfortunately, making something as feature-complete as Windows is exponentially more expensive than making something as extensible as Linux; I suppose that means Windows will be the one to fail, eventually. (It also means that the GPL was the magic ingredient that turned Unix from a failure into a success; and that the GPL has had, and would have, nothing to do with the present or future success of Windows, since Windows' success doesn't come from its extensibility.)
Do I have a point? Well, how about this: both Tim Bray and Joel are right, but they read different things into the 80/20 rule. Tim Bray says 80/20 is about starting simple and selling it before you're done - also known as Worse is Better. Joel is opposed to limiting your software to the necessary 80% in order to keep it simple and elegant, and then expecting to be wildly successful. It just doesn't happen. You start off mildly successful using 80/20, then you throw 80/20 aside and get serious... or you at least open source your thingy and use a really good plugin API so that people can fill in the other 19.9% of usefulness for you by doing the remaining 99% of the work.