100% Pure

accept no imitations

Everything here is my personal opinion. I do not speak for my employer.

2011-03-13 »

The strange story of etherpad

I don't actually know this story - certainly no more of it than anyone who has read a few web pages. But luckily, I'm pretty good at making things up. So I want to tell you the story of etherpad, the real-time collaborative web-based text editor.

I find this story particularly fascinating because I have lately had an obsession with "simplifying assumptions" - individual concepts, obvious in retrospect, which can eliminate whole classes of problems.

You can stumble on a simplifying assumption by accident, by reading lots of research papers, or by sitting alone in a room and thinking for months and months. What you can't do is invent them by brute force - they're the opposite of brute force. By definition, a simplifying assumption is a work of genius, whether your own or the person you stole it from.

Unix pipes are the quintessential example of a simplifying concept in computer science. The git repository format (mostly stolen from monotone, as the story goes) is trivial to implement, but astonishingly powerful for all sorts of uses. My bupsplit algorithm makes it easy and efficient to store and diff huge files in any hash-based VCS. And Daniel J. Bernstein spent untold effort inventing djb redo, which he never released... but he shared his simplifying ideas, so it was easy to write a redo implementation.
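The heart of that kind of splitting is easy to sketch. Here's a toy stand-in (not the real bupsplit, which uses a proper rolling checksum, but the same idea): cut a byte stream wherever the low bits of a rolling sum over a small window line up, so an edit in one spot only moves nearby boundaries and the rest of a huge file still deduplicates by hash.

```python
import random

def chunkify(data, window=16, mask=0x3F):
    """Split bytes at content-defined boundaries: keep a rolling sum of
    the last `window` bytes and cut wherever its low bits are all ones.
    Editing one spot only moves nearby cut points, so most chunks of a
    huge file stay identical and can be deduplicated by their hash."""
    chunks, start, rolling = [], 0, 0
    for i, b in enumerate(data):
        rolling += b
        if i >= window:
            rolling -= data[i - window]   # slide the window forward
        if (rolling & mask) == mask and i + 1 - start >= window:
            chunks.append(data[start : i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])       # whatever is left at the end
    return chunks

random.seed(1)
data = bytes(random.randrange(256) for _ in range(4096))
chunks = chunkify(data)
assert b"".join(chunks) == data           # splitting loses nothing
```

The simplifying part: because the cut decision depends only on the last few bytes, not on absolute file offsets, two mostly-similar files naturally share most of their chunks.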

What does all this have to do with etherpad? Simple. Etherpad contains a few startling simplifying assumptions, with a surprising result:

Etherpad is the first (and still only) real-time collaborative editor that has ever made me more productive.

And yet, its authors never saw it as more than just a toy.

My first encounter with etherpad was when Paul Graham posted a real-time etherpad display of him writing an essay. I thought it looked cool, but pointless. Ignore.

Sometime later, I read about Google Wave. I thought that level of collaboration and noise in your inbox sounded like nothing anybody could possibly want; something like crack for lolcats. Ignore.

And then, a little while later, I heard about etherpad again: that it had been bought by Google, and immediately shut down, with the etherpad team being merged into the (technically superior, if you believe the comments at that link) Google Wave team.

More importantly, though, a lot of the commenters were aghast that etherpad had been shut down so suddenly - people were using it for real work, they said. Real work? Etherpad? Did I miss something? Isn't it just a Paul Graham rerun machine?

I couldn't find out, because it was gone.

The outcry was such that it came back again, though, a couple of days later, with a promise that it would soon be open sourced.

So I tried it out. And sure enough, it is good. A nice, simple, multi-user way to edit documents. The open source version is now running for free on multiple sites, including ietherpad and openetherpad, so you can keep using it. Here's what's uniquely great about etherpad:

  • Start a document with one click without logging in or creating an account.
  • Share any document with anyone by cut-and-pasting a trivial URL.
  • Colour coding, with pretty, contrasting colours, easily indicates who wrote what without having to sign your name every time (i.e. better than wiki discussions).
  • Full document history shows what changed when.
  • Each person's typing shows up immediately on everyone's screen.
  • Documents persist forever and are accessible from anywhere.
  • No central document list or filenames, so there's nothing to maintain, organize, or prune.
  • Easy to import/export various standard file formats.
  • Simple, mostly bug-free WYSIWYG rich web text editor with good keybindings. (I never thought I would voluntarily use a WYSIWYG text editor. I was wrong.)
  • Just freeform text (no plugins), so it's flexible, like a whiteboard.
  • A handy, persistent chat box next to every document keeping a log of your rationale - forever.
  • A dense screen layout that fits everything I need without cluttering up with stuff I don't. (I'm talking to you, Google Docs.)
  • Uniquely great for short-lived documents that really need collaboration: meeting minutes, itineraries, proposals, quotes, designs, to-do lists.
Where's etherpad development now? Well, it seems to have stopped. All the open source ones I've seen seem to be identical to the last etherpad that existed before Google bought them. The authors disappeared into the Google Vortex and never came out.

A few months later, Google cancelled the Wave project that had absorbed etherpad. It was a failed experiment, massively overcomplicated for what it could do. Nobody liked it. It didn't solve anyone's problem.

And that could be just another sad story of capitalism: big company acquires little company, sucks life out of it, saps creativity, spits out the chewed-up remains.

But, you see, I don't believe that's what happened. I think what happened is much more strange. I think the people who made etherpad really believed Google Wave was better, and they still do. That's what fascinates me.

See, upon further investigation, I learned that etherpad was never meant to be a real product - it was an example product. The real product was AppJet, some kind of application hosting engine for developers. As an AppJet developer, you could use their tools to make collaborative web applications really easily, with plugins and massive flexibility and workflows and whatnot. (Sound familiar? That's what Google Wave was for, too.) And etherpad was just an example of an app you could build with AppJet. Not just any example: it was the simplest toy example they could come up with that would still work.

I get the impression that the AppJet guys were pretty annoyed at the success of etherpad and the relative failure of AppJet itself. Etherpad is so trivial! Where's the magic? Oh God, WHAT ABOUT EMBEDDED VIDEO? WILL SOMEONE PLEASE THINK ABOUT EMBEDDED VIDEO? Etherpad couldn't do embedded video; still can't. AppJet can. Google Wave can. Etherpad, as the face of their company, was embarrassing. It made their technology look weak. Google Wave was a massive testosterone-powered feature checklist from hell, and Etherpad was... a text editor.

No wonder they sold out as fast as they could.

No wonder they shut down their web site the moment they signed the deal.

They felt inferior. They wanted to get the heck out of this loser business as soon as humanly possible.

And that, my friends, is the story of etherpad.

Epilogue

But I'm expecting a sequel. Despite the Wave project's cancellation, the etherpad/appjet guys have still not emerged from the Google Vortex. Rumour has it that their stuff was integrated into Google Docs or something. (Google Docs does indeed now have realtime collaboration - but it's too much AppJet, too little Etherpad, if you know what I mean.)

When I had the chance to visit Google a few weeks ago, I asked around to see if anybody knew what had happened to the etherpad developers; nobody knew. Google's a big place, I guess.

I would love to talk to them someday.

Etherpad legitimized real-time web document collaboration. It created an entirely new market that Google Docs has been desperately trying, and mostly failing, to create. Google Docs is trying to be Microsoft Office for the web, and the problem is, people don't want Microsoft Office for the web, because Microsoft Office works just fine and Google Docs leaves out zillions of well-loved features. In contrast, etherpad targeted (and, ironically, is still targeting and progressively winning, despite the project's cancellation) an actually new and very important niche.

The brilliance of etherpad has nothing to do with plugin architectures or database formats or extensibility; all that stuff is plain brute force. Etherpad's beauty is its simplifying assumption, that just collaboratively editing a trivial, throwaway text file is something a lot of people need to do every single day. If you make that completely frictionless, people will love you. And they did.

Somehow, the etherpad guys never recognized their own genius. They hated it. They wanted it dead, but it refuses to stay dead.

What happens next?

...

Pre-emptive commentary

I expect that as soon as anyone reads this article, I'll get a bunch of comments telling me that Google Wave is the answer, or Google Docs can now do everything Etherpad can do, or please try my MS Office clone of the week, etc. So let me be very specific.

First of all, look at the list of etherpad features I included above. I love all those features. If you want me to switch to a competing product for the times I currently use etherpad, I want all that stuff. I don't actually want anything else, so "we don't do X but we do Y instead, which is better!" is probably not convincing. (Okay, maybe I want inline file attachments and a few bugfixes. And wiki-like hyperlinks between etherpad documents, ooh!)

Specific things I hate about Google Wave (as compared to etherpad):

  • It's slower.
  • The plugins/templates make things harder, not easier.
  • Conversations are regimented instead of free-form; you end up with ThreadMess that takes up much more screen space than in etherpad, and you can't easily trim/edit other people's comments.
  • It has an "inbox" that forces me to keep track of all my documents, which defeats throwaway uses where etherpad excels.
  • Sharing with other users is a pain because they have to sign up and I have to authorize them, etc.
  • The Google Wave screen has more clutter and less content than the etherpad screen.
  • Google Wave has a zillion settings; etherpad has no learning curve at all.
  • Google Wave wants to replace my email, but that's simply silly, because I don't collaborate on my email.
  • Google Wave wants me to live inside it: it's presumptuous. Etherpad is a tool I grab when I want, and put down when I'm done.
Specific things I hate about Google Docs (as compared to etherpad):

  • It's slower.
  • The screen layout is very very crud-filled (menu bars, etc).
  • It creates obnoxious popovers all the time, like when someone connects/disconnects.
  • Its indication of who changed what is much clumsier.
  • Its limited IM feature treats conversation as transient and interruptive, not a valuable companion to the document.
  • The UI for sharing a document (especially with users outside @gmail.com) is too complicated for mere mortals, such as me, to make work. I'm told it can be done, but it's as good as missing.
  • I can't create throwaway documents because they clutter my personal "list of documents" page that I don't want to maintain.
  • I have to save explicitly. Except sometimes it saves automatically. Basically I have no idea what it's doing. Etherpad saves every keystroke and has a timeline slider; anybody can understand it.
  • It encourages "too much" WYSIWYG: like MS Word, it's trying to be a typewriter with paper page layouts and templates and logos and fonts and whatnot, and encourages people to waste their time on formatting. Etherpad has WYSIWYG formatting for bold/italic/etc, but it's lightweight and basic and designed for the screen, not paper, so it's not distracting.
There are probably additional things I would hate about Wave and Docs, but I avoid them both already because of the above reasons, so I don't know what those other reasons are. Conversely, I use etherpad frequently and love it. Try it; I think you will too.

Update 2011/03/13: In case you would like to know the true story instead of my made up one (yeah, right; that would be like reading the book instead of watching the TV movie), you can read a response by one of the etherpad creators. Spoiler: they have, at least physically, emerged from the Google Vortex.

Update 2011/03/13: Someone also linked to PiratePad, which is a modification of etherpad that includes #tags and [[hyperlinks]]. That means they accomplished one of my dreams: making it into a wiki!

March 13, 2011 22:32

2011-03-16 »

Parsing ought to be easier

I just realized that I've spent too much time writing about stuff I actually know about lately. (Although okay, that last article was a stretch.) So here's a change of pace.

I just read an interesting article about parsing, that is, Parsing: The Solved Problem that Isn't. It talks about "composability" of grammars, that is, what it would take to embed literal SQL into your C parser, for example.

It's a very interesting question that I hadn't thought of before. Interesting, because every parser I've seen would be *hellish* to try to compose with another grammar. Take the syntax highlighting in your favourite editor and try to have it auto-shift from one language to another (PHP vs. HTML, or python vs. SQL, or perl vs. regex). It never works. Or if you're feeling really brave, take the C++ parser from gcc and use it to do syntax highlighting in wordpress for when you insert some example code. Not likely, right?

The article was a nice reminder of what I had previously learned about parsing in school: context free grammars, LL(k), and so on. Before I went to school, I had never heard of or used any of those things, but I had written a lot of parsers; I now know that what I had independently "invented" is called recursive descent and it seems to be pretty unfashionable among people who "know" parsing.

I admit it; I'm a troublemaker and I fail badly at academia. I still always write all my parsers as recursive descent. Sometimes I don't even split the tokenizer from the grammar. Booyah! I even write non-conforming XML parsers sometimes, and use them for real work.

So if you're a parsing geek, you'd better leave now, because this isn't going to get any prettier.

Anyway, here's my big uneducated non-academic parsing neophyte theory:

You guys spend *way* too much time caring about arithmetic precedence.

See, arithmetic precedence is important; languages that don't understand it (like Lisp) will never be popular, because they prevent you from writing what you mean in a way that looks like what you mean. Fine. You've gotta have it. But it's a problem, because context-free grammars (and their subsets) have a *seriously hard time* with it. You can't just say "addition looks like E+E" and "multiplication looks like E*E" and "an expression E is either a number or an addition or a multiplication", because then 1+2*3 might mean either (1+2)*3 or 1+(2*3), and those are two different things. Every generic parsing algorithm seems to require hackery to deal with things like that. Even my baby, recursive descent, has a problem with it.

So here's what I say: just let it be a hack!

Because precedence is only a tiny part of your language, and the *rest* is not really a problem at all.

When I write a parser that cares about arithmetic precedence - which I do, sometimes - the logic goes like this:

  • ah, there's the number one
  • a plus sign!
  • the number two! Sweet! That's 1+2! It's an expression!
  • a multiplication sign. Uh oh.
  • the number three. Hmm. Well, now we have a problem.
  • (hacky hacky swizzle swizzle) Ta da! 1+(2*3).
I'm not proud of it, but it happens. You know what else? Almost universally, the *rest* of the parser, outside that one little hacky/swizzly part, is fine. The rest is pretty easy. Matching brackets? Backslash escapes? Strings? Function calls? Code blocks? All those are easy and non-ambiguous. You just walk forward in the text one token at a time and arrange your nice tree.

The dirty secret about parsing theory is that if you're a purist, it's almost impossible, but if you're willing to put up with a bit of hackiness in one or two places, it's *really* easy. And now that computers are fast, your algorithm rarely has to be even close to optimized.
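For what it's worth, the most common shape of that hacky/swizzly part (a toy sketch; the names here are all made up) is to give each precedence level its own function in the recursive descent, so the grammar itself does the swizzling:

```python
import re

def tokenize(s):
    # numbers and the operators we care about; whitespace falls away
    return re.findall(r"\d+|[+*()]", s)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def next(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    # expr := term ('+' term)*   -- '+' binds loosest
    def expr(self):
        value = self.term()
        while self.peek() == "+":
            self.next()
            value += self.term()
        return value

    # term := atom ('*' atom)*   -- '*' binds tighter
    def term(self):
        value = self.atom()
        while self.peek() == "*":
            self.next()
            value *= self.atom()
        return value

    # atom := number | '(' expr ')'
    def atom(self):
        tok = self.next()
        if tok == "(":
            value = self.expr()
            self.next()  # consume the ')'
            return value
        return int(tok)

def calc(s):
    return Parser(tokenize(s)).expr()
```

calc("1+2*3") gives 7 and calc("(1+2)*3") gives 9: precedence falls out of which function calls which, at the cost of one extra grammar level per operator tier. It's still a hack - it just hides in the structure instead of in a swizzle step.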

Even language composition is pretty easy, but only in realistic cases, not generalized ones. If you expect this to parse:

	if (parseit) {
		query = "select "booga" + col1 from table where n="}"";
	}

Then you've got a problem. Interestingly, a human can do it. A computer *could* do it too. You can probably come up with an insane grammar that will make that work, if you want to allow for potentially exponential amounts of backtracking and no chance of separating scanning from parsing. (My own preferred recursive descent technique is almost completely doomed here, so you'll need to pull out the really big Ph.D. parsing cannons.) But it *is* possible. You know it is, because you can look at the above code and know what it means.

So that's an example of the "hard problems" that you're talking about when you try to define composability of independent context-free grammars that weren't designed for each other. It's a virtually impossible problem. An interesting one, but not even one that's guaranteed to have a generalizable solution. Compare it, however, with this:

	if (parseit) {
		query = { select "booga" + col1 from table where n = "}" };
	}

Almost the same thing. But this time, the SQL is embedded inside braces instead of quotes. Aha, but that n="}" business is going to screw us over, right? The C-style parser will see the close-brace and stop parsing!

No, not at all. A simple recursive descent parser, without even needing lookahead, will have no trouble with this one, because it will clearly know it's inside a string at the time it sees the closebrace. Obviously you need to be using a SQL-style tokenizer inside the SQL section, and your SQL-style tokenizer needs to somehow know that when it finds the mismatched brace, that's the end of its job, time to give it back to C. So yeah, if you're writing this parser "Avery style", you're going to have to be writing it as one big ugly chunk of C + SQL parser all in one. But it won't be any harder than any other recursive descent parser, really; just twice the size because it's for two languages instead of one.
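To make that concrete, here's a minimal sketch (hypothetical code, not from any real parser) of just the brace-matching part: track nesting depth, but treat quoted strings as opaque, so the '}' inside a string can't end the block.

```python
def extract_braced_block(text, start):
    """Return the contents of the {...} block whose '{' is at text[start],
    tracking nesting depth but treating "..." strings as opaque, so a '}'
    inside quotes doesn't end the block."""
    assert text[start] == "{"
    depth, i = 0, start
    while i < len(text):
        c = text[i]
        if c == '"':                   # opaque string: skip to the close quote
            i += 1
            while i < len(text) and text[i] != '"':
                i += 1
        elif c == "{":
            depth += 1
        elif c == "}":
            depth -= 1
            if depth == 0:
                return text[start + 1 : i]
        i += 1
    raise ValueError("unterminated block")

src = 'query = { select "booga" + col1 from table where n = "}" };'
sql = extract_braced_block(src, src.index("{"))
assert '"}"' in sql                    # the tricky close-brace survived
```

A real composed parser would hand `sql` to a SQL-style tokenizer rather than returning it as a string, but the brace/quote bookkeeping is all the "hard part" there is.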

So here's my dream: let's ignore the parsing purists for a moment. Let's accept that operator precedence is a special case, and just hack around it when needed. And let's only use composability rules that are easy instead of hard - like using braces instead of quotes when introducing sublanguages.

Can we define a language for grammars - a parser generator - that easily lets us do *that*? And just drop into a lower-level implementation for that inevitable operator precedence hack.

Pre-emptive snarky comments: This article sucks. It doesn't actually solve any problems of parsing or even present a design, it just complains a lot. Also, this type of incorrect and dead-end thinking is already well covered in the literature, it's just that I'm too lazy to send you a link to the article right now because I'm secretly a troll and would rather put you down than be constructive. Also, the author smells funny.

Response to pre-emptive snarky comments: All true. I would appreciate those links though, if you have them, and I promise not to call you a troll. To your face.

March 16, 2011 02:13

2011-03-19 »

Celebrating Failure

I receive the monthly newsletter from Engineers Without Borders, an interesting organization providing engineering services to developing countries.

Whether because they're Canadian or because they're engineers, or both, they are unusual among aid organizations because they focus on understanding what didn't work. For the last three years, they've published Failure Reports detailing their specific failures. The reports make an interesting read, not just for aid organizations, but for anyone trying to manage engineering teams.

I wish more organizations, and even more individuals would write documents like that. I probably should too.

March 18, 2011 22:03

2011-03-20 »

Suggested One-Line Plot Summaries, Volume 1 (of 1)

"The Summer of My Disco-tent."

Discuss.

March 20, 2011 20:42

2011-03-24 »

The Google Vortex

For a long time I referred to Google as the Programmer Black Hole: my favourite programmers get sucked in, and they never come out again. Moreover, the more of them that get sucked in, the more its gravitation increases, accelerating the pull on those that remain.

I've decided that this characterization isn't exactly fair. Sure, from our view in the outside world, that's obviously what's happening. But rather than all those programmers being compressed into a spatial singularity, they're actually emerging into a parallel universe on the other side. A universe where there *is* such a thing as a free lunch, threads don't make your programs crash, parallelism is easy, and you can have millions of customers but provide absolutely no tech support and somehow get away with it. A universe with self-driving cars, a legitimate alternative to C, a working distributed filesystem, and the entire Internet cached in RAM.

A universe where on average, each employee produced $425,450 in profit in 2010, after deducting their salary and all other expenses. (Or alternatively: $1.2 million in revenue.)

I don't much like the fact of the Google Vortex. It's very sad to me that there are now two programmer universes: the haves and the have-nots. More than half of the programmers I personally know have already gone to the other side. Once you do, you can suddenly work on more interesting problems, with more powerful tools, with on average smarter people, with few financial constraints... and you don't have to cook for yourself. For the rest of us left behind, the world looks more and more threadbare.

A few people do manage to emerge from the vortex, providing circumstantial evidence that human life does still exist on the other side. Many of them emerge talking about bureaucracy, politics, "big company attitude", projects that got killed, and how things "aren't like they used to be." And also how Google stock options aren't worth much because they've already IPO'd. But sadly, this is a pretty self-selecting group, so you can't necessarily trust what they're complaining about; presumably they'll be complaining about something, or they wouldn't have left.

What you really want to know is what the people who didn't leave are thinking. Which is a problem, because Google is so secretive that nobody will tell you much more than, "Google has free food. You should come work here." And I already knew that.

So let's get to the point: in the name of science (and free food, and because all my friends said I should go work there), I've agreed to pass through the Google Vortex starting on Monday. The bad news for you is, once I get through to the other side, I won't be able to tell you what I discover, so you're no better off. Google doesn't stop its employees from blogging, but you might have noticed that the blogs of Googlers don't tell you the things you really want to know. If the NDA they want me to sign is any indication, I won't be telling you either.

What I can do, however, is give you some clues. Here's what I'm hoping to do at Google:

  • Work on customer-facing real technology products: not pure infrastructure and not just web apps.
  • Help solve some serious internet-wide problems, like traffic shaping, real-time communication, bufferbloat, excessive centralization, and the annoying way that surprise popularity usually means losing more money (hosting fees) by surprise.
  • Keep coding. But apparently unlike many programmers, I'm not opposed to managing a few people too.
  • Keep working on my open source projects, even if it's just on evenings and weekends.
  • Eat a *lot* of free food.
  • Avoid the traps of long release cycles and ignoring customer feedback.
  • Avoid switching my entire life to Google products, in the cases where they aren't the best... at least not without first making them the best.
  • Stay so highly motivated that I produce more valuable software, including revenue, inside Google than I would have by starting a(nother) startup.
Talking to my friends "on the inside", I believe I can do all those things. If I can't achieve at least most of it, then I guess I'll probably quit. Or else I will discover, with the help of that NDA, that there's something even *better* to work on.

So that's my parting gift to you, my birth universe: a little bit of circumstantial evidence to watch for. Not long from now, assuming the U.S. immigration people let me into the country, I'll know too much proprietary information to be able to write objectively about Google. Plus, speaking candidly in public about your employer is kind of bad form; even this article is kind of borderline bad form.

This is probably the last you'll hear from me on the topic. From now on, you'll have to draw your own conclusions.

March 23, 2011 22:29

2011-03-27 »

Time capsule: assorted cooking advice

Hi all,

As I mentioned previously, I'm about to disappear into the Google Vortex, across which are stunning vistas of trees and free food and butterflies as far as the eye can see. Thus, I plan to never ever cook for myself again, allowing me to free up all the neurons I had previously dedicated to remembering how.

Just in case I'm wrong, let me exchange some of those neurons for electrons.

Note that I'm not a "foodie" or a gourmet or any of that stuff. This is just baseline information needed in order to be relatively happy without dying of starvation, boredom, or (overpriced ingredient induced) bankruptcy, in countries where you can die of bankruptcy.

Here we go:

  • Priority order for time-saving appliances: microwave, laundry machine, dishwasher, food processor, electric grill. Under no circumstances should you get a food processor before you get a dishwasher. Seriously. And I include laundry machine here because if you have one, you can do your laundry while cooking, which reduces the net time cost of cooking.

  • Cast-iron frying pans are a big fad among "foodies" nowadays. Normally I ignore fads, but in this case, they happen to be right. Why cast iron pans are better: 1) they're cheap (don't waste your money on expensive skillets! It's cast iron, it's supposed to be cheap!); 2) unlike nonstick coatings, they never wear out; 3) you're not even supposed to clean them carefully, because microbits from the previous meal help the "seasoning" process *and* make food taste better; 4) they never warp; 5) they heat very gradually and evenly, so frying things (like eggs) is reliable and repeatable; 6) you can use a metal flipper and still never worry about damage; 7) all that crap advice about "properly seasoning your skillet before use" is safe to ignore, because nothing you can do can ever possibly damage it, because it's freakin' cast iron. (Note: get one with a flat bottom, not one with ridges. The latter has fewer uses.)

  • You can "bake" a potato by prodding it a few times with a fork, then putting it on a napkin or plate, microwaving it for 7 minutes, and adding butter. I don't know of any other form of healthy, natural food that's as cheap and easy as this. The Irish (from whom I descend, sort of) reputedly survived for many years, albeit a bit unhappily, on a diet of primarily potatoes. (Useless trivia: the terrible "Irish potato famine" was deadly because the potatoes ran out, not because potatoes are bad.) Amaze your friends at work by bringing a week's worth of "lunch" to the office in the form of a sack of potatoes. (I learned that trick from a co-op student once. We weren't paying him much... but we aimed to hire resourceful people with bad negotiating skills, and it paid off.)

  • Boiled potatoes are also easy. You stick them in a pot of water, then boil it for half an hour, then eat.

  • Bad news: the tasty part of food is the fat. Good news: nobody is sure anymore if fat is bad for you or not, or what a transfat even is, so now's your chance to flaunt it before someone does any real science! Drain the fat if you must, but don't be too thorough about it.

  • Corollary: cheaper cuts of meat usually taste better, if prepared correctly, because they have more fat than expensive cuts. "Correctly" usually means just cooking on lower heat for a longer time.

  • Remember that cooking things for longer is not the same as doing more work. It's like wall-clock time vs. user/system time in the Unix "time" command. Because of this, you can astonish your friends by making "roast beef", which needs to cook in the oven for several hours, without using more than about 20 minutes of CPU time.
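The analogy is literal, by the way. If you've never compared the two kinds of time, here's the roast-beef principle in Python:

```python
import time

start_wall = time.time()            # wall-clock time: the oven timer
start_cpu = time.process_time()     # CPU time: work you actually did
time.sleep(1.0)                     # the roast is in the oven; we just wait
wall = time.time() - start_wall
cpu = time.process_time() - start_cpu
# wall is about 1 second; cpu is nearly zero
```

Several hours of wall time, 20 minutes of CPU time: that's a well-designed dinner.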

  • Recipe for french toast: break two eggs into a bowl, add a splash of milk, mix thoroughly with a fork. Dip slices of bread into bowl. Fry in butter in your cast iron pan at medium heat. Eat with maple syrup. And don't believe anyone who tells you more ingredients are necessary.

  • Recipe for perogies: buy frozen perogies from store (or Ukrainian grandmother). Boil water. Add frozen perogies to water. Boil them until they float, which is usually 6-8 minutes. Drain water. Eat.

  • Recipe for meat: Slice, chop, or don't, to taste. Brown on medium-high heat in butter in cast-iron skillet (takes about 2 minutes). Turn heat down to low. Add salt and pepper and some water so it doesn't dry out. Cover. Cook for 45-60 minutes, turning once, and letting the water evaporate near the end.

  • Recipe for vegetables: this is a trick question. You can just not cook them. (I know, right? It's like food *grows on trees*!)
Hope this advice isn't too late to be useful to you. So long, suckers!

March 27, 2011 01:52

2011-03-28 »

I hope IPv6 *never* catches on

(Temporal note: this article was written a few days ago and then time-released.)

This year, like every year, will be the year we finally run out of IPv4 addresses. And like every year before it, you won't be affected, and you won't switch to IPv6.

I was first inspired to write about IPv6 after I read an article by Daniel J. Bernstein (the qmail/djbdns/redo guy) called The IPv6 Mess. Now, that article appears to be from 2002 or 2003, if you can trust its HTTP Last-Modified date, so I don't know if he still agrees with it or not. (If you like trolls, check out the recent reddit commentary about djb's article.) But 8 years later, his article still strikes me as exactly right.

Now, djb's commentary, if I may poorly paraphrase, is really about why it's impossible (or perhaps more precisely, uneconomical, in the sense that there's a chicken-and-egg problem preventing adoption) for IPv6 to catch on without someone inventing something fundamentally new. His point boils down to this: if I run an IPv6-only server, people with IPv4 can't connect to it, and at least one valuable customer is *surely* on IPv4. So if I adopt IPv6 for my server, I do it in addition to IPv4, not in exclusion. Conversely, if I have an IPv6-only client, I can't talk to IPv4-only servers. So for my IPv6 client to be useful, either *all* servers have to support IPv6 (not likely), or I *must* get an IPv4 address, perhaps one behind a NAT.

In short, any IPv6 transition plan involves *everyone* having an IPv4 address, right up until *everyone* has an IPv6 address, at which point we can start dropping IPv4, which means IPv6 will *start* being useful. This is a classic chicken-and-egg problem, and it's unsolvable by brute force; it needs some kind of non-obvious insight. djb apparently hadn't seen any such insight by 2002, and I haven't seen much new since then.

(I'd like to meet djb someday. He would probably yell at me. It would be awesome. </groupie>)

Still, djb's article is a bit limiting, because it's all about why IPv6 physically can't become popular any time soon. That kind of argument isn't very convincing on today's modern internet, where people solve impossible problems all day long using the unstoppable power of "not SQL", Ruby on Rails, and Ajax to-do list applications (ones used by breakfast cereal companies!).

No, allow me to expand on djb's argument using modern Internet discussion techniques:

Top 10 reasons I hope IPv6 never catches on

Just kidding. No, we're going to do this apenwarr-style:

What I hate about IPv6

Really, there's only one thing that makes IPv6 undesirable, but it's a doozy: the addresses are just too annoyingly long. 128 bits: that's 16 bytes, four times as long as an IPv4 address. Or put another way, IPv4 contains almost enough addresses to give one to each human on earth; IPv6 has enough addresses to give 39614081257132168796771975168 (that's 2**95) to every human on earth, plus a few extra if you really must.
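If you don't believe that big number, the arithmetic is easy to check (Python as a calculator here; 2**33, about 8.6 billion, is the round population figure the 2**95 claim implies):

```python
# IPv6 has 2**128 addresses. Split them among roughly 2**33 humans
# (rounding 2011's ~7 billion up to the nearest power of two) and
# you get the 2**95-per-person figure claimed above.
per_human = 2**128 // 2**33
print(per_human)            # 39614081257132168796771975168
print(per_human == 2**95)   # True
```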

Of course, you wouldn't really do that; you would waste addresses to make subnetting and routing easier. But here's the horrible ironic part of it: all that stuff about making routing easier... that's from 20 years ago!

Way back in the IETF dark ages when they were inventing IPv6 (you know it was the dark ages, because they invented the awful will-never-be-popular IPsec at the same time), people were worried about the complicated hardware required to decode IPv4 headers and route packets. They wanted to build the fastest routers possible, as cheaply as possible, and IPv4 routing tables are annoyingly complex. It's pretty safe to assume that someday, as the Internet gets more and more crowded, nearly every single /24 subnet in IPv4 will be routed to a different place. That means - hold your breath - an astonishing 2**24 routes in every backbone router's routing table! And those routes might have 4 or 8 or even 16 bytes of information each! Egads! That's... that's... 256 megs of RAM in every single backbone router!

Oh. Well, back in 1995, putting 256 megs of RAM in a router sounded like a big deal. Nowadays, you can get a $99 Sheevaplug with twice that. And let me tell you, the routers used on Internet backbones cost a lot more than $99.

It gets worse. IPv6 is much more than just a change to the address length; they totally rearranged the IPv4 header format (which means you have to rewrite all your NAT and firewall software, mostly from scratch). Why? Again, to try to reduce the cost of making a router. Back then, people were seriously concerned about making IPv6 packets "switchable" in the same way ethernet packets are: that is, using pure hardware to read the first few bytes of the packet, look it up in a minimal routing table, and forward it on. IPv4's variable-length headers and slightly strange option fields made this harder. Some would say impossible. Or rather, they would, if it were still 1995.

Since then, FPGAs and ASICs and DSPs and microcontrollers have gotten a whole lot cheaper and faster. If Moore's Law calls for a doubling of transistor performance every 18 months, then between 1995 and 2011 (16 years), that's 10.7 doublings, or 1663 times more performance for the price. So if your $10,000 router could route 1 gigabit/sec of IPv4 in 1995 - which was probably pretty good for 1995 - then nowadays it should be able to route 1663 gigabits/sec. It probably can't, for various reasons, but you know what? I sincerely doubt that's IPv4's fault.
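The back-of-envelope numbers in the last few paragraphs are easy to verify, with the same rounding I used above:

```python
# 2**24 routes (one per IPv4 /24) at a worst-case 16 bytes each:
print(2**24 * 16 // 2**20)   # 256 (MiB of routing table)

# Moore's law from 1995 to 2011, one doubling every 18 months:
doublings = (2011 - 1995) / 1.5
print(round(doublings, 1))   # 10.7
print(round(2 ** 10.7))      # 1663 (times more performance per dollar)
```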

If it were still 1995 and you had to route, say, 10 gigabits/sec for the same price as your old 1 gigabit/sec router using the same hardware technology, then yeah, making a more hardware-friendly packet format might be your only option. But the router people somehow forgot about Moore's Law, or else they thought (indications are that they did) that IPv6 would catch on much faster than it has.

Well, it's too late now. The hardware-optimized packet format of IPv6 is worth basically zero to us on modern technology. And neither is the simplified routing table. But if we switch to IPv6, we still have to pay the software cost of those things, which is extremely high. (For example, Linux IPv6 iptables rules are totally different from IPv4 iptables rules. So every Linux user would need to totally change their firewall configuration.)

So okay, the longer addresses don't fix anything technologically, but we're still running out of addresses, right? I mean, you can't argue with the fact that 2**32 is less than the number of people on earth. And everybody needs an IP address, right?

Well, no, they don't:

The rise of NAT

NAT is Network Address Translation, sometimes called IP Masquerading. Basically, it means that as a packet goes through your router/firewall, the router transparently changes your IP address from a private one - one reused by many private subnets all over the world and not usable on the "open internet" - to a public one. Because of the way TCP and UDP work, you can safely NAT many, many private addresses onto a single public address.
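If the mechanism is unfamiliar, here's a toy full-cone NAT table in Python. All the addresses and the port range are invented for illustration, and a real NAT does this per-packet in the kernel, but the core data structure really is this simple:

```python
# A toy NAT: many private (ip, port) pairs share one public IP by
# each being assigned a distinct public port on first use.
PUBLIC_IP = "203.0.113.7"   # made-up public address

class Nat:
    def __init__(self):
        self.out = {}         # (priv_ip, priv_port) -> public port
        self.back = {}        # public port -> (priv_ip, priv_port)
        self.next_port = 40000

    def outbound(self, priv_ip, priv_port):
        # Rewrite an outgoing packet's source, creating a mapping
        # the first time this private endpoint sends anything.
        key = (priv_ip, priv_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return (PUBLIC_IP, self.out[key])

    def inbound(self, public_port):
        # An incoming packet with no mapping has nowhere to go,
        # so it's simply dropped.
        return self.back.get(public_port)

nat = Nat()
print(nat.outbound("192.168.1.10", 5555))   # ('203.0.113.7', 40000)
print(nat.outbound("192.168.1.11", 5555))   # ('203.0.113.7', 40001)
print(nat.inbound(40001))                   # ('192.168.1.11', 5555)
print(nat.inbound(12345))                   # None - nobody home
```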

So no. Not everybody in the world needs a public IP address. In fact, *most* people don't need one, because most people make only outgoing connections, and you don't need your own public IP address to make an outgoing connection.

(Update 2011/04/02: A lot of people have criticized this article by talking about how nasty it is to require NAT everywhere. If we had too much NAT, the whole world would fall apart, etc. That argument is awfully hard to understand: any user with a home wireless router has a computer behind a NAT. The world hasn't come to an end. What makes you think we can't handle a little more of the same?)

By the way, the existence of NAT (and DHCP) has largely eliminated another big motivation behind IPv6: automatic network renumbering. Network renumbering used to be a big annoying pain in the neck; you'd have to go through every computer on your network, change its IP address, router, DNS server, etc, rewrite your DNS settings, and so on, every time you changed ISPs. When was the last time you heard about that being a problem? A long, long time ago, because once you switch to private IP subnets, you virtually never have to renumber again. And if you use DHCP, even the rare mandatory renumbering (like when you merge with another company and you're both using 192.168.1.0/24) is rolled out automatically from a central server.

Okay, fine, so you don't need more addresses for client-only machines. But every server needs its own public address, right? And with the rise of peer-to-peer networking, everyone will be a server, right?

Well, again, no, not really. Consider this for a moment:

Every HTTP Server on Earth Could Be Sharing a Single IP Address and You Wouldn't Know The Difference

That's because HTTP/1.1 (which is what *everyone* uses now... speaking of avoiding chicken/egg problems) supports "virtual hosts." You can connect to an IP address on port 80, and you provide a Host: header at the beginning of the connection, telling it which server name you're looking for. The IP you connect to can then decide to route that request anywhere it wants.
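Here's the mechanism in miniature: a toy dispatcher that picks a backend purely from the Host: header of a raw HTTP/1.1 request. The hostnames and backend addresses are made up, but this is the whole trick:

```python
# Toy name-based virtual hosting: one IP, many hostnames.
BACKENDS = {
    "www.example.com":  ("10.0.0.5", 80),
    "blog.example.com": ("10.0.0.6", 8080),
}

def route(raw_request):
    # Find the Host: header in a raw HTTP/1.1 request, strip any
    # :port suffix, and pick the matching backend (or None).
    for line in raw_request.split("\r\n")[1:]:
        if line.lower().startswith("host:"):
            host = line.split(":", 1)[1].strip().split(":")[0]
            return BACKENDS.get(host)
    return None

req = "GET / HTTP/1.1\r\nHost: blog.example.com\r\n\r\n"
print(route(req))   # ('10.0.0.6', 8080)
```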

In short, HTTP is IP-agnostic. You could run HTTP over IPv4 or IPv6 or IPX or SMS, if you wanted, and you wouldn't need to care which IP address your server had. In a severely constrained world, Linode or Slicehost or Comcast or whoever could simply proxy all the incoming HTTP requests to their network, and forward the requests to the right server.

(See the very end of this article for discussion of how this applies to HTTPS.)

Would it be a pain? Inefficient? A bit expensive? Sure it would. So was setting up NAT on client networks, when it first arrived. But we got used to it. Nowadays we consider it no big deal. The same could happen to servers.

What I'd expect to happen is that as the IPv4 address space gets more crowded, the cost of a static IP address will go up. Thus, fewer and fewer public IP addresses will be dedicated to client machines, since clients won't want to pay extra for something they don't need. That will free up more and more addresses for servers, who will have to pay extra.

It'll be a *long* time before we reach 4 billion (2**32) server IPs, particularly given the long-term trend toward more and more (infinitely proxyable) HTTP. In fact, you might say that HTTP/1.1 has successfully positioned itself as the winning alternative to IPv6.

So no, we are obviously not going to run out of IPv4 addresses. Obviously. The world will change, as it did when NAT changed from a clever idea to a worldwide necessity (and earlier, when we had to move from static IPs to dynamic IPs) - but it certainly won't grind to a halt.

Next:

It is possible to do peer-to-peer when both peers are behind a NAT.


Another argument against widespread NATting is that you can't run peer-to-peer protocols if both ends are behind a NAT. After all, how would they figure out how to connect to each other? (Let's assume peer-to-peer is a good idea, for purposes of this article. Don't just think about movie piracy; think about generally improved distributed database protocols, peer-to-peer filesystem backups, and such.)

I won't go into this too much, other than to say that there are already various NAT traversal protocols out there, and as NAT gets more and more annoyingly mandatory, those protocols and implementations are going to get much better.
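The core trick behind most of those traversal protocols is simultaneous sending: each NAT sees an outbound flow before the peer's inbound packet arrives, so the inbound packet matches an existing mapping. Here's the shape of it, simulated with two UDP sockets on localhost (no real NAT involved, obviously - on a real network each peer would first learn the other's public ip:port from a rendezvous server):

```python
import socket

# Two "peers" on localhost, each bound to an ephemeral port.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a.settimeout(2)
b.settimeout(2)

# Step 1: both sides send first. On a real NAT, this outbound packet
# is what creates the mapping that lets the reply come back in.
a.sendto(b"punch", b.getsockname())
b.sendto(b"punch", a.getsockname())

# Step 2: both receives now succeed, because each inbound packet
# matches an already-established outbound flow.
msg_at_b, _ = b.recvfrom(64)
msg_at_a, _ = a.recvfrom(64)
print(msg_at_a, msg_at_b)   # b'punch' b'punch'
a.close()
b.close()
```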

(Update 2011/04/02: Clarification: I'm not claiming here that NAT traversal is currently standardized or very reliable. I'm just saying that it already sort of works - ask any Bittorrent user - and moreover, that it can be improved *without* having to upgrade/reconfigure every IP stack and every router in the world first. As more and more people end up getting forcibly stuck behind an ISP-controlled IPv4 NAT, you can bet that some huge innovation around NAT traversal will start to materialize. And it will work with multi-layer NAT, too, I guarantee it.)

Note too that NAT traversal protocols don't have a chicken-and-egg problem like IPv6 does, for the same reason that dynamic IP addresses don't, and NAT itself doesn't. The reason is: if one side of the equation uses it, but the other doesn't, you might never know. That, right there, is the one-line description of how you avoid chicken-and-egg adoption problems. And how IPv6 didn't.

IPv6 addresses are as bad as GUIDs

So here's what I really hate about IPv6: 16-byte (32 hex digit) addresses are impossible to memorize. Worse, auto-renumbering of networks, facilitated by IPv6, means that anything I memorize today might be totally different tomorrow.

IPv6 addresses are like GUIDs (which, notably, also got really popular in the 1990s dark ages, although luckily most of us have learned our lessons since then). The problem with GUIDs is now well-known: although they're globally unique, they're also totally unrecognizable.

If GUIDs were a good idea, we would use them instead of URLs. Are URLs perfect? Does anyone love Network Solutions? No, of course not. But they're 1000x better than looking at http://b05d25c8-ad5c-4580-9402-106335d558fe and trying to guess whether that's *really* my bank's web site or not.

The counterargument, of course, is that DNS is supposed to solve this problem. Give each host a GUID IPv6 address, and then just map a name to that address, and you can have the best of both worlds.

Sounds good, but isn't actually. First of all, go look around in the Windows registry sometime, specifically the HKEY_CLASSES_ROOT section. See how super clean and user-friendly it isn't? Barf. But furthermore, DNS [as generally configured on the Internet] is still a steaming pile of hopeless garbage. When I bring my laptop to my friend's house and join his WLAN, why can't he ping it by name? Because DNS [implementations] suck. Why doesn't it show up by name in his router control panel so he knows which box is using his bandwidth? Because DNS [implementations] suck. Why can the Windows server browse list see it by name (sometimes, after a random delay, if you're lucky), even though DNS can't? Because they got sick of DNS and wrote something that works. (mDNS, while based on DNS, is really a very new thing.) Why do we still send co-workers hyperlinks with IP addresses in them instead of hostnames? Because the fascist sysadmin won't add a DNS entry for the server Bob set up on his desktop PC.

DNS [implementations] are, at best, okay. [They] will get better over time, as necessity dictates. All the problems I listed above are mostly solved already, in one form or another, in different DNS, DHCP, and routing products [but no single product gets everything right, and not everybody uses the best DNS server products, so the net result is confusion and mess]. It's certainly not the DNS *protocol* that's to blame, it's just how people use it.

(Update 2011/04/02: in the two paragraphs above, clarified that I mean *existing DNS implementations*, not the DNS protocol. Yes, mDNS is lovely; too bad most normal people still don't have a working install of it. I hope someday we all have something like that; it will benefit IPv4 and IPv6 users equally.)

But still, if you had to switch to IPv6, you'd discover that those DNS problems that were a nuisance yesterday are suddenly a giant fork stabbing you in the face today. I'd rather they fixed DNS *before* making me switch to something where I can't possibly remember my IP addresses anymore, thanks.

(Update 2011/04/02: To be extra clear here: I am saying that DNS is *currently* not good enough and has many obvious routes to improvement, none of which pose chicken-and-egg adoption problems like IPv6 does. DNS can be improved. People are actively working on it; I love those people! And yes, the fact that DNS implementations suck is annoying with both IPv4 and IPv6. My point in this section is just that with IPv6, you can't work around the annoyance of sucky DNS as easily as with IPv4, because IPv4 addresses can be memorized, while IPv6 ones can't. Furthermore, if we fix DNS, it will help IPv4 *and* IPv6. So my advice to IPv6 proponents: fix DNS first, *then* maybe we can talk about IPv6.)

Server-side NAT could actually make the world a better place

So that's my IPv6 rant. I want to leave you with some good news, however: I think the increasing density of IPv4 addresses will actually make the Internet a better place, not a worse one.

Client-side NAT had an unexpected huge benefit: security. NAT is like "newspeak" in Orwell's 1984: we remove nouns and verbs to make certain things inexpressible. For example, it is not possible for a hacker in Outer Gronkstown to even express to his computer the concept of connecting to the Windows File Sharing port on your laptop, because from where he's standing, there is no name that unambiguously identifies that port. There is no packet, IPv4 or IPv6 or otherwise, that he can send that will arrive at that port.

A NAT can be unbelievably simple-minded, and just because of that one limitation, it will vastly, insanely, unreasonably increase your security. As a society of sysadmins, we now understand this. You could give us all the IPv6 addresses in the world, and we'd still put our corporate networks behind a NAT. No contest.

Server-side NAT is another thing that could actually make life better, not worse. First of all, it gives servers the same security benefits as clients - if I accidentally leave a daemon running on my server, it's not automatically a security hole. (I actually get pretty scared about the vhosts I run, just because of those accidental holes.)

But there's something else, which I would be totally thrilled to see fixed. You see, IPv4 addresses aren't really 32 bits. They're actually 48 bits: a 32-bit IP address plus a 16-bit port number. People treat them as separate things, but what NAT teaches us is that they're really two parts of the same whole - the flow identifier - and you can break them up any way you want.

The address of my personal apenwarr.ca server isn't 74.207.252.179; it's 74.207.252.179:80. As a user of my site, you didn't have to type the IP (which was provided by DNS) or the port number (which is a hardcoded default in your web browser), but if I started another server, say on port 8042, then you *would* have to enter the port. Worse, the port number would be a weird, meaningless, magic number, akin to memorizing an IP address (though mercifully, only half as long).
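To make that concrete, here's my server's "real" 48-bit address packed into a single number. Pure illustration - nothing on the wire actually looks like this - but it shows how neatly the two halves fit together:

```python
import ipaddress

def flow_addr(ip, port):
    # 32-bit IPv4 address in the high bits, 16-bit port in the low bits:
    # one 48-bit flow endpoint.
    return (int(ipaddress.IPv4Address(ip)) << 16) | port

print(hex(flow_addr("74.207.252.179", 80)))   # 0x4acffcb30050
```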

So here's my proposal to save the Internet from IPv6: let's extend DNS to give out not only addresses, but port numbers. So if I go to www2.apenwarr.ca, it could send me straight to 74.207.252.179:8042. Or if I ask for ssh.apenwarr.ca, I get 74.207.252.179:22.

Someday, when IPv4 addresses get too congested, I might have to share that IP address with five other people, but that'll be fine, because each of us can run our own web server on something other than port 80, and DNS would transparently give out the right port number.

This also solves the problem with HTTPS. Alert readers will have noticed, in my comments above, that HTTPS can't support virtual hosts the same way HTTP does, because of a terrible flaw in its certificate handling. Someday, someone might make a new version of the HTTPS standard without this terrible flaw, but in the meantime, transparently supporting multiple HTTPS servers via port numbers on the same machine eliminates the problem; each port can have its own certificate.

(Update 2011/03/28: zillions of people wrote to remind me about SNI, an HTTPS extension that allows it to work with vhosts. Thanks! Now, some of those people seemed to think this refutes my article somehow, which is not true. In fact, the existence of an HTTPS vhosting standard makes IPv6 even *less* necessary. Then again, the standard doesn't work with IE6.)

This proposal has very minor chicken-and-egg problems. Yes, you'll have to update every operating system and every web browser before you can safely use it for *everything*. But for private use - for example, my personal ssh or VPN or testing web server - at least it'll save me remembering stupid port numbers. Like the original DNS, it can be adopted incrementally, and everyone who adopts it sees a benefit. Moreover, it's layered on top of existing standards, and routable over the existing Internet, so enabling it has basically zero admin cost.

Of course, I can't really take credit for this idea. The solution, DNS SRV records, has already been invented and is being used in a few places.
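In SRV syntax (RFC 2782), the examples above would look something like this. These are hypothetical records - I haven't actually published them:

```
; _service._proto.name        TTL  class type priority weight port target
_http._tcp.www2.apenwarr.ca.  3600 IN    SRV  0        0      8042 apenwarr.ca.
_ssh._tcp.ssh.apenwarr.ca.    3600 IN    SRV  0        0      22   apenwarr.ca.
```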

Embrace IPv4. Love it. Appreciate the astonishing long-lasting simplicity and resilience of a protocol that dates back to the 1970s. Don't let people pressure you into stupid, awful, pain-inducing, benefit-free IPv6. Just relax and have fun.

You're going to be enjoying IPv4 for a long, long time.

...

Update 2011/04/02: Sam Stickland adds, "Regarding the problem of 2^24 routes in the global table, the problem isn't the amount of memory - it's whether the routing computation can take place before the next change in the topology. If it can't, the network topology might never stabilise. There is a lot of evidence to suggest that Moore's Law won't save us from this growth. RFC4984 goes into a lot of detail on this subject.

"Unfortunately IPv6 has exactly the same problem, and its failure to deal with this is, IMNSHO, its greatest failing. Consider, for example, 16 million multi homed organisations! Given how critical the internet is becoming in many parts of our lives this doesn't seem particularly far fetched.

"Some people believe the problem is that an IP address represents both a routing location and a host identity, and splitting these will solve these scaling issues. RFC6115 gives an overview of all the various proposed solutions - what a shame this wasn't realised when IPv6 was being designed."

...

Update 2011/04/02: There are some interesting comments at Ycombinator News. The most redeeming thing about most of them is that at least now I know people don't just blindly agree with everything I say (which was getting to be a scary trend lately). The bad news is that most of the posts are knee-jerk reactions from people who have religiously bought into IPv6 without actually thinking about it. (Check for yourself: how many of my claims are logic or facts vs. just opinion? How many of the commenters got 40+ upvotes with mostly ad-hominem arguments? (Bonus points for swearing to distract from unproven assertions.) How many of them have very low or negative scores just because they made a point against IPv6? See also: What you can't say.)

This article had the most negative reactions - and thus, of course, the most viewers - since my earlier sequence about XML Parsing and Postel's Law. Anyway, rather than replying to what would certainly be a useless flamewar, I've used the news.yc input to add some clarifications above, for the benefit of future readers. (All of them are marked with "Update 2011/04/02" or [] brackets.)

There is one really valid counterpoint that a few people made that I failed to bring up in my article above. That is: you never know how making things easier - in this case, giving every computer a non-NATted address - will encourage innovation and make new things possible that weren't possible before. It's possible that the whole world would be filled with working peer-to-peer software by now if everyone had a static IP and there was no such thing as NAT. I don't actually believe that specific utopian version, but there really is a valid argument that real innovation might have been lost because of NAT. Conversely, though, the need for NAT is also the primary reason every home router has a built-in firewall nowadays (you *know* someone would sell a super-cheap firewall-less router if they could, and you *know* a lot of people would buy it). I remember the days before NAT was widespread. The biggest innovation in "peer-to-peer" at that time was an endless series of internet worms and viruses. Maybe in this case, making it too easy to make incoming connections didn't make things better, it made things worse.

...

Update 2011/04/02: One more thing: some people have commented to the effect that "his opinions about IPv6 will change after he joins Google." First of all, the opinions on this blog have always been and will continue to be mine, not my employer's, and I don't randomly change my personal opinions based on who I work for. And secondly, even if Google does use IPv6 internally - I don't know if they do or not - it won't make any difference. They'll still be talking to apenwarr.ca over IPv4, and everything I wrote here will still be just as true.

April 3, 2011 17:50
