
2012-05-01 »

Okay, this is awesome: http://blog.xamarin.com/2012/05/01/android-in-c-sharp/

While I'm suspicious of all JIT-compiled languages, C# has always had a much better IL than Java, ever since version 1.0, making it easy to win benchmarks like this (and it translates to the real world too). I don't honestly expect Google to start switching en masse from Java to C# - if only! - but it's an amazing experiment and I love some of their reusable optimizations (like writing a C# layer that interfaces directly to the native rendering libraries, bypassing the slow Java stuff, even in your C# apps on normal Android).

For anybody familiar with VM design and optimization, performance discussions about .NET vs. Java go something like this:

.NET guy: Value types.

Java guy: Okay, you win.

2012-05-08 »

TCP doesn't suck, and all the proposed bufferbloat fixes are identical

Background: a not very clear article in ACM Queue led to a post by Bram Cohen claiming TCP sucks.

The first article is long and seems technically correct, although in my opinion it over-emphasizes unnecessary details and under-emphasizes some really key points. The second article then proceeds to misunderstand many of those key points and draw invalid conclusions, while attempting to argue in favour of a solution (uTP) that is actually a good idea. So I'm writing this, I suppose, to refute the second article in order to better support its thesis. That makes sense, right? No? Well, I'm doing it anyway.

First of all, the main problem we're talking about here, "bufferbloat," is actually two problems that we'd better separate. To oversimplify only a little, problem #1 is oversized upstream queues in your cable modem or DSL router. Problem #2 is oversized queues everywhere else on the Internet.

The truth is, for almost everyone reading this, you don't care even a little bit about problem #2. It isn't what makes your Internet slow. If you're running an Internet backbone, maybe you care because you can save money, in which case, go hire a consultant to figure out how to fine tune your overpriced core routers. Jim Gettys and others are on a crusade to get more ISPs to do this, which I applaud, but that doesn't change the fact that it's irrelevant to me and you because it isn't causing our actual problem. (Van Jacobson points this out a couple of times in the course of the first article, but one gets the impression nobody is listening. I guess "the Internet might collapse" makes a more exciting article.)

What I want to concentrate on is problem #1, which actually affects you and which you have some control over. The second article, although it doesn't say so, is also focused on that. The reason we care about that problem is that it's the one that makes your Internet slow when you're uploading stuff. For example, when you're running (non-uTP) BitTorrent.

This is where I have to eviscerate the second article (which happens to be by the original BitTorrent guy) a little. I'll start by agreeing with his main point: uTP, used by modern BitTorrent implementations, really is a very good, very pragmatic, very functional, already-works-right-now way to work around those oversized buffers in your DSL/cable modem. If all your uploads use uTP, it doesn't matter how oversized the buffers are in your modem, because they won't fill up, and life will be shiny.

The problem is, uTP is a point solution that only solves one problem, namely creating a low-priority uplink suitable for bulk, non-time-sensitive uploads that intentionally give way to higher priority stuff. If I'm videoconferencing, I sure do want my BitTorrent to scale itself back, even down to zero, in favour of my video and audio. If I'm waiting for my browser to upload a file attachment to Gmail, I want that to win too, because I'm waiting for it synchronously before I can get any more work done. In fact, if my next-door neighbour and I are sharing part of the same Internet link, I want my BitTorrent to scale itself back even to help out his Gmail upload, in the hope that he'll do the same for me (automatically of course) when the time comes. uTP does all that. But for exactly that reason, it's no good for my Gmail upload or my ssh sessions or my random web browsing. If I used uTP for all those things, then they'd all have the same priority as BitTorrent, which would defeat the purpose.

That gives us a clue to the problem in Cohen's article: he's really disregarding how different protocols interoperate on the Internet. (He discounts this as "But game theory!" as if using sarcasm quotes would make game theory stop predicting inconvenient truths.) uTP was designed to interact well with TCP. It was also designed for a world with oversized buffers. TCP, of course, also interacts well with TCP, but it never considered bufferbloat, which didn't exist at the time. Our bufferbloat problems - at least, the thing that turns bufferbloat from an observation into a problem - come down to just that: they couldn't design for it, because it didn't exist.

Oddly enough, fixing TCP to work around bufferbloat is pretty easy. The solution is "latency-based TCP congestion control," the most famous implementation of which is TCP Vegas. Sadly, when you run it or one of its even better successors, you soon find out that old-style TCP always wins, just like it always wins over uTP, and for exactly the same reason. That means, essentially, that if anyone on the Internet is sharing bandwidth with you (they are), and they're running traditional-style TCP (virtually everyone is), then TCP Vegas and its friends make you a sucker with low speeds. Nobody wants to be a sucker. (This is the game theory part.) So you don't want to run latency-based TCP unless everyone else does first.
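
(If you want to play with this, Linux lets you pick the congestion control algorithm per socket. Here's a minimal sketch in Python - it assumes Linux, Python 3.6+ for socket.TCP_CONGESTION, and that the tcp_vegas module is available on your kernel; you may also need root if vegas isn't in the allowed list.)

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Ask the kernel to use Vegas (latency-based) for just this socket.
    # May raise OSError if tcp_vegas isn't loaded, or PermissionError if it
    # isn't in net.ipv4.tcp_allowed_congestion_control and we aren't root.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"vegas")
    # Read it back to confirm which algorithm we actually got.
    print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16).split(b"\0")[0])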

If you're Bram Cohen, you decide this state of affairs "sucks" and try to single-handedly convince everyone on the Internet to simultaneously upgrade their TCP stack (or replace it with uTP; same undeniable improvement, same difficulty). If you co-invented the Internet, you probably gave up on that idea in the 1970's or so, and are thinking a little more pragmatically. That's where RED (and its punny successors like BLUE) come in.

Now RED, as originally described, is supposed to run on the actual routers with the actual queues. As long as you know the uplink bandwidth (which your modem does know, outside annoyingly variable things like wireless), you can fairly easily tune the RED algorithm to an appropriate target queue length and off you go.
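
(For a rough feel of the numbers - my own back-of-the-envelope, not anything from the article - the standing queue you actually want on a home uplink is tiny compared to the buffers these modems ship with:)

    # How big a queue do you actually want on a home uplink?
    # Queue bytes = uplink rate x the worst queueing delay you'll tolerate.
    uplink_bits_per_sec = 6e6     # e.g. a 6 Mbit/sec cable upload
    target_delay_sec = 0.1        # don't let the queue add more than ~100ms
    print(uplink_bits_per_sec * target_delay_sec / 8, "bytes")   # 75000.0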

By the way, a common misconception about RED, one which VJ briefly tried to dispel in the first article ("mark or drop it") but which is again misconstrued in Cohen's article, is that if you use traditional TCP plus RED queuing, you will still necessarily have packet loss. Not true. The clever thing about RED is you start managing your queue before it's full, which means you don't have to drop packets at all - you can just add a note before forwarding that says, "If I weren't being so nice to you right now, I would have dropped this," which tells the associated TCP session to slow down, just like a dropped packet would have, without the inconvenience of actually dropping the packet. This technique is called ECN (explicit congestion notification), and it's incidentally disabled on most computers right now because of a tiny minority of servers/routers that still explode when you try to use it. That sucks, for sure, but it's not because of TCP, it's because of poorly-written software. That software will be replaced eventually. I assure you, fixing ECN is a subset of replacing the TCP implementation for every host on the Internet, so I know which one will happen sooner.

(By the way, complaints about packet dropping are almost always a red herring. The whole internet depends on packet dropping, and it always has, and it works fine. The only time it's a problem is with super-low-speed interactive connections like ssh, where the wrong pattern of dropped packets can cause ugly multi-second delays even on otherwise low-latency links. ECN solves that, but most people don't use ssh, so they don't care, so ECN ends up being a low priority. If you're using ssh on a lossy link, though, try enabling ECN.)
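
(Checking whether ECN is on doesn't need any special tools, at least on Linux; this just reads the usual sysctl through /proc:)

    # net.ipv4.tcp_ecn: 0 = off, 1 = request ECN on outgoing connections too,
    # 2 = only use it when the other end asks for it (a common default).
    with open("/proc/sys/net/ipv4/tcp_ecn") as f:
        print("net.ipv4.tcp_ecn =", f.read().strip())

Writing a 1 back to that file (as root) is the "enable ECN" step I mentioned.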

The other interesting thing about RED, somewhat glossed over in the first article, is VJ's apology for mis-identifying the best way to tune it. ("...it turns out there's nothing that can be learned from the average queue size.") His new recommendation is to "look at the time you've had a queue above threshold," where the threshold is defined as the long-term observed minimum delay. That sounds a little complicated, but let me paraphrase: if the delay goes up, you want to shrink the queue. Obviously.

To shrink the queue, you "mark or drop" packets using RED (or some improved variant).

When you mark or drop packets, TCP slows down, reducing the queue size.

In other words, you just implemented latency-based TCP. Or uTP, which is really just the same thing again, at the application layer.

There's a subtle difference though. With this kind of latency-self-tuning RED, you can implement it at the bottleneck and it turns all TCP into latency-sensitive TCP. You no longer depend on everyone on the Internet upgrading at once; they can all keep using traditional TCP, but if they're going through a bottleneck with this modern form of RED, that bottleneck will magically keep its latencies low and sharing fair.
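
If the shape of that control loop isn't obvious, here's a toy sketch - emphatically not the real RED or CoDel algorithm, just the idea: once the queue delay has stayed above a small target for a while, start telling TCP to back off.

    class LatencyAQM:
        TARGET = 0.005     # tolerate ~5ms of standing queue delay
        INTERVAL = 0.100   # ...but only react if it persists this long

        def __init__(self):
            self.above_since = None

        def should_mark(self, queue_delay, now):
            """queue_delay: seconds the head-of-queue packet has waited;
            now: current time in seconds.  Returns True when we should
            ECN-mark (or, failing that, drop) to make TCP slow down."""
            if queue_delay < self.TARGET:
                self.above_since = None    # queue drained; relax
                return False
            if self.above_since is None:
                self.above_since = now     # delay just crossed the target
            return (now - self.above_since) >= self.INTERVAL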

Phew. Okay, in summary:

  • If you can convince everybody on the internet to upgrade, use latency-sensitive TCP. (Bram Cohen)
  • Else If you can control your router firmware, use RED or BLUE. (Jim Gettys and Van Jacobson)
  • Else If you can control your app, use uTP for bulk uploads. (Bram Cohen)
  • Else You have irreconcilable bufferbloat.

All of the above are the same solution, implemented at different levels. Doing any of them solves your problem. Doing more than one is perfectly fine. Feel happy that multiple Internet superheroes are solving the problem from multiple angles.

Or, tl;dr: yes - if you use BitTorrent, enable uTP and mostly you'll be fine.

Update 2012/05/09: Paddy Ganti sent me a link to Controlling Queue Delay, May 6, 2012, a much more detailed and interesting ACM Queue article talking about a new CoDel algorithm as an alternative to RED. It's by Kathleen Nichols and Van Jacobson and uses a target queue latency of 5ms on variable-speed links. Seems like pretty exciting stuff.

2012-05-15 »

Looking into subscribing to cable TV in order to test drive various video equipment. Became confused that apparently you have to subscribe to a large set of "channels" filled with junk (plus ads), that occasionally (not often) include copies of the actual show you want to watch. As opposed to, you know, just sending you the show you requested. Also it's expensive.

Clearly I have been away from the normal-people world for far too long.

2012-05-24 »

Just got an HD Homerun (and cable TV) at home so I can play with multicast and actually test some toys I'm working on. (The Homerun is a pretty neat device: it can tune up to 3 digital cable TV channels and rebroadcast them over ethernet using IP multicast. VLC can view them.) So I've been watching CNN in the background to see what happens. Things I learned today:

1) My home 802.11n network can just barely manage a tolerable 13 Mbit/sec without dying completely. It's still glitchy, like watching digital satellite TV in the rain. Lower-bandwidth channels work fine. Direct ethernet connection is (unsurprisingly) flawless. It'll be interesting to try MoCA later.

2) CNN looks very professional but insults my intelligence repeatedly. I guess that's not such an amazing discovery.

3) New York is apparently the "third-ranked park city" in America. I was going to make a snarky comment that sure, it has a lot of parks, if you like looking at nature through spiked iron fences. But I guess what they meant by "third-ranked park city" is "city with third-ranked city park", ie. Central Park, which I guess is indeed an exception to the nature-behind-iron-fences rule. Thank you, CNN.

Anyway, IP multicast. Kinda fun, it turns out. But TV: still stupid. Overall I guess it's a wash.
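
If you want to poke at one of those streams without VLC, joining a multicast group only takes a few lines. A rough sketch (the group and port below are placeholders - the HDHomerun sends wherever you tell it to, so substitute whatever you configured):

    import socket, struct

    GROUP, PORT = "239.255.0.1", 5004   # placeholders, not HDHomerun defaults

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    # Join the multicast group on the default interface.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, addr = s.recvfrom(2048)   # each datagram is a chunk of MPEG-TS
        print(len(data), "bytes from", addr[0])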

2012-05-26 »

Here are some MoCA test results from my experiments at home, with some totally off-topic comments thrown in for extra flavour.

I had heard rumours that MoCA ("Multimedia over Coax Alliance", one of the most terrible names for a standard since 802.11ab) only works on an otherwise "interference-free" home coax network: that if you had actual cable TV and/or cablemodem traffic, it would interfere. That's why services like FiOS (which delivers its data over fiber, not coax) provide MoCA; because those now-idle coax cables in your home can be put to good use.

Anyway, the rumours are wrong. I have digital cable TV (as of Thursday), a DOCSIS 3 cable modem, and now MoCA (as of Friday), and it all works together on the same wires. Here's a FAQ: http://mocablog.net/faq/ . The key observation is that MoCA uses a frequency range around 850-1550 MHz, while cable TV comes in under 850. (The FAQ tells us that satellite TV does use those higher frequencies, so MoCA isn't compatible with a satellite signal. I have no way of testing this but it makes sense.) While I was researching, I read up on DOCSIS 3, which of course is an extension of the earlier DOCSIS (cable modem) standards. The only big change it makes is allowing "bonding" of multiple upstream/downstream channels to increase your transfer rates.

This, in turn, gives me a clue about how DOCSIS works: it literally takes over unused TV channels and uses them for data. Kind of neat. And knowing that, we can also conclude that DOCSIS comes in under 850 MHz as well, so it also doesn't conflict with MoCA.

These theoretical results bear out in practice. With my system fully connected (cable -> cablemodem -> linksys -> 100Mbit ethernet -> MoCA master -> MoCA slave -> laptop), speedtest.net measures 48MBit/sec downstream and 5.6MBit/sec upstream, which is comfortably close to what I'm paying for: 50MBit/sec down and 6MBit/sec up. During that test (which, note, is using the same cable to transmit MoCA and DOCSIS), the TV keeps on coming, glitch-free.

A quick one-stream TCP iperf test between the two MoCA devices gives 60 MBits/sec, which is less than I get when the two are isolated on their own network with just a short coax cable between them, but still more than Wifi, which is the important thing. There's also next-to-no packet loss, which is much better than Wifi.

Speaking of packet loss, my experiments with an HDHomerun Prime (cable TV to IPTV converter) box, separately and together with some slightly-hacked-up DVR software, have been quite informative. The HDHomerun device is very classy: zero configuration (there's a web interface but it has no settings). You just plug cable into one side and ethernet into the other, and there's a really simple control protocol along with some sample software. The software uses that protocol to ask the box to tune to a particular channel and beam the packets - it's already a digital video stream, after all - to a given unicast or multicast UDP address:port. Programs like VLC then know how to receive UDP video streams and display them. And that's that.

What's informative, though, is noticing what happens in case of packet loss in such a stream. UDP doesn't guarantee delivery of any particular data: if it gets dropped, it gets dropped. This is nice for something like a digital cable converter, because it can work with basically zero buffering: no retries mean no buffer required. Just fire-and-forget the packets as fast as you can. On a typical non-overloaded ethernet or MoCA network, there will be virtually no packet loss, so you get a clean video stream with the minimum possible latency. Sweet.
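
The sending side really is about as dumb as it sounds. A sketch (the payload and address are stand-ins; a real sender would be pushing actual MPEG-TS data at the stream's bitrate):

    import socket, time

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    chunk = bytes(1316)   # stand-in payload: 7 x 188-byte MPEG-TS packets
    while True:
        # Fire and forget: no connection, no acks, no retries, no buffering.
        # If the network drops it, it's gone; the viewer just shows a glitch.
        s.sendto(chunk, ("239.255.0.1", 5004))   # placeholder group:port
        time.sleep(0.001)                        # pace it instead of blasting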

However, Wifi networks pretty much always have packet loss. Certainly mine does. And when it does, you inevitably lose part of the video stream; this manifests as those weird digital noise blocks like you see on cable TV occasionally, and satellite TV even more often. Depending on your tolerance and Wifi network quality, the lossage ranges from "mostly okay" to "unwatchable." Plugging your VLC viewer into a wired network with the HDHomerun eliminates the loss and gives you nice, high-quality, low-latency video.

DVRs are different. They can tune into an HDHomerun stream, but they record shows and beam them back to your viewer, with features like "pause live video" and fast forward/rewind. Such a design is inherently buffered, and unlike the HDHomerun, works fine on lossy connections. A quick tcpdump reveals why: they use TCP, not UDP, so they can recover from packet loss. The tradeoff is that buffers mean you end up with added latency. Running in a cabled setup and yanking the cable gives us a clue about exactly how much latency, for the particular viewer software I was testing: about 2-3 seconds. For TV, nobody cares (and it can blast-fill that 3 seconds of video buffer on your client on a LAN in much less than 3 seconds, if your programmers aren't idiots) so this is a great tradeoff.

The next layer of cool is that you can combine the two: if you put a wire between your DVR server and your HDHomerun, then you can watch live TV on your laptop over Wifi, glitch-free. Your DVR server is essentially a very expensive buffer.

(TIP: You can tell the difference between TCP and UDP video on lossy networks just by looking. UDP video gets glitchy - the telltale garbage blocks in the middle of a frame - and in the worst case, drops out, and when it comes back you've missed something. TCP should never get glitchy (unless the upstream signal was glitchy, like if there's a satellite uplink involved); it just freezes, rebuffers and eventually resumes where it left off. For this reason, if you're writing real-time video chat software, you should not be using TCP. With two-way conversation, you'd rather have a glitch than a delay. But if you're doing a presentation you probably want TCP. Are we clear yet? Good.)

Speaking of wires, out of curiosity I did some tests of MoCA sync time (the time it takes a newly-booted MoCA device to connect to the other). MoCA, by the way, although it appears like an ethernet device to the OS, has more in common with old 56k modems than with ethernet. That's because it involves tons of complicated signal processing: it needs to work over wires that were never intended for data transfer, so it has to do frequency analysis to figure out which frequency ranges work, and which ones are doomed. So, like a 56k modem, it takes a few seconds to do its magic before data can get flowing.

In my tests, I found that the baseline connection time on the simplest possible network - a short bit of coax from one device to the other - takes about 3.9 seconds. On my now-much-more-crowded network involving three layers of radio shack splitters, a TV receiver, and a DOCSIS modem, the same MoCA devices now take a little over 10 seconds to connect. And they drop out and come back whenever the topology changes in any way (including, for example, unplugging/replugging my receive-only HDHomerun device). This is probably because it changes the resistance/capacitance/reflectivity of the line, which means it has to redo the 10-second characterization step. It comes back shortly, though, and as the doctor says, "Don't do that then." More importantly, if you aren't messing with the topology, MoCA stays reliably connected and has super-low packet loss.

And speaking of radio shack splitters, those were the fiddliest part of the whole mess. I tried a few different things, and certain splitter configurations cause various devices to fail completely to connect. In theory, my expensive Electrical Engineering education should help me explain this, but as usual, it's a failure, offering simply that it will never work because, "Um, hello, improper line termination, duh." I am so going to fail this exam.

What I can tell you empirically is that there's definitely a difference between radio shack splitters of the form "in, out (-3.5dB), out (-3.5dB)" and "in, out (0dB), out (-5.7dB)". I'm quite sure this has something to do with the aforementioned line termination, though I'm too lazy to calculate what. My general inkling is that when splitting a signal cable, there's no free lunch: you can't just send the same signal to two guys and have them both get the full signal strength. You can either divide it by two (in other words, -3.5dB) or distribute it unfairly (full strength and much-lower strength, ie. -5.7dB). I found that connecting MoCA devices with a -5.7dB between them killed the connection; putting -3.5dB between them was acceptable. (My DOCSIS signal has obviously been around the block a few times - er, literally - and was unoffended by being on the -5.7dB port. It still gets full speed and no loss.) Furthermore, on splitters there's some kind of difference between "in" and "out", which are romantic historical terms that made sense when TV was one-way, but which make no sense whatsoever in a world of two-way MoCA and DOCSIS. My expensive education does add something here, because I can understand the cute little diagrams on the splitters that indicate diodes between in and out, with a comment "DC pass-through" on some but not others. I also know that diodes are more complicated than you think, given that they have different response characteristics between DC and high-frequency current. It doesn't just mean "all the electrons go only one way" like it sounds.
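
(If you want to put numbers on the no-free-lunch intuition, the dB math is short. The ~0.5dB gap between the ideal split and the -3.5dB printed on the splitter is, I assume, insertion loss:)

    import math

    def db(power_ratio):
        return 10 * math.log10(power_ratio)

    print(db(0.5))            # ideal 2-way split: about -3.0 dB per port
    print(db(0.25))           # two such splits in series: about -6.0 dB
    print(10 ** (-5.7 / 10))  # a -5.7 dB port passes only ~27% of the power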

Anyway, the short version of my experience with splitters is "splitters are the problem." This makes MoCA fiddly and error-prone. For example, a five-port gold-plated radio shack splitter didn't work for MoCA; combining it in series with a three-port -3.5dB splitter (so there was less total loss between the two MoCA devices) did work. On the other hand, as an experiment, putting the HDHomerun on the "in" port and the outside cable line on the "out" port really did block the cable signal from reaching the HDHomerun, so yeah, diodes.

Probably people like FiOS installers have been trained in which kind of splitters to put where, and have a truckload full of them, instead of having to buy overpriced ones from Radio Shack at the last minute.

Epilogue

Oh also, don't bother with ethernet-over-powerline products. I got the DLink DHP-300's a while ago. They are junk. MoCA is way better. Use that instead. (In fairness, at least coax cable was intended to carry a signal, while powerlines never were. But it shows.)

2012-05-28 »

Okay, I know, cable TV is a pox on humanity.

The last time I really had cable TV was back in Waterloo, up until I graduated in 2001. And okay, I did have it for a while at a place where I "lived" in Montreal, but for most of that time I didn't have a TV, and for most of the time I had a TV, I didn't actually live there. So it didn't count.

Back in September I picked up a newfangled flat screen TV (yes, I'm living in the FUTURE!), a $99 Apple TV, and a $79 Google TV (Logitech Revue, with a huge discount off its normal price). Let's ignore the Google TV for now because I want to keep this post family-friendly. Oh, I also have a Wii.

So my TV choices on this device were pretty limited. In short, they came down to 1) Apple TV + iTunes, 2) Apple TV + Netflix, 3) Apple TV + AirPlay, 4) Wii + Netflix, and 5) PC. #5 is crap. #4 is blurry (no HD output on a Wii). And not to sound too much like an Apple fanboy here, but #1, 2, and 3 are all kind of dreamy and zen and I've been very happy with them for months.

But I thought, okay, you know, my life is pretty good, I should mix things up and do some experimenting and probably make them worse. So last week I got digital cable TV and a DVR. I figured it would make a decent supplement, at least in terms of getting more recent episodes of shows than Netflix, without paying iTunes zillions of dollars.

So here's the good news: I got the new shows. And the UI is actually pretty good. With my HDHomerun cable-tv-to-ip-tv converter, I can record 3 shows at a time, which is neat, even though I can only really watch one at a time. In three days I've already auto-amassed 261 GB of recorded media, which makes me feel powerful, in a vaguely pointless sort of way. But powerful is a good feeling, so I'm going to run with it.

But there are two things about cable TV, when compared to Netflix or iTunes, that drive me absolutely nuts. First, commercial interruptions. Second, idiotic TV channels that mangle the show you're watching with horrific tricks like LITERALLY SQUISHING THE VIDEO to add a useless ticker bar on the bottom of the screen, or remixing the show to make more time for commercials, or moving the show to start/end slightly before/after the hour in order to confuse your DVR. I'm pretty sure they also fade out the colours in addition to using really excessive digital compression levels in order to squeeze in more channels, because you know, quantity trumps quality every time. Also, there are things that are just confusing, like how some channels have black bars all the way around the screen; not for aspect ratio reasons, obviously, but I can't figure out why then.

Maybe people who have had cable TV all this time have come to terms with this, or maybe they get used to it after a while, but coming to cable TV after 11 years, the best description I can find for it is "crass." Alternatives include "uncultured" and "lower class" and "swill."

This reminds me of some long, boring rants I read years ago about the differences between seeing a movie in a theatre vs. watching TV, and why they are such different experiences. Hypotheses at the time included the lack of interruptions, the bigger screen and better sound, the social connection of sitting in a big room with a bunch of other people, the specialness added by paying for the privilege, and (seriously) the differences in screen flickeriness making one medium more or less hypnotic than the other (I forget which).

That all puts in context for me the concept of "home theatre," which is what people call their TVs nowadays. Someone realized that cable TV is crass, and movie theatres (although they get ever worse) are less so. Home Theatre People want to make your TV more like a theatre, less like TV.

Without thinking very hard, I had done this with my HDTV + Apple TV + not_cable_tv combo. I had a system with a big screen, good sound, no interruptions, suspicious screen flickeriness, and goshdarn it, it cost a lot so I'd better like it. Sitting down to watch something on Netflix on that thing is an experience, which now that I think of it, is kind of like going to the movies. It's intense. It requires you to pay attention. Actually it's so intense that I didn't do it much because I found it tiring. I waited until there was something I really wanted to see.

Sitting down at the same TV to watch TV is like regressing to my 20-year-old self. It's the opposite of intense. My wife says she finds commercials loud and stressful, but oddly I don't find that at all; I find that they disrupt the intensity, which, perversely, means I can do it for hours. I can get up in the middle of the show without pausing (despite the pause-live-TV features in every modern DVR). I can read/write email at the same time. Cable TV is background noise and it's surprisingly easy to tune out.

All this makes sense, of course. It explains why so many people just leave the TV on all day and all evening. It explains why TV commercials work - because after a while, you stop remembering to fast forward them because you're not all there. It explains why some channels desaturate the colours - because that way it fades better in the background, where you'll forget to turn it off. It even explains why TV is in decline: because the Internet is now, for many people, a slightly better source of background noise.

Moreover, it explains why I hate Youtube ads so much more than I ever remembered hating TV ads. (And sure enough, 11 years later, I still don't hate them that much.) Because on the rare occasions I visit Youtube, I'm intense. Whatever I'm looking for, I want it now, and the ad isn't it. With TV, I'm here to kill time, and I'm not being intense, I'm not even really paying attention. It's the perfect time to throw some unrelated sales pitch at my subconscious, because I'm receptive and bored.

So what is the future of TV? If you ask Apple, the future of TV is a smooth, slick, intense movie theatre in your home… and they're really doing it. But that's different. Movie theatres and TVs have always filled different use cases; that's why we have both. Most people still want both, which is why although Apple TV is popular and cheap, it's not as popular as cable.

I never missed cable and I'll probably ditch it again now that I've had my psychology fix. But it was worth it for the experience. Now I remember how easy it is to get sucked in, and how easy it is to fill my brain with background noise and hardly notice. How much time I can waste without really thinking. And how so very many people do it every single day. And also, how much less stressful it is than the purified beautiful experience Apple is selling.

Interesting.

2012-05-29 »

Our system has a /etc and a /misc. That means it's twice as better.

2012-05-31 »

This guy has some pretty well-thought-out pros-and-cons lists about working at Google.

http://www.spencertipping.com/posts/2012.0530.why-i-left-google.html

