Everything here is my personal opinion. I do not speak for my employer.
World Tour Reflections
For some reason, I always imagined that park rangers would travel through the forest either on foot (big hiking boots) or on horseback. In retrospect, that makes no sense; of course they drive. In North and South Dakota, they drive pickup trucks. (Including, not surprisingly, the Dodge Dakota.) In California, they drive cars, which is actually more sensible, but somehow not as romantic.
The Midwest is amazingly huge and empty except for farmland and/or ranches. Rumour has it that it takes about an acre of farmland to feed an average American. (If you think about it, this unit doesn't need a timespan portion; an acre of farmland per year feeds an American for a year, so you can cancel out the "per year" part.) This sounds kind of crazy, until you actually drive through it and realize that most of the land area of the U.S. is farmland, which is what allows the existence of such big, dense cities. Which is not a problem at all; there's plenty of room for everybody. As someone who had mostly just flown into the big cities before, I didn't realize just how much of the place is huge and empty. Canada definitely does not have a monopoly on huge and empty.
Yellowstone Park is unbelievably fascinating if you care about geology, or if you've ever heard the term "fire and brimstone." They actually have fire and brimstone, and yes, I mean that literally. (Brimstone is a fancy word for sulphur or something like that.) Geysers are great. Old Faithful, while famous, is neither the biggest, best, nor most reliable of the geysers, though, so don't limit yourself if you visit.
The Grand Canyon, while indeed very grand, is a lot like the terrain for miles and miles in any direction. There are lots of trees, rocks, and grass, and the mountains are made of sandstone. The main difference is that the height of the Grand Canyon is negative, while the height of the nearby mountains is positive. But the sandstone looks about the same. Apparently it's tremendously unsafe to try to hike to the bottom and back in one day; however, it looks like it would be a very fun hike to go all the way down, across, and up the other side. Reputedly this takes 3-4 days, but it would be quite the adventure.
The game F-Zero for Gamecube appears to have had its levels modeled after Utah (level 1.3), Las Vegas (all the casino levels), and Redwoods National Forest in California (the foresty/pipe level). The game's landscapes are in fact remarkably lifelike, although in real life the road goes upside-down less often.
The other really interesting thing about Utah is that so much of it is so desolate, but not a desert. It's more of a tundra or something; hard to explain. It's neat because unlike, say, Northern California or Oregon or British Columbia, where the nature constantly reminds you how valuable it is and how careful you need to be about it, you get the feeling that you could build anything in Utah and not do any significant damage. It's kind of an engineer's dream: infinite possibility, minimal consequences. Thrilling, in an empty sort of way.
While in Utah, we were sure to visit the old Wordperfect office (in Orem, UT). Of course, Wordperfect isn't there anymore; it was acquired years ago by Canadian company Corel, which proceeded to more or less rip it to shreds and utterly fail to capitalize on it. Nevertheless, its former building still stands proudly in the middle of its industrial park, which is a pretty nice industrial park really. The building has been taken over by a cooking school. In honour of the building's heritage, I was happy to see that they have named their cafeteria "Wordperfect Cafeteria." Of course, this is a bit ironic, since as it's a cooking school, I expect the food is neither perfect (probably prepared by students) nor word-related. We wanted to go eat there for historical reasons, but unfortunately time did not permit.
The Novell office in Provo, UT was a disappointment. It's just a really boring mass of cubicles. Or at least, that's how it seems from peering through the windows and trying not to get caught by a security guard. It's not even the head office anymore, apparently, since Novell's head office was moved to Boston, I suppose due to Ximian-related activity. Bleh.
We went through the Nevada desert, which was, frankly, a disappointment because it doesn't look anything like the Sahara desert does in movies. In retrospect, I shouldn't have expected it to look like some other place that it isn't, in the same way that I shouldn't have expected the food in Spain to taste like Mexican food. But it didn't, and it didn't, and I was disappointed both times. Oh well. The Nevada desert has a lot of vegetation. As far as I knew, the lack of vegetation was rather the definition of a desert. Apparently not; it's just that deserts have less than a certain threshold of annual rainfall. This saddens me somewhat, for the same reason that classifying Vancouver as a rainforest doesn't turn out to make it warm or filled with giant man-eating spiders.
Even more saddening was that it rained in Las Vegas the first night we were there. Desert, my bum. The rest of Las Vegas was as advertised. One interesting observation: "the strip" (the new area) is exactly as it's portrayed in the movies; we leaned on the balustrades of The Bellagio's giant desert fountain, just like they did in Ocean's 11 (or was it 12? or 13?). Meanwhile, the rest of Las Vegas is just like it's portrayed in books; kind of dirty and drug-infested and poor and with lots of pawn shops. It had never occurred to me that movies and stories tend to portray Vegas in such completely opposite ways. Even less did it occur to me that Vegas could actually be both ways at once.
Southern California was fascinating and unique in the sense that it's probably the only place in the world that is portrayed completely accurately by Hollywood movies, on account of it being the only place in the world that Hollywood people actually know well.
San Diego's beach district is exactly like "surfer movies" depicted it in the 1960's through 1980's. I thought that culture must have died out by now; no. They just stopped making movies about it. Even the hairstyles are the same (maybe with slightly more advanced hair gels now). Although I will never really fit in with surfer culture myself, I was gratified that San Diego was much more like its stereotype than I ever would have guessed, and it's a pretty positive stereotype.
Los Angeles is far scarier than expected, but its mood exactly matches the tone of DJ Tiesto's In Search of Sunrise 5: Los Angeles. It's such a perfect soundtrack for the place that I was honestly a little bit shocked. This makes me want even more to someday visit Latin America, since I really liked In Search of Sunrise 4.
Northern California (highways 1 and 101) is scary and the passenger in your car will probably get carsick, but it's just as worth driving as everyone says. The views are amazing.
Oregon is way prettier than advertised, and lots of people have been known to advertise its prettiness. Many of the campgrounds allow you to rent a pre-installed yurt, which sounds pretty great. Unfortunately I didn't know this before the reservation deadline, so I couldn't get one. There are also sand dunes in Dune City, which were very cool; much more like the Sahara than Nevada is, if you ask me. Portland is also a great city; excellent public transit, sensible weather, a giant bookstore, and excellent people.
As I write this, I'm in Seattle. All three of my friends that I met up with here work at Amazon.com, and they don't know each other, which tells me two things. First, apparently there are surprisingly few tech companies in Seattle (Redmond, home of Microsoft, is near here, but I somehow don't know anyone who works there anymore). Second, Amazon is obviously really big. Apparently in the past 9 or 10 years their revenues have expanded insanely, but you don't hear that much about them. Moreover, their hosting services, which you hear all about (S3, EC2, etc) form only a tiny fraction of their business. The real Amazon.com - the one that sells tangible and intangible things to customers in exchange for money - is massive, very profitable, and rapidly growing. And they don't use Amazon Web Services for all their web services.
Seattle, like Las Vegas, had rain. As a whole this trip has been pretty non-rainy. We'll see what happens for the last leg of the journey.
Summary: the U.S. is a pretty good place all around, and has lots of very interesting diversity, despite rumours to the contrary. For politics, I still greatly prefer how we do things in Canada, but as countries go, you can see why so many people like it here.
Update 2010/11/06: Erin tells me that there are, in fact, giant
man-eating spiders in Vancouver, so I no longer have to be disappointed
about that part. Now if only it were warm.
Getting an Apple Airport Express to join a DD-WRT wireless network
...should be easy. But it's not, unless you pick "WPA2 Personal Mixed" and "TKIP" instead of "WPA2 Personal" and "TKIP." What's the difference? Who knows! Probably the "mixed" setting is somehow insecure in random unadvertised ways. But at least my music plays through the wireless speaker connection now.
Also, while DD-WRT claims to support WDS, as does the Airport Express, I am not nearly smart enough to figure out how to set it up. Just pick "join an existing wireless network" instead of "extend my wireless network" in the Airport setup tool.
(I'm writing this mostly so that other people who Google for these terms
might have some chance of coming up with a useful answer. Naturally all the
random sites Google popped up for me were totally useless, and
trial-and-error won the day.)
More on 802.11 wireless
As part of my top secret plans to try to make a space-age new wireless router, I've decided to wade through the official IEEE 802.11 specification.
Now okay, I decided that before I actually found out the thing is 1233 pages long, so I might yet change my mind. And let me tell you, IEEE reading isn't quite as captivating as IETF reading. There are at least a couple dozen pages of definitions, like "STA" as the abbreviation for "station," because there is apparently a worldwide shortage of lowercase letters.
Word to the wise: if you're interested in this spec, you might want to start at section 5, which actually gives a decent architectural overview. I actually read the entire definitions section before I got there, which was confusing and maybe non-optimal, but I do feel like I recognize a lot more of the unnecessary acronyms now.
My major goal when I started reading the spec was to find the answers to two questions: first, can a station join more than one wireless network (SSID) at a time? And second, can it forward packets from one wireless network to another?
Before you ask, the answer to the first question is definitely not "you can join as many networks as you have antennas." I know enough electrical engineering to know why that's nonsense, since I was somehow granted a Bachelor's degree in such things; just enough knowledge to be dangerous. But even if the details are fuzzy, let's try this thought experiment:
Back in the before-times, people used to have these analog powered whatzits called "televisions" which would receive "signals" from the "airwaves" using "antennas." Some of these antennas had cutesy sounding names like "rabbit ears," presumably so that people would be allowed to bring them in the house and ruin the careful job the Local Interior Design Authority had done arranging the furniture.
But if you really got fancy, you could get a big fancy antenna and mount it outside somewhere. Then you could put a splitter on the wire from that antenna, and run its signal to more than one TV at a time. Or to your "VCR," which could record one channel even while you watched a totally different one!!! All with a single antenna!!!
I know, it sounds like science fiction, but bear with me, because I clearly remember it, in an abstract sort of way, from my childhood.
(If you used a splitter, you ended up getting less signal strength to each of your receiving devices. But let's ignore that factor here; that's just an analog artifact, similar to what happens when you copy from one "video tape" to another (another item of science fiction; someday, we'll uninvent DRM and get this sort of stuff back). If the antenna is connected to just one digital signal processor, we should be able to mangle it a million different ways and not worry about analog losses.)
So anyway, as much as technology changes, it still mostly seems to stay the same. 802.11 channels are a lot like TV channels; each one gets its own little band of the frequency spectrum. (I was a little surprised that such a recent technology didn't bother using spread spectrum / frequency hopping stuff, but that's how it is.) Thus, just like with your old TV network, you should be able to use a single antenna and receive as many channels as you want.
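For reference, the mapping from 2.4 GHz channel numbers to their little bands of spectrum is just arithmetic; a sketch:

```python
def channel_center_mhz(channel):
    """Center frequency in MHz for a 2.4 GHz 802.11 channel.

    Channels 1-13 are spaced 5 MHz apart starting at 2412 MHz;
    channel 14 (Japan only) sits off the regular grid at 2484 MHz.
    """
    if channel == 14:
        return 2484
    if 1 <= channel <= 13:
        return 2412 + 5 * (channel - 1)
    raise ValueError("no such 2.4 GHz channel: %r" % channel)

# Each 802.11b/g channel is roughly 22 MHz wide but the centers are
# only 5 MHz apart, so adjacent channels overlap; 1, 6, and 11 are
# the classic non-overlapping trio in North America.
for ch in (1, 6, 11):
    print(ch, channel_center_mhz(ch))
```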
Relatedly, it seems that 802.11n gains most of its speed by using multiple channels at once. I haven't gotten to that part of the spec yet; I read it elsewhere. But I notice from my online browsing that there are 802.11n "lite" routers with only one antenna, and 802.11n "real" routers with two or three. I think this is pretty theoretically bogus - one antenna ought to be enough for anyone - but probably practically does make a difference.
Why? Because I have a feeling the chipset manufacturers are still in the past. The problem is, sending/receiving on multiple channels at once is kind of hard to do, even if you're working in a purely digital world. At the very least, you need a much higher clock frequency on your DSP to handle multiple full-rate baseband signals simultaneously. But worse, I don't know how much of this stuff is purely digital; they're probably still using analog modulators/demodulators and whatnot. If so, it's probably hard to modulate/demodulate multiple channels at once without using an analog splitter and multiple analog modulators... which would degrade the signal, just like it did with your old TV antenna.
It sounds to me like a solvable problem, but without having yet looked at the hardware/software that implements this stuff, I'm guessing it hasn't been solved yet. This is some pretty leading-edge signal processing stuff, and cheapskates like you are only willing to pay $50-$75 for it, which makes it extra hard. So it was probably just easier to mount multiple antennas and include multiple DSP cores and modulators - in fact, maybe just throw in the same Broadcom chip more than once on the motherboard - and just run them simultaneously. Not optimal, but easier, which means they got to market faster. Expect single-antenna, full rate 802.11n boxes eventually.
So from the above reasoning - all unconfirmed for now - I conclude that, even still, you ought to be able to send/receive on as many channels as you have antennas. And if there's more than one wireless network (SSID) on a single channel, you should be able to join all those wireless networks at once using only one antenna.
As it happens, already by page 42 of the spec I've read the part where it says you absolutely must not join more than one network (literally, "associate with more than one AP") at a time. Party poopers.
But why? The stated reason for the rule is that otherwise, alas, the poor helpless network infrastructure won't know which AP to route through when it looks for your MAC address and multiple APs respond that they're connected to it. But that actually can't be true, because shortly after, they say that you "must attempt to send a disassociate message" when leaving an AP, while admitting that it's kind of impossible to do reliably, since the reason you're leaving might be that you went out of signal range, and how would you know that in advance? Thus, if you're carrying your laptop around and you move out of range of one AP and into range of another and you don't get to disassociate from the first one, the network must be able to handle it, and therefore by extension, it can handle it if you deliberately join more than one network, since the network won't know the difference.
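To make that argument concrete, here's a toy model (mine, not the spec's) of the distribution system's bookkeeping. If the backbone just routes to whichever AP most recently reported an association, then sloppy roaming and a deliberate second association look identical to it:

```python
class DistributionSystem:
    """Toy model of the wired backbone connecting several APs.

    Frames for a station go to whichever AP most recently reported
    an association with that station's MAC address.
    """
    def __init__(self):
        self.location = {}  # MAC address -> AP name

    def associate(self, mac, ap):
        # A new association simply overwrites the old mapping; no
        # disassociate message from the previous AP is needed.
        self.location[mac] = ap

    def route(self, mac):
        return self.location.get(mac)

ds = DistributionSystem()

# Laptop joins AP "office", then wanders out of range and joins
# AP "lobby" without ever sending a disassociate:
ds.associate("aa:bb:cc:dd:ee:ff", "office")
ds.associate("aa:bb:cc:dd:ee:ff", "lobby")
print(ds.route("aa:bb:cc:dd:ee:ff"))  # -> lobby
```

From this model's point of view, joining two networks on purpose is indistinguishable from roaming without saying goodbye; real distribution systems are of course more involved than a dictionary.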
Apparently the guys down at the IEEE 802.11 working group have never heard of crash-only programming; there never should have been a disassociate command in the first place, just like having a DHCP "release my IP address" command was a stupid idea.
Anyway, question #1 looks promising; it looks like a software hack could let us join multiple networks at once. And systems with multiple antennas could even join multiple networks on multiple channels, perhaps.
For my second question, about forwarding packets from one network to another, things are much more screwy. I suspect that forwarding packets between two networks on the same channel will be a problem unless you're careful (ie. receive packet on A, send it out on B, but someone sends the next packet on A while you're sending on B and they interfere), because the APs on the two networks can't easily coordinate any collision control. On separate non-interfering channels it should be okay, of course. But I'll need to read much more before I can conclude anything here.
Interestingly, the standard has accrued a whole bunch of QoS stuff, supposedly designed for real-time audio and video. I doubt that will go anywhere, because overprovisioning is much simpler, especially on a LAN. But the otherwise-probably-pointless QoS stuff includes some interesting timeslot-oriented transmit algorithms (don't expect the 802.11 guys to ever say "token ring") that might be fudgeable for this kind of forwarding. We could just reserve alternate timeslots on alternate networks, thus avoiding overlap. Maybe.
I bet nobody implements the QoS stuff correctly, though, which is why every router I've seen lets you turn it off.
Other interesting things about 802.11
You might know that WEP stands for "wired equivalent privacy." After reading the spec - which mentions in a few places that WEP is deprecated, by the way, which is wise since it was hacked long ago - I think I see where they got that strange name. See, they correctly noted that all IEEE 802 networks (like ethernet) are pretty insecure; if you can plug in, you can see packets that aren't yours. And the world gets along even so; that's why they invented ssh, which is why I invented sshuttle, and so on. You don't need ethernet-layer security to have application-layer security.
However, they didn't want to make it even worse. The theory at the time they were inventing 802.11 must have been this: the security requirement that "they must be able to physically plug in a wire" isn't very strong, but it's strong enough; it means someone has to physically access our office. By the time they can do that, they can steal paper files too. So most people are happy with wired-level security. With wireless, it goes one step too far; someone standing outside our locked office door could join our office network. That's not good enough, so we have to improve it.
And they decided to improve it: exactly to the same level (they thought) as a wired network. Which is to say, pretty crappy, but not as crappy.
From what I can see, WEP is simply this: everybody on your network takes the same preshared key to encrypt and decrypt all the packets; thus everybody on the network can see everybody else's packets; thus it's exactly as good as (and no better than) a wire. Knowing the digital key is equivalent to having the physical key to the office door, which would let you plug stuff in.
And actually that would have been fine. Wired-equivalent security really is good enough, mostly, on a private network. (If you're in an internet cafe, well, mere wires wouldn't save you, and neither will WEP or WPA2. Imagine that someone has hacked the router.) Unfortunately WEP ended up having some bugs (aka "guess we should have hired a better security consultant") that made it not as good as wired. Reading between the lines of the spec, I gather that one major flaw in WEP is replay attacks: even if someone doesn't have the key, they can replay old packets, which can trick hosts into doing various things even if you yourself can't read the packet contents. You can't do that on a wired network, and therefore WEP isn't "wired-equivalent privacy" at all.
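Here's a stripped-down sketch of the scheme as I understand it (real WEP also appends a CRC-32 integrity value and has more framing; the key and payload below are made up). Everyone shares one key, the per-packet RC4 key is just a 24-bit IV glued onto it, and nothing stops a replay:

```python
def rc4(key, data):
    """Plain RC4: the same function encrypts and decrypts (it's XOR)."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(shared_key, iv, plaintext):
    # WEP-style: per-packet RC4 key is IV + preshared key; the IV
    # travels in the clear alongside the ciphertext.
    return rc4(iv + shared_key, plaintext)

psk = b"office-key"       # hypothetical preshared key
iv = b"\x01\x02\x03"      # 24-bit per-packet IV, sent in the clear
packet = wep_encrypt(psk, iv, b"payroll data")

# Anyone who knows the preshared key can read anyone's packets,
# exactly like sharing one wire:
print(rc4(iv + psk, packet))  # -> b'payroll data'

# Worse, an eavesdropper can resend (iv, packet) verbatim later;
# nothing here lets a receiver tell a replay from the original,
# which is one way real WEP fell short of "wired-equivalent."
```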
So anyway, all that was interesting because I hadn't realized that WEP wasn't even supposed to be good. The only problem was it was even worse than it was supposed to be, which put it over the edge. The result was the massive overcorrection that became WPA, which as far as I can tell ends up being overkill and horrendously complex, reminiscent of IPsec.
Admittedly I haven't read all the way ahead to WPA though, and the fact that lots of people have implemented it successfully (and interoperably!) kind of implies that it's a better standard than IPsec. (Still: see my previous post for an example of how either dd-wrt or Apple Airport Express apparently still *doesn't* implement it correctly.)
The WEP thing is also a good example of a general trend I'm observing while reading the spec: 802.11 does a lot of stuff that really doesn't belong at the low-level network layer. Now, the original "OSI protocol stack" has long been discredited - despite still being taught in my horrible university courses in 2001 and maybe beyond - but the overall idea of your network stack being a "stack" is still reasonable. The whole debate about network stacks comes down to this: higher layers always end up needing to assume things about lower layers, and those assumptions always end up causing your "stack" to become more of a "mishmash."
Without necessarily realizing it, this happened with the world's most common network stack: ethernet + IP + TCP.
First, people have been assuming that ethernet is "pretty secure" (ie. if you're on a LAN, encryption isn't needed). Second, TCP implicitly assumes that ethernet has very low packet loss - packet loss is assumed to mean Internet congestion, which is not true on a wireless network. And third, most IP setups assume that a given ethernet address will always be on the same physical LAN segment, which is how we should route to a particular IP address.
The 802.11 guys - probably correctly - decided that it's way too late to fix those assumptions; they're embedded in pretty much every network and every application on the Internet. So instead, they hacked up the 802.11 standard to make wireless networks act like ethernet. That means wired-equivalent (and with WPA, better-than-wired-equivalent) encryption to bring back the security; device-level retransmits before TCP ever sees a lost packet; association/disassociation madness to let your MAC address hop around, carrying its IP address with it.
It's kind of sad, really, because it means my network now has two retransmit layers, two encryption layers, and two routing layers. All three of those decrease debuggability, increase complexity (and thus the chance of bugs), increase the minimum code size for any router, and increase the amount of jitter that might be seen by my application for a random packet.
Would the world be a better place if we turned off all this link-layer stuff and just reimagined TCP and other protocols based on the new assumptions? I don't know. I suppose it doesn't matter, since I'm pretty sure we're stuck with it at this point.
Oh, there was one bit of good news too: 802.11 looks like it's designed well enough to be used for all sorts of different physical wireless transports. That is, it looks like they can switch frequencies, increase bandwidth, reduce power usage, etc. without major changes to the standard, in the same way that ethernet standards have been recycled (with changes, but surprisingly small ones) up to a gigabit (with and without optical fibre) and beyond.
So all this time developers have spent getting their 802.11 software stacks working properly? It won't be wasted next time we upgrade. 802.11 is going to be around for a long, long time.
Update 2010/11/09: Note that a perfectly legitimate reason to have
more than one antenna is to improve signal reception. I don't know if
that's what routers are actually doing - I half suspect that the venerable
WRT54G, for example, just has them to give the impression of better
reception - but it's at least possible. The idea of multiple antennas to
allow subtracting out the noise goes all the way back to the old days of TV
rabbit ears, which generally had two separate antenna arms. Or ears, I
guess. The math is a bit beyond me, but I can believe it works. My point
was that you shouldn't, in theory, need multiple antennas to use multiple channels at once.
A quick review of btrfs
Yesterday I tried out btrfs, an experimental new Linux filesystem that seems to be based on many of the ideas of the more famous "zfs," which I gather is either not available on Linux or not properly ported or not licensed correctly or whatever. I really didn't do my zfs research; all I know is it's not in the 2.6.36 Linux kernel, but btrfs is, so btrfs wins.
Anyway, the particular problem I wanted to solve was for our startup, EQL Data. Basically we end up with some pretty big mdb files - a gigabyte or two, some days - and we want to be able to "branch" and "merge" them, which we do using git. Perhaps someday I'll write an article about that. Meanwhile, the branching and merging works fine, but what doesn't work fine is creating multiple copies so that different people can modify the different branches in different ways. It's not that bad, but copying a gigabyte file around can take a few seconds - more on a busy virtual server - and so if we want startup times to be reasonable, we've had to resort to pre-copying the files in advance and keeping them in a depot so that people can grab one, modify it, and send it back for processing. This works, but wastes a lot of background I/O (which we're starting to run out of) as well as disk space (which directly costs money).
So anyway, the primary thing that I wanted was the ability to make O(1) copy-on-write copies of large files. btrfs is advertised to do this. I won't keep you in suspense: it works! There's a simple btrfs command called "bcp" (btrfs cp), and it works the way you'd think. Only modified blocks take any extra disk space.
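In case it helps anyone searching for this, the operation is a one-liner (filenames hypothetical; on newer kernels and coreutils, plain cp can do the same thing via its --reflink option):

```shell
# O(1) copy-on-write copy of a big file on btrfs, using the "bcp"
# tool from the btrfs-progs contrib scripts:
bcp master.mdb session42.mdb

# Newer alternative: plain cp can request the same clone behaviour:
cp --reflink=always master.mdb session43.mdb

# Either way the copy is instant and shares all its blocks with the
# original; only blocks that later get written take new disk space.
```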
Our second problem relates to the way we actually run the databases. We have a complex mix of X, VNC, flash, wine, Microsoft Access[1], and some custom scripts. Each session runs in its own little sandbox, because as you might have heard, Windows apps like to think they own the world and scribble all over everything, especially the registry, which we have to undo after each session before we can start the next one.
We've tried various different cleanup techniques; the problem is that each wine tree consists of thousands of files, so even checking if a particular tree has been modified can be a pretty disk-grindy operation. With experimentation, we actually found out that it can be faster to just remove the whole tree and copy a new one, rather than checking whether it's up to date. This is because the original tree is usually in cache, but the old one often isn't, and reading stuff from disk is much slower than writing (because writeback caching lets you minimize disk seeks, while you can't really do this when reading). Nevertheless, it still grinds the disk,[2] and it also wastes lots of disk space, though not as much as you might expect, since we use hardlinks extensively for individual files. Too bad we can't hard link directories like Apple does in Time Machine.
Luckily, btrfs can come to the rescue again, with O(1) writable snapshots of entire subvolumes. This is slightly less convenient than using bcp on a single file. Snapshots don't really act much like hardlinks; they work by quietly creating an actual separate filesystem object and showing it as if it's mounted on a subdir of your original filesystem. (This doesn't show up in /proc/mounts, but if you stat() a file in the copy, it shows a new device id.) Presumably this is because otherwise, you'd end up with files that have the same device+inode pair, which would confuse programs, and you can't correctly renumber all the inodes without going from O(1) to O(n) (obviously). So all in all, it's clever, but it takes more steps to make it work. It still runs instantaneously once you figure it out, and the extra steps aren't that bad. It means we can now clone new virtual environments instantly, and without taking any extra disk space. Nice!
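The workflow looks roughly like this with the btrfs tool (paths hypothetical):

```shell
# Keep the pristine wine tree in its own subvolume:
btrfs subvolume create /data/wine-master
# ... populate /data/wine-master with the clean environment ...

# Clone it instantly for a new session. This is a writable snapshot,
# not a copy, so it takes no extra space until files are modified:
btrfs subvolume snapshot /data/wine-master /data/session42

# Throw the whole session away afterwards, also instantly:
btrfs subvolume delete /data/session42
```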
Oddly, in my 2.6.36 kernel (plain btrfs as included in Linus's pristine kernel), I could *create* snapshots as a normal user (once I had set things properly with chmod), but I couldn't *delete* them as a normal user; I had to be root. This can't really be considered correct behaviour, so I might have to file a bug or join the mailing list or something. Later.
Next I'm going to start looking into some stress tests and figuring out if we can safely take it into production (again, in our controlled environment where loss of a filesystem and/or incompatible filesystem format changes aren't the end of the world).
[1] Yes, we have all the required licenses. Why does everyone assume that just because we're running wine, we must be slackers trying to dodge license agreements? We're using it because hosting on Linux is vastly more memory-efficient than Windows. Also I just like Linux better.
[2] Nowadays it grinds the disk less than it used to, because in
the newer kernels, Slicehost has switched to using ext3's data=writeback instead of
data=ordered. The tech support guy I talked to a few months ago said he
had never heard of this, and no, there was no way to do it. I wonder if
they changed their setting globally because of me?
I'm a little late to the party, but this article by Derek Sivers is really good. I especially like the top part, about focus. Not because I expect this advice will do anyone any good (I think you're either driven and focussed or you're not), but because it gives me insight into my own behaviour:
But the casual ones end up having casual talent and merely casual lives.
Looking back, my only Berklee classmates that got successful were the ones who were fiercely focused, determined, and undistractable.
When you emerge in a few years, you can ask someone what you missed, and you'll find it can be summed up in a few minutes.
The rest was noise you'll be proud you avoided.
-- Derek Sivers
Ironically, I read this because I was distracted from real work.
On sheep, wolves, and sheepdogs
I never thought I would have anything nice to say about an article that encourages more people to carry guns, including into church, but this one is very well written and makes good points:
Even more than Americans, Canadians have a "sheep" mentality, in the terminology of the article. Most individual Canadian citizens would be thoroughly unprepared if violence broke out. As a country, we also spend vastly less per capita on our military. Of course, it works for us; as a country, we get into fewer wars, and as individuals, we get into fewer fights, and in both cases our fights are (at least somewhat) less violent and better justified.
I've often thought that Canada is cheating a bit, however. The
reason Canada doesn't need a huge military is that if someone seriously
thought about attacking us, the U.S. is right next door. The Canadian
pacifist system is highly efficient and desirable - as long as someone
bigger and stronger is nearby to scare off the predators.
Canada and the Guaranteed Annual Income idea
The Globe and Mail recently published an article titled To end poverty, guarantee everyone in Canada $20,000 a year. (Note: I think the $20,000 is a made-up number but I'll keep using it here for illustrative purposes.) It was then picked up in the comments at YCombinator news.
Unfortunately, both the article and the comments seem to misrepresent the actual proposal. A better reference is Income Security for All Canadians (pdf), which explains it in more detail.1
But you don't necessarily need more detail to understand why this idea is a good one. It's simple: the proposal doesn't change anything except the administrative overhead.
In Canada, true homelessness - sleeping on the street involuntarily - equals death. It simply gets too cold here. If you don't want people to die, you provide social services, whether through private organizations or through the government. And so we do, in a complex combination of both.
Whether or not you agree with this idea - keeping people alive and healthy and, to some extent, comfortable even if they can't earn a living - can be debated, and it often is. Every country does it differently, and the rules and amounts are constantly changing.
But in any case, in Canada, the overall outcome of our complex set of rules is clear: there is a minimum guaranteed annual income. If you're not earning it, some government agency or another is willing to give it to you, if you jump through the required hoops.
The "guaranteed annual income" proposal, then, is not a proposal to start guaranteeing a minimum annual income. We already have that. It's just a renaming and simplification of what we already have. Instead of a network of complicated government agencies to guarantee your annual income, and a bunch of complicated proof requirements, we just have one agency and the only proof required is that you're a Canadian.
Where does the money come from? That's easy: it comes from where it already came from. It's the same money as before, going to the same people.
What if people abuse the system? That's easy: the same thing that already happens. It's the same money and the same people as before.
Won't it cause inflation? No, because it's the same money as before.
Won't it make people lazy and not bother to get a job? Maybe, but so will the current system. And the income we're talking about is really not a comfortable income; it's just barely enough to survive. People complain regularly, and probably quite accurately, that the current welfare situation in Canada still leaves many people below the poverty line. That won't change.
Won't people just take the money and spend it all on drugs and alcohol? Probably, yes. But they would do that anyway. Such problems can't be solved with money alone.
Why do we give it to every citizen, instead of just the citizens who need it? Good question. The answer is: because the math works better that way. Here's why:
Right now, if you're collecting unemployment insurance, welfare, or a disability pension, the short version of a complex story is that every $1 you earn - by taking a job - reduces your free government income by about $1. At a really basic level, that sounds fair; we guaranteed you would get a survival income, so if you're earning money yourself, we'll supplement it, but only up to the survival amount. Why shouldn't we give more charity to someone with no job than someone with a job?
Thinking about it more deeply, though, the answer becomes clear: because if you take away money from people when they earn money, they have no incentive to earn money.2 A 20-hour-a-week job and $20k/year is worse than no job and $20k/year. Sure, maybe with a 40-hour-a-week job, you could earn $40k/year, which is more than the basic welfare amount; but to get there, you have to get started. A lot of the people who need welfare couldn't just go get a job that, on day 1, would pay more than $20k/year. They have to gain experience, get promoted, and so on. Sure, in the long term it might pay off, but if everyone were good at long-term planning, we wouldn't be in this mess.
What about the rich? Are we going to give them $20k/year too? Technically yes. But just as technically, we'd obviously raise the higher tax brackets so that their effective income would be the same as before. Progressive taxation makes sense; if you earn $1, the government takes away $0.40 or whatever, but you still have $0.60 more than if you didn't earn that $1. So you still have an incentive to work more.
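The incentive arithmetic above can be sketched in a few lines of code. This is just an illustration of the argument, not anything from the actual proposal: the $20,000 figure and the 40% tax rate are the article's illustrative numbers, and the 50% clawback fraction in the "halfway" scheme is a made-up choice of mine.

```python
BASIC_INCOME = 20_000  # illustrative guaranteed income, $/year

def net_full_clawback(earnings):
    """Current-style welfare: every $1 earned removes $1 of support."""
    support = max(0, BASIC_INCOME - earnings)
    return earnings + support

def net_partial_clawback(earnings, fraction=0.5):
    """Halfway scheme: only a fraction of earnings is deducted."""
    support = max(0, BASIC_INCOME - fraction * earnings)
    return earnings + support

def net_gai(earnings, tax_rate=0.40):
    """Full GAI: everyone gets the basic income; earnings are taxed."""
    return BASIC_INCOME + earnings * (1 - tax_rate)

for earned in (0, 10_000, 20_000, 40_000):
    print(earned, net_full_clawback(earned),
          net_partial_clawback(earned), net_gai(earned))
```

Under full clawback, someone earning $10k ends up with exactly the same $20k as someone earning nothing, which is the zero-incentive problem; under the partial clawback or the full GAI, every extra dollar earned always leaves you with more money than before.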
In summary, the main advantages of the guaranteed annual income proposal are that it spends the same money on the same people as the current system, it removes the disincentive to work created by dollar-for-dollar clawbacks, and it replaces a maze of agencies and eligibility rules with one simple programme, cutting administrative overhead.
It's very interesting to me that it's the conservatively-minded politicians in both Canada and the United States that seem to be working on this idea; I would have expected it to be a more liberal suggestion. Perhaps the reduced administrative overhead ("smaller government") is appealing.
A government that managed to pass such an overhaul would certainly be remembered forever in Canadian history. Um, in a good way. I think.
1 If you really want a lot of detail, I recommend A Fair Country, by John Ralston Saul. It's a really good book if you want to understand where Canadian culture comes from. (Spoiler: it comes largely from our aboriginal population. Bet you didn't see that coming. Neither did I. But he convinced me. Quiz question: which of our supposed primary influences, the British or the French, is the reason we're so polite? So accepting of other cultures? So biased toward peacekeeping instead of empire-building? Answer: neither of them, obviously.) That book was where I first heard of the guaranteed annual income proposal in a well-presented way.
2 To be honest, I'm a little shocked and/or disbelieving that it works this way, because it's so obviously stupid. But I've talked to people on the receiving ends of these programmes, and more than once I've heard that there's no point for them to get a job, because if they did, they'd be cheating themselves out of free money. A rather obvious halfway implementation of the guaranteed annual income scheme would be to simply deduct only a fraction of your earnings from your government supplements; it's not as elegant as the whole deal, but at least you aren't outright discouraging people from getting a job. (If that's already what happens and I'm misinformed, well, maybe that's why this GAI idea is necessary; because it's so complicated that even the people receiving these services don't understand them. I admit to never having collected welfare, disability, or unemployment insurance, so I have no real experience here.)