It's the beginning of a new year, and that's made me think a bit about thresholds - those moments where things suddenly change from one way of being to another.
If you're starting a company, the route to massive overwhelming success (as opposed to normal success, which is easier) is to correctly predict and bet your product on one of those threshold transitions. Before the transition, your product was impossible, so of course there are no competitors; after the transition, your product is critical, so you'll sell a lot.
Unfortunately, there are different kinds of transitions. Sooner or later, people will get completely sick of advertising, and advertising-based business models will crash... but we don't know when. It might be next month, or it might be in fifty years. If you bet on the death of advertising, you're most likely going to lose.
Other thresholds are based on surprise scientific discoveries; for example, someone discovers a new super-high-density chemical for making batteries, or discovers the secret of nuclear fusion, or cures cancer, or whatever. Maybe the experts in a particular field can make some kind of guess at when these will happen, but it's tough. If you don't time it within two or three years at worst, your company will be dead - or obsolete - by the time the transition comes.
But some kinds of transitions are easier to predict: the ones that follow something like Moore's law. In the book The Innovator's Dilemma, Clayton Christensen discusses several of these situations, from disk drives to printers to hydraulic cranes. It's like magic; you can graph the progress of a new technology, and at the point where its capability exceeds the capability of an older one, suddenly the whole world is different.
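The crossover arithmetic is as simple as the graph suggests. Here's a toy sketch (all numbers invented, not from Christensen's data): if a new technology doubles in capability on a fixed schedule, the time until it overtakes a flat incumbent is just a logarithm.

```python
import math

# Toy numbers, not real data: a disruptive technology starts at 1 unit of
# capability and doubles every 18 months; the incumbent sits at 100 units
# and, for simplicity, isn't improving at all.
new_start = 1.0
doubling_months = 18
incumbent = 100.0

# Number of doublings needed to catch up, converted to years.
months = doubling_months * math.log2(incumbent / new_start)
print(round(months / 12, 1))  # → 10.0
```

A 100x capability gap sounds insurmountable, but at a steady doubling rate it closes in about a decade, which is why betting on these transitions is predictable in a way the other kinds aren't.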
Here are a few of those upcoming transitions. I won't try to tell you when they'll come, but perhaps they'll give you some ideas. For context, I'll include a few that have already happened.
Handheld storage becomes large enough to store your whole music collection. (This is really what put the early iPods over the top compared to the silly 256MB players around at the time.)
Laptop hard drives become larger than anybody needs. (Desktop drives already have.)
Handheld storage becomes larger than anybody needs.
Solid state (flash) disks get so popular that optimizing operating systems for rotational latency becomes irrelevant.
Solid state disks get big enough to store most databases, so optimized high-end database engines based on disk latency become irrelevant.
It becomes cheaper to buy a new laptop than to replace the video card in a desktop PC. (I don't know how this will happen. Economies of scale as fewer and fewer people buy desktop-size cards?)
Electronic displays become easily readable in sunlight. (Supposedly the Amazon Kindle has this, but it's only black and white?)
Electronic displays become clearer (contrast, DPI, colour accuracy) than paper.
Electronic displays get about as cheap and reliable as other materials - for constructing interior walls.
Computers get so fast that you can't tell the difference in speed between dynamic and static languages. (We're right on the edge of this.)
Programming (and automated testing) becomes so easy that it's almost always easier to rewrite code for a new platform than maintain it on the old one.
Virtualization can run any DOS application at its original native speed or better. (Done: DOSBox.)
Virtualization can run any Windows 95/98/ME application at its original native speed or better. (Almost done? It seems graphics are still a problem.)
Virtualization can run any Windows NT/2000/XP application at its original native speed or better. (Not yet.)
Windows Vista actually runs on normal computers at a speed that makes it more pleasant than Windows XP. (That happened this year! I saw a sub-$1000 PC with 6 gigs of RAM and Vista ran great on it.)
Microsoft .Net becomes fast and ubiquitous enough that people stop making native Windows apps. (Slowly but surely.)
Computers become fast enough that all native Windows apps ever created will run fine under virtualization, so you can drop Win32 entirely.
The Internet becomes sufficiently fast and widespread that it's cheaper to collaborate on software across the world than to write your own separate implementation. (This is what allowed Open Source.)
The Internet becomes sufficiently fast, and disks get sufficiently large, that giving the entire development history of a project to every developer is a good idea. (We're on the edge of this: distributed version control is catching on.)
It becomes sufficiently cheap to develop and distribute software that you no longer need significant financing for most projects. (That's really the Web 2.0 movement in a nutshell.)
Wireless networking becomes fast, reliable, and cheap enough to replace wired networks to the home.
Wireless devices become so easy to build that your home entertainment centre no longer has its components wired together. And the clocks will be right.
Professionally-run Internet-based services have higher uptime than the server in your office. (This is already true for the servers themselves, but often not your link to them. Then again, small business servers have notoriously low reliability and high maintenance costs.)
Professionally-hosted Internet-based desktop applications have higher uptime than apps running locally on your PC. (This will never happen, since your PC is still needed to run the Internet apps in the first place. Note the asymmetry with Internet-based servers.)
Latency of an Internet-accessible server is as low as a LAN-connected one. (This will never happen, dooming various efforts that still depend on the assumption that it will.)
Batteries last so long that you no longer think about whether you're killing the battery. (Blackberries already have this; iPhones are reputedly close; laptops not at all, except maybe the old PPC-based Macs.)
Solar power saves more in electricity fees than it costs in up-front investment.
Electronics become sufficiently lightweight and low-power that you can make remote-controlled flying toys using insect-like aerodynamics instead of conventional aircraft designs.
Power density in batteries gets high enough to make electric cars sensible.
Thanks to computer-controlled guidance and diagnostics, cars become essentially uncrashable, so the (physically heavy) crash-safety features are no longer needed.
February 11, 2009 17:06
I have frequently been annoyed by the weird-looking anti-aliasing of freetype fonts under Linux, especially when using white-on-black instead of black-on-white. The culprit is gamma, a complicated topic that is (unfortunately) also fiddly to sort out on Linux.
(Incidentally, MacOS X has a really really good and easy monitor calibration tool that solves this whole class of problem elegantly.)
So here's what you need to know:
- Find a good gamma calibration diagram. The one I link to here is complicated, but helpful.
- Instructions all seem to say "set your monitor's contrast to maximum and then adjust the brightness," but this advice doesn't seem to work on all monitors, particularly LCDs. This is probably because LCD monitors don't actually adjust brightness and contrast in the same way as CRTs do. So anyway, yes, you have to fiddle with contrast too, not just brightness.
- Use the "xgamma" command-line tool (it comes with X nowadays, specifically the xbase-clients package in Debian), combined with your monitor's contrast and brightness settings, to make all the squares - and the splits within each square - as visible as you can.
- Note that people seem to claim that gamma levels around 1.8-2.2 are supposed to be right. This wasn't true for me; around 1.0 is the only range that didn't make things look totally insane. (1.0 is also the default for X.org's server, so this probably makes sense.) Perhaps X's gamma number is different from the one used by everyone else.
- In my experience with LCDs, the high range (rightmost columns) is the trickiest to get right. It's also the most important for modern "Web 2.0" web browsing, where they use a lot of minimally contrasting backgrounds for things like alternating row colours. So pay special attention to the bottom-right.
- You can put an xgamma command in your ~/.xsession file to make it run every time on startup. Reputedly Gnome and KDE also have gamma tools, but I'm not sure exactly how they work.
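For intuition about what that gamma number actually does, here's a minimal sketch (pure math, no X involved): gamma correction raises each pixel intensity to a power, which is why mid-greys, and therefore anti-aliased font edges, shift so visibly between a setting of 1.0 and 2.2.

```python
def apply_gamma(value, gamma):
    """Map a linear intensity in [0, 1] through a gamma-correction curve."""
    return value ** (1.0 / gamma)

# Mid-grey through a few common gamma settings: at 1.0 it passes through
# untouched, while 1.8-2.2 brighten it considerably.
for g in (1.0, 1.8, 2.2):
    print(g, round(apply_gamma(0.5, g), 2))  # → 1.0 0.5 / 1.8 0.68 / 2.2 0.73
```

The endpoints (pure black, pure white) never move; only the middle does, which is exactly where anti-aliased glyph edges live.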
February 11, 2009 17:06
I remember one really critical part of the Larry Smith Economics lectures I attended in university. It was the day he told us the secret that, he claimed, if it got out, would make the whole economy grind to a halt. Ironically, the economy is suffering dramatically, but not because the secret got out. In fact, the so-called "economic stimulus" plans being executed and/or proposed aren't really going so well, probably because people don't know this secret.
What's Larry Smith's big secret? Simple: Money is worthless.
Deep down inside, you know this is true. Money isn't good for much unless someone else is willing to take it from you in exchange for something with value. There's a lot of talk about Ponzi schemes lately; but money itself is the biggest Ponzi scheme of all. People invest real work in exchange for "shares" of this "economy" thing, in the hopes that they can someday redeem those shares - plus interest - for something useful.
Most of the time, the valuelessness of money doesn't really matter. As in a Ponzi scheme or a run on the bank, unless everybody tries to cash out at the same time, nobody ever notices that the bank didn't actually still have all the money you gave it in deposits. Conversely, unless everybody tries to actually exchange their money for goods and services all at once, nobody realizes that the economy didn't actually have all the goods and services you thought you could pay for. When people realize it all of a sudden, that's when you get inflation.
But people aren't worrying about inflation right now. They're worrying about deflation. Why? In fact, because the opposite problem has occurred: people were all cashed out, exchanging all their money - more money than they had, ie. credit - for goods and services. And now they've changed their minds. They want their money back so they can keep it and not obtain any goods and services after all.
Because, as it turns out, most of those goods and services that seemed so life-critical yesterday are kind of a waste.
But oh no! My job was producing those useless goods and services! And if people don't buy my useless goods and services, I won't be able to make money, which means I won't be able to buy food, which means I'll starve! PLEASE, SOMEONE, STIMULATE THE ECONOMY SO I DON'T STARVE!
Whoa. Stop. Take a deep breath. Think about what you just said.
You just said that if people like you don't keep producing stuff nobody really needs, then there won't be enough food.
Well, who the heck decided that "food" is in the category of "stuff nobody really needs?" Doesn't it make more sense that if people stop producing stuff nobody really needs, then there will be more time and effort available to produce stuff that people do need - like food and shelter?
It does make sense. Sadly, things aren't so simple.
The fact is, people use money as a placeholder - one that determines what people get to work on. The people who produce the food - even though there's more than enough food for everyone - won't give it to you unless you have money. And if you can't find something important to do, you can't make money, even if there is nothing important to do.
The depressing thing about the great depression of the 1930s, as well as the current potential one, is that there may simply not be enough important things to do to keep everyone busy.
It's hard to imagine a realistic solution to this problem.
I mean, we can't just give people food for free.
That would be ridiculous.
February 11, 2009 17:06
I mean it.
You really should read The Collapse of Globalism and the Reinvention of the World, by John Ralston Saul.
I've been a fan of John Ralston Saul for about 10 years, since I saw him speak at the University of Waterloo. I picked up this particular book a few weeks ago at Chapters on clearance discount for $10. It seems to be very unpopular; it only has one review on Amazon, and that one looks kind of fake.
And that's too bad. The book's only fault is that it sticks to the facts, backs them up really well with more facts, lots of references, and a good bibliography, and isn't sensationalist in the least. It puts things in an excellent historical perspective, pointing out interesting truths like:
- This is not the time in history where the borders have been the most open (the colonialist days had lots of "free trade" inside the huge empires);
- Global economic growth rates have been much less in the last 30 years (with globalization) than the preceding 25 (with lots of border controls);
- Political nationalism is on the rise despite the huge drop in trade tariffs;
- China and India have been doing pretty well and pulling up the global average growth rate; entire continents (e.g. Africa) have had negative growth in the last 30 years, compared to positive growth before that;
- Governments still have the power to control corporations, but they've given up that power willingly;
- Countries (eg. Malaysia) that closed their borders and pegged their currencies have been more successful than countries that didn't;
- Hedge funds are out of control and making a terrible, very risky mess by undermining the financial system (this was back in 2006!);
- Globalism is already dead and declining rapidly, but with little fanfare (the WTO protests reached their height years ago, and for good reason);
- Terrorism and guerilla warfare have to exist because conflict still exists but normal warfare is obsolete.
I admit it, the book is pretty dense and hard to read. I read another business book right after this one, and it had about the same number of pages and the same font size, but took about 20% as long to slog through.
But if you actually want to understand the economy and what's been going on in the world, it's worth it.
February 11, 2009 17:06
dcoombs asks a question about how PINs are used in the fancy new smartcard-enabled Visas vs. Mastercards.
Specifically, he notes that you can change your Visa PIN over the phone, which suggests that the PIN is stored on your bank's servers, not on the card itself. (He also notes that you don't have to store it on the card either; you can encrypt the signing key on the card, so the PIN is never stored at all, anywhere.)
As it happens, I've had some occasion to look into credit card payments in the past. (I do work at a banking software company, after all.) So while I didn't know the answer to the question, I knew where to look.
Where to look is EMVCo, the Europay Mastercard Visa Company, which publishes the EMV Payments Specification. Conveniently for our purposes, you can actually download that very specification from that very link, and learn more than you ever wanted to know about the communication protocol used in payment cards.
Now, the spec is long and boring, so I used the magic of full-text search to find what I was looking for. I alert you to section 5.2.6 of Common Payment Application Specification v1 Dec 2005.pdf (oh yes!), which discusses the various "Cardholder Verification Methods (CVMs)" that are used to... verify cardholders.
From this section, you discover the terms "offline PIN" and "online PIN," which turn out to mean what you might expect. Each card identifies its preferred CVMs. The former means that the card checks the PIN by itself; the latter means that the PIN gets checked by the bank. It appears that your card could require multiple CVMs, although I was too lazy to read in enough detail to be sure of that.
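The negotiation described above boils down to a priority list. Here's a toy model of it (the method names are informal labels, not the actual EMV byte encodings from the spec): the card lists its preferred methods in order, and the terminal walks the list until it finds one it supports.

```python
# Toy CVM selection: the card's ordered preference list vs. what this
# particular terminal can do. Names are illustrative, not EMV codes.
card_cvm_list = ["offline_pin", "online_pin", "signature"]
terminal_supports = {"online_pin", "signature"}

# The terminal applies the first method on the card's list that it
# actually supports.
chosen = next(m for m in card_cvm_list if m in terminal_supports)
print(chosen)  # → online_pin
```

So the same card can end up verified offline at one terminal and online at another, which is why you can't tell from the outside which method your bank actually prefers.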
So anyway, the "insecure" method dcoombs describes as being used by his Visa can definitely exist. But I guess we already knew that because it exists.
More interesting is the "more secure" method (offline PIN) presumably used by his Mastercard. The real question is: are they really using offline PINs, or do they just not let you change your PIN over the phone? I don't think we can tell, unless we construct a terminal according to the specs and ask our terminal to read the CVM list from the card :) So we don't really know if Mastercard is "more secure" than Visa; they just don't make it obvious. On the other hand, the spec says they could be "more secure" if they wanted; that feature exists too.
Now, I've been "quoting" the terms "more secure" and "insecure" above. The reason is that I suspect both methods are perfectly fine, and (as we'd hope!) vastly better for security than the old magstripe systems.
The key feature of a smart card is not actually that it keeps your PIN secure. Banks, I suspect, have rightly observed that keeping your PIN super-secret is not really going to happen. There are just too many ways to steal it.
For example, a common form of credit card fraud nowadays is to have fake card readers where they swipe the card and you enter the PIN, and it records the PIN and card number before forwarding it on to the "real" reader device that does the transaction. There is no way to prevent such a system from stealing your PIN; the only option would be to carry around your own keypad for entering your PIN, because you know that keypad isn't hacked... but nobody wants to do that, so forget it.
The other common way to steal your PIN is to watch you type it into a bank machine. Trust me, you're not as secret as you think you are. Or even if you are, the next guy won't be.
So let's accept that your PIN is really not that secure. What can we do?
Well, we can make it really hard to steal your credit card number. This is what smartcards do. As far as I know, the only way to steal the encryption key directly out of such a card is to do some awfully weird stuff to the card (X-rays, super-slow low voltage analysis, etc). Nobody in a corner store or restaurant is going to get away with doing that stuff to your card without you noticing, so you're pretty darn safe. When your card authorizes a transaction, it generates an authorization key for only that one transaction; it never reveals the card number itself, so a card reader machine can't steal it.
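Here's a hedged sketch of that last idea (the function names and message format are hypothetical, not the real EMV cryptogram structure): the card holds a key that never leaves its secure chip, and signs each transaction individually, so the terminal only ever sees a one-time authorization code.

```python
import hashlib
import hmac
import os

# The long-term key lives inside the card's tamper-resistant chip and is
# never exposed to the terminal.
card_key = os.urandom(16)

def authorize(amount_cents: int, counter: int) -> bytes:
    """Produce a one-time cryptogram over this transaction's details."""
    msg = f"{amount_cents}:{counter}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).digest()

# Two transactions for the same amount produce unrelated cryptograms,
# because the card's transaction counter has moved on; replaying an old
# one is useless, and neither reveals the key itself.
c1 = authorize(1999, counter=41)
c2 = authorize(1999, counter=42)
print(c1 != c2)  # → True
```

A crooked card reader that records everything it sees gets only stale cryptograms, which is the whole point: the card number and key never cross the wire.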
You could reverse-engineer your own card, but it wouldn't accomplish anything; if you really need to copy your own card, just ask your bank for another copy. (This problem is why the original "as good as cash" smart card idea wasn't so great. They carry around money and do transactions without the help of a bank - which means that if you can hack your own card, you have a license to print e-money. You don't want to give people incentives like that.)
So the reality is that as long as you don't tell your PIN to everyone, then the probability that someone both knows your PIN and steals your physical card (since they can't copy it) is extremely low.
The remaining question is whether it's secure to let people change their PIN over the phone. Well, nothing on the phone is very secure. But interestingly, even that isn't a big deal; they still need your physical card to make a transaction. They can steal your physical card and go change the PIN over the phone; in that case, they'll need to confirm some personal information. That seems like the most likely attack vector, but it only works if they manage to steal your physical card, which you'll probably notice pretty fast.
Also note that if all this analysis turns out to be wrong, they can just issue a new card that demands offline PIN and disables online PIN. Or vice versa, if it turns out there's something wrong with the offline PIN implementation(1) but online PIN is secure after all.
All in all, I think they did a pretty good job of it.(2)
(1) I can think of one way that offline PIN would turn out to be less secure than online: remember, a PIN is typically only four digits. Four digit passwords are stunningly insecure, protected only by the fact that these systems will shut down if you guess wrong more than n times, where n is a small number like five. But if you steal and hack someone's card, you can read out the key directly, and simply try decrypting it with every possible PIN (all 10000 of them); there's no lockout feature. Even if your PIN isn't "stored on the card," it's still as good as there. You're potentially better off having the card in one physical location and the PIN in another.
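To make that footnote concrete, here's a toy model (the key-derivation function is hypothetical, nothing to do with the real card format) of why the lockout counter is the only thing standing between a 4-digit PIN and an attacker who can read key material off a stolen card:

```python
import hashlib

def derive_key(pin: str) -> bytes:
    """Stand-in for whatever PIN-based key derivation a card might use."""
    return hashlib.sha256(pin.encode()).digest()

secret_pin = "4821"
wrapped = derive_key(secret_pin)  # what the attacker read off the card

# Offline brute force: no card means no lockout counter, so just try all
# 10,000 possible PINs until one matches.
recovered = next(
    f"{i:04d}" for i in range(10000) if derive_key(f"{i:04d}") == wrapped
)
print(recovered)  # → 4821
```

Ten thousand hash attempts take a fraction of a second on any modern machine, which is exactly the "as good as stored on the card" problem the footnote describes.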
(2) On the other hand, did you know that EMV (smartcard) support is optional in the fancy new contactless cards? Basically, EMV support is independent of contactless support. You can have either, neither, or both. Contactless payments are a great idea, but without EMV too, people could actually copy your credit card by passing a reader near your wallet. Crazy. I don't know for sure if this was ever deployed, but if the standard exists, I guess it was; if you have a contactless card (like Mastercard Paypass) without a smartcard reader on it, it's probably this insecure kind. Disclaimer: I am not an expert on this, I just skimmed some standards. Anybody who can confirm/deny, please send me an email.
Update (2009/01/18): Adrian wrote to say that he's tried PC Financial Mastercard and Washington Mutual Mastercard. Both have Mastercard PayPass (the contactless payment system) but no smart card. So that's a lovely security update.
Update (2009/01/18): ppatters wrote to note that various methods (X-rays, low voltage, cold, etc) that used to work will nowadays trigger self-shutdown sequences as an anti-reverse-engineering measure. The question then is: what's more likely, that someone will find a new method that still works on smart cards, or that someone will break through your bank's firewall and steal a list of PINs? Beats me.
February 11, 2009 17:07
I am often accused, sometimes by myself, of being a complete nutcase. This is one of those times.
"But wait!" I say to myself. "Sure, I might look crazy, and I might be crazy, but don't you at least agree that there might be a point to all this?"
I look back at myself suspiciously. "And if there is?"
"Well, that would mean..."
"Never! Don't even say it! It's all nonsense! You're just trying to make me go along with another one of your insane schemes!"
Anyway, yes, I did it. I put all of Windows under git version control.
You see, I installed Windows 98 inside Win4lin Terminal Server (the old Win4lin, before the useless qemu-based "Win4lin Pro" came out). To do it, I had to downgrade my Linux kernel to the last version that Win4lin ever made a patch for. But that's no big deal; it's an old machine anyway. It works fine with the old kernel, even in Debian Etch.
Now, Win4lin (the old one, the only one that matters) has the little-known but extremely useful property that it shares its filesystem directly with the Linux host system. That is, unlike VMware and other "pure" virtualization systems that use "disk images," the files in your Win4lin system map exactly to files in a subdirectory on your Linux system, usually ~/win. So there are files like ~/win/autoexec.bat, ~/win/windows/explorer.exe, and so on.
In the olden days, this was nice primarily because it meant the virtual Windows system didn't need to have its own disk cache. Also because Linux's disk cache and filesystem are fantastically more efficient than anything in Windows, by a very large margin. (Trust me, I've done comparisons. I'm sure other people have too. Maybe they just don't publish the results because they don't look believable enough.) Oh, and of course, you can access files on your Linux system without using Samba, which means things go way faster.
So those are all reason enough to use Win4lin. Or were, in the olden days. Nowadays, Windows 98 is looking a bit old, and the old Win4lin doesn't support Windows NT-based systems (like 2000, XP, and Vista). So to tolerate the limitations of Windows 98, you need a pretty good reason.
This week I found that reason: git!
I've been working on a project that requires me to develop plugins that are backwards compatible with old versions of MS Office, perhaps as far back as Office 97. I also need to test with all the newer versions: 2000, XP, 2003, and 2007. So here's the thing: all those versions, except 2007, work just fine on Windows 98, and Microsoft is really good at backward compatibility. So if I make a plugin for Office 97 on Windows 98, it should run with (close to) no problems on newer platforms. I should be able to just do a cursory check every now and then to make sure.
So, I thought, win4lin should be a good system to check all the old versions on. Then if I throw in a VMware with XP and one with Vista, I should be all set.
After setting it all up (which was admittedly a bit painful), I realized just how efficient Windows 98 is... at least compared to later versions. Did you know a base install of Win98 is less than 100 megs? Why, I have bigger source trees than that lying around nowadays.
...source trees... hmmm...
I had to try it, of course. I went into ~/win, typed "git init", and "git add .", and "git commit". Ta da, a working git repository with my fresh Win98 install.
Then I created separate branches, one for each version of Office, and installed them one by one. And now I can easily test new versions of my plugin: "git checkout office2000; win" or "git checkout office97sr2; win".
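The whole setup boils down to a handful of commands. A sketch of the sequence (the Office installs themselves happen interactively inside Win4lin, of course, and the paths are the ones from this post):

```shell
# The Win4lin filesystem lives directly in ~/win, so it's just files.
cd ~/win
git init
git add .
git commit -m "fresh Win98 install"

# One branch per Office version, each layered on the base install:
git checkout -b office97sr2
# ... run the Office 97 SR2 installer inside Win4lin, then:
git add -A
git commit -m "install Office 97 SR2"

git checkout master
git checkout -b office2000
# ... run the Office 2000 installer, then:
git add -A
git commit -m "install Office 2000"

# Switching test environments is now one command:
git checkout office2000 && win
```

Since every branch shares the untouched base-install objects, the repository stays small; you only pay for what each Office version changed.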
Now, the final trick will be to get this whole system running inside VMware. If that works, then the major limitation on this setup - the old kernel that will surely be missing a necessary driver eventually - goes away. I'll be able to use this setup forever to test Office plugins up to Office 2003.
Unfortunately, I can't advise you to try to duplicate my setup. I happened to have a valid Windows 98 license and a valid Win4lin license, neither of which you can buy anymore, and a collection of valid MS Office licenses acquired over time (including the most recent ones via MSDN).
In fact, the rarer Windows 98 licenses become, the more distinctive my amazing setup will make me. Bow down to my power, lowly normal people!
And all this makes me think of something I should add to my thresholds list: the day when a Windows XP install is "small enough" to put under version control.
In other news, ReactOS is looking surprisingly promising lately.
If I get a virus, I can 'git revert' it.
February 11, 2009 17:07
As you've probably noticed by now, my journal doesn't allow comments. This is for two reasons: first, because the software it uses is kinda basic (I like it that way!) and simply doesn't support them; and second, because it takes a lot of time to delete spam and stupid comments, and this increases the threshold of entry into the discussion.
Now, that doesn't mean I don't "enjoy" the comments occasionally. Sometimes my stuff gets picked up on syndicated sites and collects some discussion. Just so nobody gets the idea that I don't care, I'll summarize a few of my responses.
Oh, right!! Geez, I'm such an idiot. I obviously forgot to read the man page, which is how I missed the "--dont-destroy-all-my-data" option.
Dear "bring back the gold standard" people: Gold is also intrinsically worthless. I know this is true because I don't own any and I don't care, and if I had some, it would affect my life precisely not at all. The gold standard worked because people believed it would work, just like any other monetary system.
Also, being rare does not make you valuable. OMG! Smallpox is rare!! Sign me up!
It seems nobody was particularly able to criticize this article because they were stunned by my insanity. I aspire to this level of achievement in all my writing.
(But to the guy on reddit who thinks NTFS is faster than ext3: I use them both on a daily basis. I didn't do any pro-style fancy-pants double-blind study benchmarks, but... try copying 1000 small files sometime. The one that does it more than twice as slowly loses. I'm just saying.)
Thanks to everyone who writes to me or on the web about my articles, as usual. You're all great. No, that doesn't mean I'll be enabling comments.
January 29, 2009 18:27