
2006-01-06 »

Observations from my Trip to London

  1. You can't say "Trip to London" without people assuming you mean London, England, even though I actually mean London, Ontario. For some reason this doesn't happen with Waterloo or even, irony of ironies, New England.

  2. The Pontiac Pursuit is the first car I've driven in a long time that continually makes me think "awesome." Apparently due to my complete lack of media input lately, I'd never even heard of it before yesterday. Their web site even talks about its pulsating performance - as if it were a chicken, of all things! I wish I needed a car.

  3. After two days spent essentially in tech support for someone else's product, I now hate programmers even more than ever.

  3b. Also, Windows sucks much more than I usually give it credit for.

  3c. So does everything else, particularly commercial database servers and anyone who has ever written "sleep(10)" for a "wait for operation to complete" function. I suspect those people are often one and the same.

Java constness and GC

pcolijn wrote about some things he likes/doesn't like in Java. I'm going to disagree with one from each category:

constness: I think you have your inheritance backwards. An immutable instance isn't a subclass of a mutable instance where some operations (gag) throw exceptions by surprise. No, in fact, a mutable instance extends an immutable one. In other words, start with the immutable interface, which only has getThis(), getThat(), etc. From that, derive an interface that adds setThis(), setThat(), etc. People require one of the two interfaces on their objects, and all is well, and you didn't really need const after all. The method you suggested - pretending to implement the mutable interface and then throwing exceptions - is like doing exactly the same thing in C++, and just as evil. On the other hand, in C++, a parent class can talk about a function with "const" whatever, while a child can implement the "non-const" whatever, and things will work sensibly. But if you think about it, that's just the same as what you can do in Java.
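
Sketched out, it looks like this (the names here are just for illustration, not anybody's real API):

    // The immutable interface: all you can do is look.
    interface Point {
        int getX();
        int getY();
    }

    // The mutable interface extends the immutable one, not the other way around.
    interface MutablePoint extends Point {
        void setX(int x);
        void setY(int y);
    }

    // One concrete class can implement both. Code that only needs to read asks
    // for a Point and simply has no setters to call; code that needs to write
    // asks for a MutablePoint. Nobody has to throw UnsupportedOperationException.
    class SimplePoint implements MutablePoint {
        private int x, y;
        public int getX() { return x; }
        public int getY() { return y; }
        public void setX(int x) { this.x = x; }
        public void setY(int y) { this.y = y; }
    }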

Now for the part about Java you do like: garbage collection. I'm actually a big fan of GC as a concept, because the idea of not crashing all the time because of randomly overwriting memory kind of appeals to me. However, I've learned two important things about GC:

1) Not having to explicitly delete things doesn't actually mean you don't have to explicitly think about object deletion; it just makes you think you don't have to think about it, which is much worse. Hence, where C++ programs tend to have explosions, Java programs (seemingly universally) have nasty memory leaks and no good way to find them (because they're not "leaks" in the usual sense; if nobody had a pointer to them anymore, they would be auto-deleted). They also have non-deterministic destructor behaviour, which is very restrictive. A GC language is good as long as people actually think about object lifecycles, but my experience is they don't, and so GCs don't solve anything. (If you think about object lifecycles to the extent that you have to, then you turn out not to need GC because your smart pointers will all do the right thing.)
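
The typical Java "leak" looks something like this (a made-up sketch):

    import java.util.ArrayList;
    import java.util.List;

    class EventBus {
        // Lives as long as the program does. Everything added here stays
        // reachable, so the GC will never collect it - even after the rest
        // of the program has completely forgotten about it.
        private static final List<Runnable> listeners = new ArrayList<>();

        static void subscribe(Runnable listener) {
            listeners.add(listener);
        }

        // Nobody thought about when a listener's life should end, so there's
        // no unsubscribe(), memory grows forever, and no tool calls it a leak.
    }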

2) I heard that Java object creation/deletion is so slow that people tend to keep "pools" of the object types that get created/deleted a lot, and explicitly move those objects into and out of the pools. You know what that is? It's bypassing the GC so you can get explicit memory allocation/deallocation. Snicker. Again, this is nothing against GC specifically (which can be quite fast), but it's definitely a sign that all is not right with the world.
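
Roughly what those pools look like (a made-up sketch, not any particular library):

    import java.util.ArrayDeque;
    import java.util.Deque;

    class BufferPool {
        private final Deque<byte[]> free = new ArrayDeque<>();

        // "malloc": reuse an old buffer if one is lying around, else allocate.
        byte[] acquire() {
            byte[] buf = free.pollFirst();
            return (buf != null) ? buf : new byte[4096];
        }

        // "free": hand the buffer back by hand - explicit deallocation with
        // extra steps, in a language that promised you wouldn't need it.
        void release(byte[] buf) {
            free.addFirst(buf);
        }
    }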

2006-01-08 »

Assorted commentary

Markham/Richmond Hill still has the best bubble tea.

Blue Mountain is surprisingly non-bad, although I get the impression I must be kinda dumb, because everyone there kept telling me, "But you have better hills out near Montreal, don't you?" Yes, we do. But I'm not there, now am I? And the weather was nice.

Also, I needed an excuse to drive the Pontiac Pursuit some more. Mmm, pulsating. And I have to give it back soon. Sigh.

My failed attempt to defend Java

Flying bejeezus, pcolijn. There are some things I just didn't want to know. I quit.

Also, I think I'm getting a bit too technology dependent. I had to ask Google how to spell bejeezus.

Elections

No, I don't know who you should vote for. But I do know the best campaign tagline ever. "At least in Quebec, there's the Bloc." The rest of you poor saps don't really have much to choose from, do you?

On getting smarter

Since August sometime I've been a very different person. You might not have noticed. But now that I understand everything so clearly, it's very weird to keep talking to and dealing with people who don't understand, or who can't understand, or who refuse to understand. It's particularly eerie to argue with people who were so thoroughly brainwashed by the previous me that they actually thought I was right and now use my own fallacious arguments to explain why I'm wrong. It's like talking to a ghost.

But I'm not crazy. I'm sure of it. It's everyone else who's crazy. I may have to do something actually crazy just to make sure I can tell the difference.

2006-01-16 »

Historical vs. Model-based Predictions

We've had some discussions at work lately about time predictions, and how we never seem to learn from our prediction mistakes. We have two major examples of this: first, our second-digit (4.x.0) releases tend to be about 4.5 months apart. And second, our manual QA regression tests (most tests are automated nowadays) take about 2 weeks to complete each cycle, with several cycles per release.

Those two numbers - 4.5 months and 2 weeks - are what I call historical predictions: you simply measure how long an operation has taken in the past, and then predict that it will take that long in the future. As long as you aren't making major changes to your processes (eg. doing more/less beta testing, doubling the number of tests in a test cycle, etc), this is an extremely accurate method of prediction. Just do the same thing every time, and you'll predict the right timeline for next time. Since accurate predictions are extremely valuable for things like synchronizing advance marketing and sales/support training with your software release, being able to predict so accurately is a big benefit.
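
The whole technique fits in a few lines (durations invented for illustration):

    class HistoricalPrediction {
        public static void main(String[] args) {
            // The entire method: the next release will take about as long as
            // the last few did.
            double[] pastReleasesMonths = { 4.3, 4.6, 4.7, 4.4 };
            double sum = 0;
            for (double months : pastReleasesMonths) {
                sum += months;
            }
            // Prints 4.5: that's the prediction, and nothing more.
            System.out.println(sum / pastReleasesMonths.length + " months");
        }
    }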

But historical predictions have a big problem, which is that they're empty. All I can say is it'll take about 4.5 months to make the release - I can't say why it'll take so long, or what we'll be doing during that time. So it's very easy for people to negotiate you down in your predictions. "Come on! Can't you do it in only 4.0 months instead of 4.5? That's not much difference, and 4.5 months is too long!"

To have a serious discussion like this about reducing your timelines - which, of course, is also beneficial for lots of reasons - you can't use historical prediction. Trying to bargain with broad statistics is just expecting that wishing will make your dreams come true. I'd like it to take less than 4.5 months to make a release; wanting that won't make it happen.

To make it actually happen, you have to use a totally different method, which I'll call model-based prediction. As you might guess from the name, the model-based system requires you to construct a detailed model of how your process works. Once you have that model, you can try changing parts of the model that take a lot of time, and see the difference it makes to the model timeline. If it makes the prediction look better, you can try that change and see if it works in real life.
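
As a toy example of what I mean by a model (every step name and duration here is invented), you can write the release down as steps with durations and dependencies, and take the longest chain - the critical path - as the prediction:

    import java.util.List;
    import java.util.Map;

    class ReleaseModel {
        final Map<String, Double> weeks;       // how long each step takes
        final Map<String, List<String>> deps;  // which steps it has to wait for

        ReleaseModel(Map<String, Double> weeks, Map<String, List<String>> deps) {
            this.weeks = weeks;
            this.deps = deps;
        }

        // A step finishes after its slowest prerequisite chain, plus its own time.
        double finishTime(String step) {
            double latest = 0;
            for (String dep : deps.getOrDefault(step, List.of())) {
                latest = Math.max(latest, finishTime(dep));
            }
            return latest + weeks.get(step);
        }

        // The model's prediction is whenever the last step finishes.
        double prediction() {
            double total = 0;
            for (String step : weeks.keySet()) {
                total = Math.max(total, finishTime(step));
            }
            return total;
        }

        public static void main(String[] args) {
            // Invented numbers: coding, then a QA cycle and a docs pass that
            // both wait for coding.
            ReleaseModel m = new ReleaseModel(
                Map.of("coding", 12.0, "qa", 2.0, "docs", 1.0),
                Map.of("qa", List.of("coding"), "docs", List.of("coding")));
            System.out.println(m.prediction() + " weeks");  // prints 14.0 weeks
        }
    }

With those numbers the prediction is 14 weeks, and shortening the docs pass changes nothing, because it isn't on the critical path. That's exactly the kind of what-if question the model is for.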

Of course, successfully using that technique requires that your model actually be correct. Here's the fundamental problem: it's almost never correct. Worse, the model almost always includes only a subset of what happens in real life, which means your model-based predictions will always predict less time than a historical prediction.

Now back to the discussion where someone is trying to bargain you down. Your model predicts it'll take 2.5 months to make a release. History says it'll take 4.5 months. You did change some stuff since last time; after all, you're constantly improving your release process. So when someone comes and wants you to promise 4.0 months instead of 4.5 months, you might plausibly believe that maybe you really did trim 2 weeks off the schedule; after all, the right answer must be somewhere between 2.5 months and 4.5 months. And you give in.

We make this mistake a lot. Now, I'm not going to feel too guilty about saying so, since I think pretty much every software company makes that mistake a lot. But that doesn't mean we shouldn't try to do something about it. Here's what I think is a better way:

  • Remember, your model-based prediction is worthless (ie. definitely wrong and horribly misleading) unless it predicts the same answer as your historical prediction. Tweak it until it does. (There's a sketch of this check after the list.)

  • Each time you run the process, if your model-based prediction was wrong, look back at real life and note what parts you were wrong about, then update the model for next time. This might require taking more careful notes when you do things... maybe using some sort of... uh, schedulation system.

  • If you change your processes, start with your previous model (which must match with history!), then adjust it to match your new plan. Use the prediction from that. Remember: almost all changes to the model will not significantly affect the timeline. Most actions are not on the critical path. So you know your model is wrong if almost every change you make has some kind of significant effect. You can suspect your model is getting close if most changes don't have an effect.

  • If, after changing your processes, your model-based prediction was wrong (most common failure: it took just as long as history said it always does), then your model is wrong. Compare the new and old models, and figure out why your change was not actually on the critical path even though you thought it was. Try again.
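
The "tweak it until it matches" check from the first point really is about this dumb (numbers invented):

    class CalibrationCheck {
        public static void main(String[] args) {
            double historicalMonths = 4.5;  // what releases actually take
            double modelMonths = 2.5;       // what the current model adds up to

            double gap = historicalMonths - modelMonths;
            if (Math.abs(gap) > 0.25) {
                // Two months of real work the model doesn't know about. Until
                // that work is found and added, any what-if answers the model
                // gives are meaningless.
                System.out.printf("Model is off by %.1f months; "
                    + "don't believe its what-ifs yet.%n", gap);
            }
        }
    }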

I'm sure lots of people have written entire books about this sort of thing, but neither I, nor most programmers, and certainly not most managers, have read them. What they do instead is just ignore statistics completely and try random things, leading to a horrible pattern: "Darn it, you guys keep releasing stuff and it takes too long! We'd better drop all the weird parts of your process and do something more traditional." Then that doesn't improve the process, and it might even make things worse. But nobody will ever say, "Darn it, we're doing it just like everyone else, but it takes too long! We'd better try something randomly different!"

If you don't understand your underlying model, and can't explain why you do things the way you do, eventually someone will talk you into doing things the formal, boring, and possibly painful and inefficient way, whether it actually helps or not. And you won't have any firepower to talk them out of it.

And that's why you can't just use historical predictions, no matter how accurate they might be. Of course, you can't just use model-based predictions either, because we know they're inaccurate. Instead, the one true answer is the one where both kinds of prediction are correct at the same time.

2006-01-20 »

Choices make things worse

My company just created a new pricing combination called the "small business protection package." The idea is that it includes basically all the features we sell, with an inflated number of user licenses, along with an extended warranty, all at a price that's slightly discounted from the total of all those things put together. Now, it turns out that our resellers and customers just love it, even though actually what they would have bought before was probably less stuff at a lower price.

You've seen this before. It's the fast food "combo" model. People just hate adding up a bunch of small prices and making a big one; it makes them worry that they're not getting a good deal. But when there's one "standard" package with a lot of stuff and a simple price and they save money versus buying things separately, then everything seems easy. (The "save money" thing is mostly just an illusion. They're buying virtual stuff - software licenses - that they wouldn't have bought before anyway, so when we sell more of it and then give a discount, it's mostly the same to us, and the customer often spends more money.)

Anyway, simplicity is worth paying for. Who knew?

On a similar note, a mistake I made a couple of years ago was introducing a concept I called "dial-a-vacation" for developers. The idea is that, on top of your basic vacation allowance, you could trade away part of your salary for an equivalent number of vacation days. It's basically like taking unpaid vacation, only with the "cost" spread throughout the year, so you don't go broke when you're actually taking your vacation. Since "normal" unpaid vacation is mathematically the same, just less flexible, I thought this would obviously be a benefit.

It's not. People like getting more vacation days because it's an employment "perk" and it's "free." Trading away your salary for vacation days just hurts: you feel like you're losing something, not gaining something, even though technically it's equivalent.

Almost all job benefit programmes work on the same principle. You can give programmers a $20 t-shirt and raise morale way more than a $20 annual salary increase would do. You can buy a $2000 foosball table, spread among 20 people, and seem like a way cooler company than one that got everyone a $100 raise. Everyone wants eyeglasses added to our health insurance plan, even after we explain that eyeglasses aren't really insurance (either you just always need them, or you just don't) and so the "insurance" overhead is just giving money - money that could be your salary - away.

Just like all those examples, "free" vacation is worth a lot more happiness than the same amount in dollars. This literally means that you can be paid less and not have dial-a-vacation, and you'll be happier than if you get paid more and do get dial-a-vacation.

Psychology is strange. Oh well, live and learn.

And I won't even try to explain my plan for buying monthly subway passes in order to feel rich. But trust me, it's exactly the same idea.

2006-01-31 »

It Takes Two to Fail

It struck me today how management success is kind of an OR operation. So many things in life are AND: if you do this and this properly, then it works! But if you miss that one step, then boom!

But management is different. On the one hand, if you're a good manager and you align things just the right way, you can take a reasonably good group of people and make them massively successful. (Example: most big companies.) But on the other hand, even if you suck like crazy at management, if the people you manage are really great, they can succeed despite the way you've kind of set them up to fail. And of course, there's every point in between those two extremes.

But what's interesting is that because of this effect, when you do manage to fail (or your project is late, or buggy, or whatever), then there's always someone to blame it on: the person doing the work. Because obviously the person doing the work didn't rise to the challenge, blah blah, and look, it sucks!

I think a proper manager's job is to reduce the challenge, so nobody will fail to rise to it. But maybe not eliminate the challenge entirely, or else it's too boring.

UPDATE: I didn't like the old title. Sorry.
