Wow, has it really been that long since I posted something here? I guess I'm either lazy or overworked. (In fact, those two things are tightly related. Or the same?)
Branch Constraints Revisited
It's been about 8 months since I posted the original paper and started implementing my own advice. So, how did it go? I've learned a few more things since then, but the advice turns out to be correct - math is a wonderful thing - and understanding it has led to some definite improvements in our inter-release time and bug counts. One missing detail: I didn't originally consider inverted CVS branches, where you create unstable branches and only merge into the release branch when the code stabilizes. This complicates the math considerably, but can allow shorter release cycles, especially for "big" features. There's more for me to say eventually...
NITI products get lots of positive reviews in the press, but it seems that every computing product does. I used to think that this was because, as "everyone knows", you simply pay off a magazine and they give you a good review. Therefore I was suspicious when they - universally - not only didn't ask us for money before reviewing our products, but also gave us extremely positive reviews despite our lack of kickbacks or advertising. Someone finally explained it to me a few months ago: computer magazines depend on getting free samples of products in order to review them. If they got a reputation for putting out bad reviews, nobody would send them stuff to review. Only positive reviewers survive. So all reviews - even comparative ones - only say positive things. It's just that some are less positive, and some are more positive. If your product is really seriously sucky, you'll maybe get a neutral review (facts only), but you have to be really, amazingly bad.
Then I talked to a musician on the plane from Montreal to Thunder Bay, where I'm visiting for the Christmas holidays. He reminded me that in music, it's not the same at all: there are lots of music reviews that totally trash the music in question. And then he told me why that is: while in computers, a negative review could destroy you, in music, the worst possible thing that could ever happen to you is a "lukewarm" review. Art is about stirring up people's emotions, and, love it or hate it, as long as you care, they've been successful. So the hardest music review to find should be the lukewarm one.
Architecture, Implementation, and Glue
Speaking of plane-musician conversations, we also had some discussion about electronic music and the tendency to endlessly remix things, to the point where it's now an entire art form to simply mix other people's music together well. At the opposite extreme, I saw a concert by Kalmunity - Live Organic Improv in Montreal a few weeks ago. They generate their music in real-time, using only non-electronic instruments, and they're very good at it. Somewhere in between is a "traditional" musician who composes his own music and then assembles a band to play it with.
So programming is an art form, right? I see some parallels here: the above three types of musician correspond to gluers, virtuoso coders, and architects, respectively. I've been thinking about this a lot, because I'm trying to find a better way of organizing software development teams. Most effort in this area concentrates on finding a great architect (composer) and then combining him/her with a great team of virtuoso coders. If you can do that, you're doing really well. But open source, like electronic mixing, makes possible a new kind of developer that didn't exist before: the gluer. This person is an architect, in a way, but the architecture is imposed after the individual components have been built. The open source world doesn't have a lot of great architects or gluers. It seems to me that if you want to maximize your success with open source, the best way to do it is to become an excellent gluer yourself - you can become famous for assembling other people's work and making a final product. KDE is more famous than KMail.
I've been playing more with document correlation in the last couple of days, this time trying to subdivide documents into groups. This is based on the vague feeling that our tech support call logs probably have some kind of trends, and if we could just identify the "most common" kinds of calls automatically, we could focus more effort on improving the product to get rid of that kind of support call. Auto-categorization is kind of screwy, though, because it might just as easily group documents by the support technician's name instead of by problem type - and that's obviously not what I want in this case, but I have no way to express that. (And don't even talk to me about metadata! I would have an entire rant about metadata, if anyone ever bothered to tell me what it was!)
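The failure mode is easy to demonstrate with a toy sketch - plain word-count cosine similarity, and the sample logs, `tokenize` helper, and stopword list are all made up for illustration, not anything from our actual system. When documents are short, incidental metadata like the technician's name can dominate the similarity score, and the only crude fix is to strip those tokens out by hand:

```python
import math
from collections import Counter

def tokenize(text, drop=frozenset()):
    """Lowercase, split on whitespace, and discard any tokens in `drop`."""
    return [w for w in text.lower().split() if w not in drop]

def cosine(words_a, words_b):
    """Cosine similarity between two bags of words."""
    a, b = Counter(words_a), Counter(words_b)
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical support logs: two handled by "alice", one by "bob".
logs = [
    "opened by alice closed by alice printer jams when duplexing",
    "opened by alice closed by alice vpn drops hourly",
    "opened by bob closed by bob printer jams on legal paper",
]

# Raw similarity of log 0 to the other two: the repeated technician
# name makes the two "alice" logs look most alike, even though log 2
# describes the same printer problem as log 0.
raw = [cosine(tokenize(logs[0]), tokenize(l)) for l in logs[1:]]

# Strip the metadata-ish tokens and the grouping flips to problem type.
drop = frozenset({"opened", "closed", "by", "alice", "bob"})
filtered = [cosine(tokenize(logs[0], drop), tokenize(l, drop))
            for l in logs[1:]]
```

The catch, of course, is that building that `drop` list requires already knowing which words are "metadata" - which is exactly the thing I have no way to express.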