Keep it beautiful
Everything here is my opinion. I do not speak for your employer.

2005-11-03 »

Restructuring

It often seems like a good idea to throw out all your code and start again, especially when you just took over someone else's code and it's now your job to maintain it. The reason it seems like such a good idea is that the problem space sounds so simple... but the code looks so complicated. So you throw it away, and write it from scratch. That's when you realize why the old code was so complicated: there were lots of special cases for all the weird bits that you didn't realize were part of the problem space in the first place. So your rewrite gets big and complex too. If you're very smart, at least it eventually gets to be better than the original; but even then, it takes a long time. And if you're unlucky, your rewrite is merely bigger and slower, not better.

Companies are the same. Company processes are designed and encoded (eg. through forms or software systems) over a long period of time, and they cover lots of weird special cases. When you look at the problem space, it sounds so simple, and you wonder why the solution has to be so complicated. So you throw it away and start from scratch, redesigning a bunch of things to better align with your way of seeing the world. But it means you've lost the benefit of all the tweaking you've been through over the last few years; perhaps your overall model is better, but the surface details are lumpy, inefficient, and downright wrong at first. If you're very smart, hopefully it eventually gets to be better than the original; but even then, it takes a long time. And if you're unlucky, your redesigned company is merely bigger and slower, not better.

With any great redesign, you have to be constantly on the lookout for mistakes. No new design or implementation is perfect right from the start. The biggest danger is to assume that it is, and find out only very late that you're wrong. (That way is called the "waterfall method," and it's very inefficient.)

Looking for a job in Toronto?

Speaking of restructuring...

NITI has a job posting for a QA Manager in Markham, Ontario. If the corporate version of the job description looks uninspiring, see instead my version: Evil Death Ray. Actually, Evil Death Ray is a "QA Person", while the new job is "QA Manager." But the idea is the same, only more so.

You'd be working on testing our award-winning Nitix distribution of Linux, the only one that has ever been successful in servers for small business. Yes, we are open source friendly. And you'd get to play with our super fun, room-filling Project Death Ray test cluster. As well as annoying me, one of the shoddy developers whose code you'd be breaking.

2005-11-05 »

Design Requires Persistence

People were teasing me a bit over the last few days because of my insistence that the room where we were relocating my desk was "evil." This wasn't me being a pest and refusing to move - this was the simple fact that the room's layout so violated my sense of aesthetics that I had to, in good conscience, refuse to work there.

Since I finally understood things back in August, one underlying, general rule has become extremely clear to me: good "design" consists of finding a way to satisfy all the constraints at once. I've always known that choosing a side in an argument and violently adhering to it, in opposition to other goals or other points of view, just felt wrong; but I had thought the alternative was compromise. It's not. In a compromise, each side gives in a little, so that nobody loses too badly. But the right solution to a problem is one in which both sides get exactly what they want, even when the two initially seem to be opposites.

It is not about sacrificing the good of the one for the good of the many, or vice versa; it's the sacrifice itself that is wrong. You have to find a way that the good of the one is served perfectly at the same time as the good of the many.

In the end, after many hours, with advice from many people around the office (who tried to be helpful, despite visibly tolerating my obsessiveness), we finally found a "right" solution: no desk had to sacrifice itself to an inconvenient orientation, and yet the room at last found its coherency.

Was this worth it, to fuss for hours just for the trivial details of the layout of a single room? No, probably not in itself. But the larger purpose was to test myself, and to prove a point. The most important lesson of my life so far is that you don't have to settle for contradictions. This is not a lesson that has stood unchallenged. Better than that: it has been challenged, and it won.

Implementation Details

Special thanks to mich, whose tolerance was not visible, and, moreover, who actually did the work of "implementing the final design," so to speak.

2005-11-10 »

On Naming Things

If I had a battleship, I would name it the HMCS Impediment.

Slogan: "It's mostly the name that gets in the way."

2005-11-12 »

Religion and Non-compromise

Today, while wandering down the street, I accidentally learned about Falun Dafa (aka Falun Gong), a Buddhism-influenced religious movement from China.

I won't try to explain their whole story in as much detail as it was explained to me. But the part that stuck with me (probably due to selection bias, of course) was their explanation of persecution by the Chinese government. Some religions would explain it away as "There's a reason for everything", or "God is punishing us", or whatever. That view never really worked out too well for me. But my local Falun Dafa representative explains it this way: "Suffering is always wrong; there is no good reason for it. But when there must be suffering, the right thing to do is to endure it, not try to avoid it." And then, of course, you launch an international campaign involving millions of people to try to get rid of the cause of the suffering.

Why is this interesting? Because it parallels my earlier comments on stupidity: when stupidity is forced on you, as an individual you have to take the "less stupid" of your available options. But it's the core stupidity itself that is wrong; that's what's forcing you to do something stupid yourself. You have to make the root cause of the stupidity go away.

What does all this have to do with programming? Uh, er, use your imagination. But it all fits together in the end.

2005-11-16 »

Weasel Words

Adrian had some interesting comments about how if someone doesn't say something in the simplest possible way, there's probably a reason.

Someone asked me today about a comment I made in one of my papers, and I thought about it in those terms.

    Windows, although of course nothing is perfect, makes a great desktop system.

I'm trying to appease two opposite types of people with this sentence: people who like Windows, and people who don't. There are lots of IT people in both categories. I need to bring both of them around to agree that our system is better, at least in this particular case. See all the things I'm doing with only a few words:

  • Nothing is perfect. Of course. I've got nothing against Windows, you know, but...

  • Windows makes a great desktop system.

  • Windows, by implication, doesn't make such a great server system, or I would have just said it "makes a great system" or something.

  • Windows' imperfections are less important than its greatness; that's why it's a subordinate clause instead of the main clause or an ending.

  • The end of a sentence sticks with people more than the middle (this is what subordinates subordinate clauses in the first place). We end on a positive note.

  • The next sentence is about server vs. desktop, not imperfections, so the end of the sentence leads into the next one.

Okay, so maybe I massively overanalyzed this one. But massive overanalysis is what I do, really. The real question is: was I really thinking all that while I was writing, or did I just make it up afterwards and make it sound good? That answer is the key to my personality, I think, so don't expect me to just give it away. :)

Obligatory Correlation to Coding

For the advogato audience, here's how it all ties into programming. I wrote earlier about how restructuring to simplify a design doesn't really work, because you lose all those hidden details that deal with the many special cases.

Well, there you go. When you're looking at a program, ask yourself why it's so complicated. Give people the benefit of the doubt. Sure, maybe you can find a better way to do it. But make sure you first know what "it" is.

2005-11-19 »

Societal Interstructures and Bigness of Thought

Our company has lately been working at partnering with several other "ISV" companies who want to use our product as a platform for their product. After being basically screwed around by one of the larger of those ISVs in the last few days (nothing really serious; just super annoying), I came to this conclusion:

    Trust people in your company, but don't expect to trust people in other companies.

And therein lies a very interesting lesson, which I will attempt to relate back to software via a mild digression in the opposite direction.

One of my pet theories of capitalism (or at least, the non-insane variant practiced in Canada) is that, unlike idealistic theories like communism or libertarianism, capitalism tries to combine the best of both worlds:

  • It is impossible to centrally control an immensely complex system, like an entire country's economy, so we don't try. We implement a complex, mostly self-organizing system instead, by carefully controlling the rules of the game.

  • Simpler systems, like small groups of people, are much more efficient when organized as cooperative, not competitive, groups with a centrally organized set of goals. That's why employees of any particular successful company aren't generally set in cutthroat competition with one another.

Why does it work? Because up to a certain size, a single mind can hold and optimize the entire structure. I don't mean just the person at the top of the pyramid; I mean that, in a proper organization, anyone can see and understand the structure they're working in. That means they can understand other people's goals and how those goals fit in with the big picture. When you can do that, you can resolve your differences of opinion by finding the one "right" non-compromise answer. In other words, you can work efficiently.

In groups that are too large, this is impossible. You really can't understand why the person you disagree with is doing what he's doing; or worse, perhaps he's doing that because his goals are actually different from yours and the system is too complicated to find a non-compromise solution. That's why we have multiple companies competing with each other, and it works better than if they just tried to all get along.

But what exactly is too big?

This is the fun part. The last few decades have seen a huge increase in the number of smaller companies, while simultaneously bigger companies have merged and gotten even bigger (and fewer). This is because of two completely separate effects.

First, big companies get bigger because of technology: with better communication and management technologies, it's possible to centrally organize larger and larger groups. What used to be impossible with pencil-and-abacus accounting systems is now possible with computers. This trend should continue, and big companies will be able to get bigger.

The second effect is weirder. It's also because of technology, but causes exactly the opposite effect. With technology, smaller groups can do more and more complex things. Complex things are difficult to manage centrally.

So which companies get bigger? The ones doing simple, parallelizable things. Kraft mass produces food products in giant vats. Various companies massively drill for oil. GM mass-produces cars. Sony mass-produces electronics.

And IBM and Sun mass-produce and recombine Java objects that go on to mass-produce billable consulting hours. Meanwhile tiny little companies start up, produce and recombine open source tools, and (sometimes profitably) produce complex, not always open source, products in small quantities, and almost never in Java. Aha. And here we are back in the world of software.

Linus's rule for functions: The maximum length of a function is inversely proportional to the complexity and indentation level of that function. Take that to the world of OO, and the maximum size of a class is inversely proportional to its complexity. My claim is that it's also proportional to Bigness of Thought (BoT), that is, the amount of complexity of this type that can be held in a human brain - the particular brains working on your project - at once.
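
To make that concrete, here's a contrived sketch in Python (every name here is invented for illustration, not from any real codebase). The nested version isn't wrong, but each level of indentation eats into how much of the function a reader can hold at once; extracting the inner loop buys that headroom back:

    def ship(item):
        print("shipping", item)    # stand-in for the real work

    def process_orders_nested(orders):
        # Three levels deep: by Linus's rule, this function had better
        # stay very short, or be restructured.
        for order in orders:
            if order.is_open:
                for item in order.items:
                    if item.in_stock:
                        ship(item)

    def process_orders_flat(orders):
        # Same behaviour with the inner loop extracted: each piece is
        # now one small thought.
        for order in orders:
            if order.is_open:
                ship_available_items(order)

    def ship_available_items(order):
        for item in order.items:
            if item.in_stock:
                ship(item)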

Java encourages teeny tiny objects that do only one tiny thing, hopefully well. That's because many Java programmers are people like financial analysts, who never really wanted to be programmers, and so their BoT for programming concepts is teeny tiny. These people can program in Java. You might need a horde of them to get anything done, but they can get it done. Pretty neat, really.

Some people have a very high programming BoT. I think I fall into that category. The danger for those people is that they can write amazing programs nobody else can understand, which, in the end, doesn't do anyone any good, because it's impossible for anyone else to maintain them after the author moves on. Worse, even people with very high programming BoT can't understand the programs, because everyone with high BoT has a different mental structure. I'm great at remembering concepts, but can't remember my postal code; other people can remember dozens of interest rates and formulas with no problem, but they need to draw a picture to see any difficult concept. Both groups have a high BoT, but for totally different things. Both groups can even make very good programmers, but for totally different programs.

What's the point of all this? Well, I could write for hours about the results, but my main point is: the BoT of your group defines the maximum implementation complexity of your objects. When the overall project exceeds your group's BoT (which is virtually always), you need to subdivide your objects into sub-objects with understandable interfaces that reduce the BoT needed to build the object that combines them. And, as technology (eg. programming languages and libraries) improves, the amount of organized complexity you can squeeze into the same BoT increases.
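
Here's a minimal sketch of that subdividing move (a made-up example, not from any real project). Nobody has to understand all three sub-objects at once; each interface is one small thought, and the object that combines them stays tiny:

    class Parser:
        def parse(self, text):
            return text.split()

    class Planner:
        def plan(self, tokens):
            return sorted(tokens)

    class Executor:
        def run(self, plan):
            return " ".join(plan)

    class ReportEngine:
        # Combines the sub-objects; its own implementation stays well
        # inside anyone's BoT.
        def __init__(self):
            self.parser, self.planner, self.executor = Parser(), Planner(), Executor()

        def report(self, text):
            return self.executor.run(self.planner.plan(self.parser.parse(text)))

    print(ReportEngine().report("c b a"))    # prints: a b c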

It's just like capitalism. If you can't centrally manage it all, you split it into two companies to make it achievable. If you have two companies but you could centrally manage it, you merge the two companies to make it more efficient.

And if your partner company ("library programmer") has some idiots ("bugs") you're going to have to ask them nicely to get your problems solved, because your poor brain doesn't have the capacity to hold all the details of the whole picture all at once.

Interesting Side Notes

Compromise (solving each person's problems poorly, instead of solving them all perfectly at once) should be necessary only when the problem exceeds the BoT of all affected parties.

Concepts near the limit of your BoT are very difficult for you to explain to anyone else unless they have an exceedingly high BoT. People with a low BoT, but who manage to understand a particular concept anyway, will be good at explaining it.

It is possible to reexplain difficult solutions from one BoT domain such that they make sense to people in another BoT domain. For example, some concepts can be explained using diagrams, even if the person who solved the problem didn't need a diagram to do so. This is sort of like translating from French to English; something is bound to get lost, but it's better than nothing.

By extension, UML is about as useful as, say, Babelfish.

2005-11-27 »

Predictions and Promises

I have been pondering the Great Dichotomy between the two offices of NITI a lot lately. We have one office in Toronto (sales, marketing, support, manufacturing, etc), and another in Montreal (R&D), and their cultures are very different. Neither is exactly wrong, but people from one culture have a lot of trouble understanding the other.

I think I now at least partly understand the difference. And, surely to the joy of the more technical readers here, I can explain it in terms of task scheduling algorithms.

In Montreal we use a system called Schedulator that I originally "designed" (it has since been rewritten). It basically takes Joel Spolsky's Painless Software Schedules essay and automates it.

Schedulator, like Joel's original essay, is about schedule prediction. It helps you predict when your project is going to reach various milestones. Used under the right conditions, it can work very well.

Now what happens if you get some surprise bugs in an earlier release? Joel didn't say anything about that. Well, what happens is your schedule will slip, because high-priority tasks insert themselves in front of everything else. Schedulator manages this automatically, and each day you can look at a pretty graph of your project schedule and watch the end date slip further into the future.
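
The mechanism is easy to sketch. Here's a toy model in Python of the prediction side (nothing like the real Schedulator's code; every name and number is invented), showing exactly how a surprise high-priority bug drags the end date out:

    from datetime import date, timedelta

    def predict(tasks, start=date(2005, 11, 19), hours_per_day=6):
        # Tasks are worked one at a time in priority order (lowest
        # number first), so anything urgent pushes everything else out.
        elapsed_hours, finish_dates = 0.0, []
        for priority, name, estimate_hours in sorted(tasks):
            elapsed_hours += estimate_hours
            finish = start + timedelta(days=elapsed_hours / hours_per_day)
            finish_dates.append((name, finish))
        return finish_dates

    tasks = [(5, "new feature", 30), (5, "polish UI", 12)]
    print(predict(tasks))               # the original pretty graph
    tasks.append((1, "surprise bug in old release", 18))
    print(predict(tasks))               # everything slips by 3 days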

Inside R&D, there are other mini-deadlines as well. We have this concept of a Zero Bug Bounce (yes, stolen directly from Microsoft), where the idea is that all tasks are either complete, found too recently to be reasonably fixed, or shovable into a future release. And it's someone's job to predict when the next bounce will be, then make sure you get done on time.

Hey, what's this "make sure" business? Schedulator updates its predictions in real time, right? So the bounce date might slip, but only for a good reason, right? So there's nothing we can do, right?

Sort of. The problem is, if the last release-critical bug just doesn't seem very important, something more important will always come up, the new thing will jump to the front of the schedule, and the bounce will never get done. Thus, we have to introduce a form of deadline-based prioritization as we get very close to a bounce date. In the deadline-based system, you convert your prediction into a promise: that is, Schedulator tells you, as of right now, when you think you can get it done. You add a bit of time, just in case. And then you promise that you will absolutely have it done by that time, no matter what.
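
In code terms, the switch looks something like this (again, a sketch of the idea only, not the real system): the prediction gets padded into a promise, and the promised task gets pinned above anything that might arrive later:

    from datetime import date

    PROMISE_PADDING = 1.25    # "add a bit of time, just in case"

    def promise(predicted_finish, start, padding=PROMISE_PADDING):
        # Convert a prediction into a commitment by padding the
        # remaining time.
        slack = (predicted_finish - start) * (padding - 1)
        return predicted_finish + slack

    def pin(tasks, name, top_priority=0):
        # Deadline-based prioritization: the promised task jumps above
        # anything that could otherwise insert itself in front.
        return [(top_priority if n == name else p, n, est)
                for (p, n, est) in tasks]

    start = date(2005, 11, 19)
    print(promise(date(2005, 12, 1), start))    # Dec 1 predicted -> Dec 4 promised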

The same overall method applies for software final release dates (sooner or later, you just stop fixing bugs - even important-seeming ones - in the old version so you can finally just finish the new one). This method feels wrong; we're raising the priority of obviously lower-priority tasks above obviously important ones. Developers want things to be simple; they like prediction, but they don't like keeping promises that force them to break their priority scheme.

Here's my big insight. Marketing people are exactly the opposite. They don't care about predictions at all. They only care about promises. You can predict bug fix times, bounce dates, or anything else, but in the end, they want you to commit to a release date, and they want to do their work, confident that you'll be done by the date you promised. Exactly what that date is isn't very important; that you guarantee it is what's critical, because otherwise they can't properly do their thing.

And marketing-type people carry this preference for promises over predictions a very long way. Joel's article, above, talks about how Microsoft Project is useful, but not for writing software. I now understand why: the type of work is very different. Some tasks a salesperson might have to do - like arranging a meeting - can take two weeks, but only, say, 5% of their effort during that time. Software developers work mostly linearly (one 100% task at a time), while salespeople do a lot of tasks in parallel. So it's easy for a salesperson to promise, "I'll have this meeting set up within about two weeks, no problem." Even if something new and "more important" comes up, they don't delay the previous task; they just do one more thing in the same amount of time. But after a person gets sufficiently heavily loaded with 5% tasks (twenty of them and you're at full capacity), each new task does delay the work, the same way it would for a developer. That's the exception, however, not the rule, because simply not overloading your salespeople dodges the problem.

A couple of weeks ago, the CEO told me, "I think we need a Schedulator for the rest of the company." This was true, in a way: we have people in sales and marketing who are having trouble making and keeping their due-date promises. Schedulator, in contrast, was created to help developers, who were having trouble making predictions. It doesn't help them keep promises (although better predictions are one necessary part of saner promises); in fact, unless you're careful, it gives them an excuse for breaking promises when the predictions change. The design we came up with for a "marketing schedulator" is actually totally different from the original Schedulator. It involves a central, top-down plan from Microsoft Project, and only three bits of information fed back from individuals: notes about each task, % complete, and a checkbox: "in danger of missing the due date." Despite your best intentions, you might still sometimes have to break promises; but on the (hopefully rare) occasions when you do, now you have to check the checkbox and have the CEO get angry at you.
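
The whole per-person feedback record can be sketched in a few lines of Python (field names are my invention; the real design presumably lives in Microsoft Project plus whatever glue we end up writing):

    from dataclasses import dataclass

    @dataclass
    class TaskStatus:
        notes: str                   # free-form notes about the task
        percent_complete: int        # 0 to 100
        in_danger: bool = False      # "in danger of missing the due date"

    def notify_ceo(status):          # hypothetical escalation hook
        print("CEO is now angry about:", status.notes)

    status = TaskStatus(notes="venue booked, awaiting quotes",
                        percent_complete=60, in_danger=True)
    if status.in_danger:
        notify_ceo(status)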

I suppose the ultimate Schedulator of the future could combine the two concepts. It would predict dates, then help you promise them by auto-increasing the priority and moving tasks around so that future predictions always leave your existing promises intact. Then everyone on both sides of the fence could use it. Meanwhile, I think we need two systems, which will be strangely perpendicular to each other.

