
2006-07-06 »

Dynamic Equilibrium

I wrote before about precious instability, the idea that I design my life in an "unstable" fashion, then carefully control it, in order to make it more manoeuvrable. For people who know me, some recent events might have put that into a bit more context.

But one thing I realized since then is that this concept is much more widely applicable than just me and my idiosyncratic ways. Modern high-tech business requires this kind of instability, but only when it's in the right places.

Put simply, there are two fundamental ways to manage any project: creatively (ie. a prototype) or using a well-defined process. Well-defined processes are efficient and scalable; creative solutions are resilient and flexible. (I wrote about efficiency vs. resilience before too.) An obvious way to balance the two is what I've been trying to do lately: do it (whatever "it" is, even if it's a business activity) the first time as a creative prototype, then try to generalize what you did into a repeatable process. As with software, making a mostly-right generalized design is usually possible, for a good designer, after you've done it about twice creatively.

But once your process is well-defined, then what? I've experimented with the "hand it off entirely to process-driven people" technique, but that often doesn't work very well, because if your process isn't perfect at the start, process-driven people aren't very good at fixing it. Meanwhile, trying to find some middle ground ("we'll follow the process, but bend the rules whenever necessary") is a disaster, because it's not a combined solution, it's a compromise, and it ends up being neither very efficient nor very creative. Gross.

I don't exactly have a tried-and-true solution to this, but here's what I'm thinking. Joel Spolsky has written about The Development Abstraction Layer, which is a partly-right variant of what I'm going to suggest. What he wants is a programmer's ideal world: the programmer just builds what he wants, which is assumed to be the right thing, and the rest of the company is an "abstraction layer" that gets all the annoying repetitive cruft out of the way. In terms of creativity vs. processes, that means the "developers" are creative, and the rest of the company delivers things based on defined processes (or at least inter-group communication follows defined processes).

The part where I disagree with Joel is the part where I've already specifically screwed up so I figure I can serve as a warning to everyone: developers shouldn't be the ones running the show. Not quite. Smart people who get stuff done (another Joelism) need to be running the show, but it's not about programming and designs and version control and operating systems and XML. It's about business and customer relations and good communication and... being smart, and getting stuff done. It just turns out that Joel's kind of programmer is exactly the right personality type... if they're deliberately caring about the right things, which is more than just the software.

So how about this. Classify your people into two categories: the creative solutions people and the process-driven people. Don't separate them physically or logistically; they have to work together. The creative solutions people design and update business processes. The process-driven people execute those processes - maybe only after the first prototype - and give rapid feedback on results. Don't worry, Joel, both groups should be smart and get stuff done. But they're two different kinds of people. Some very smart people really have a great time fixing bugs and optimizing failure rates and avoiding creativity at all costs. (Seriously, there are lots of people like this!) These aren't my favourite people to work with, but they're extremely useful because they like doing exactly the kind of stuff that I don't.

At the same time, process-driven people risk falling into dangerously stable ruts and getting blindsided when changes become necessary, because they enjoy their stability and will try to retain it at all costs.

So as a business designer, you need to set up the incentives of the creative and process people correctly. My suggestion? Don't let the process people define their own processes. Make them responsible for executing defined processes efficiently and make creative people responsible for changing the processes to make them efficient.

I call this "dynamic equilibrium," after the term from high school science class. It looks pretty stable, because the majority of people are sitting around executing repeatable processes. It's just that it's not really stable, because exceptions will come up, and when they do, weird stuff happens automatically to re-stabilize the system.

Disclaimer: I haven't tried out this advice yet, but it sounds like fun.

Footnote

Crap, I really wanted to work in my example about airlines losing my baggage, but it didn't fit... so here it is anyway.

Airlines process absurd amounts of baggage every day very efficiently, and lose a vanishingly small percentage of it. That's an efficient business process.

But every now and then they lose a bag, and they don't know what to do, because if they knew what to do, it wouldn't be lost, sort of by definition.

You need business process people around to deal with the majority of cases, where the bags aren't lost. And you need creative people around to handle the exceptions, learn from them, and fix the processes so even fewer bags get lost.

2006-07-12 »

A few loosely correlated comments this time.

Stubbornness is a Virtue - Sometimes

I've been thinking a lot lately about what makes someone a good high-level leader. There are lots of things to consider, of course, but here's one of the things that has struck me most in the last little while: the best leaders are stubborn and also good listeners.

What, it's possible to be both?

Yes. People who are stubborn persist in a particular way of thinking despite encountering huge resistance. If you're trying to convince someone about a revolutionary new idea that will solve their problems, stubbornness is critical. Meanwhile, people who are good listeners adapt their thinking when someone convinces them that they've made a mistake or were missing some information. If you're wrong - which is frequently, if you have a lot of revolutionary ideas - you have to be a good listener in order to correct your mistakes fast.

The proper combination, though, is important. The best leader will persist in thinking something until someone properly convinces them otherwise. Once they've been convinced, then they change their mind and they're stubborn in the new way. That's not flip flopping; that's just a smart way to learn from your mistakes.

If you're missing one of those two traits, you'll fail in one of two very common ways: you'll persist in believing something stupid even after it's been made totally clear and obvious to everyone else that it's wrong. Or else you'll never settle on a single direction, constantly changing your mind whenever anyone raises an objection.

Some people take it a step further and switch back and forth between being too stubborn and being not stubborn enough. One example I've seen a few times is a stubborn belief that "the investors are always right." So whatever it is that the investors want, you'll give them... even if they change their mind day after day. So you're left spinning in circles because of your persistent belief in someone else's idea. Other popular variations of this are "the customer is always right" and, most annoyingly for me at the moment, "the CEO is always right."

So my suggestion: believe in yourself. And then don't be afraid to let other people try to change your mind.

Trust and Respect

On another note, I was thinking about what would be the ideal company culture. This goes back to my earlier notes on membership control for working in a group.

I think a culture based on two primary concepts would be nice: trust and respect. It works like this:

First, you need to trust that people will be honest and upstanding (if you're paranoid, you can check their backgrounds before you admit them to the group). Since you trust them, you don't need annoying controls to make sure they're not trying to screw you, because you're not worried about that. If anyone ever gives evidence that they can't be trusted, remove them immediately; there's nothing more deadly to a social environment than a person who can't be trusted.

Second, you only admit people to the group whom you respect. That is, if you don't think they can do a good job or think clearly or impress customers or whatever, then you don't want them in your group. (In his book Blink, Malcolm Gladwell discusses a researcher who has discovered the #1 indicator of marriage breakups: a spouse who obviously looks down on the other.)

These two factors work together to produce interesting results. For example, if you respect someone, you will automatically become more trustworthy; would you screw over someone that you respect? Would you slack off at work, if it would let them down or hurt the project they're working on? Probably not. And that means they can trust you even more.

Meanwhile, because you respect them and so they trust you, you're actually working harder and producing more. That makes it easier for other people to respect you. And since they do, they won't want to screw you over, so they'll be more trustworthy and work harder. And so on.

This system covers most of the other factors in building a group as special cases. For example, if you're not a good listener, you obviously aren't giving the people the respect they deserve when they talk to you. And if you're not stubborn - if you just keep changing direction randomly - then eventually people will realize that you can't be trusted, because your opinion is based on whichever way the wind is blowing at the moment.

And about that Smart and Gets Things Done rule? Well, if you're not smart, that's going to show up pretty fast, nobody will respect you, and you're out. If you don't get things done, then you can't be trusted in an emergency, and you're out.

This system even deals with interpersonal compatibility in an interesting way: different people respect other people for different reasons. For example, a group of musicians might highly respect someone with a lot of musical talent, regardless of their table manners; a group of aristocrats probably wouldn't. (They can buy musicians for a dime a dozen, but it takes money to become an aristocrat!) This is interesting because the groups are probably fundamentally incompatible. If each group selects its members based on the trust/respect rule, things should rapidly work themselves out into two separate groups.

And so on. I like this rule because it makes the "membership control" system easy. I wouldn't feel the least bit guilty about kicking someone out of my company if I don't respect their abilities or if they can't be trusted. And yet those two things cover all the other important things.

Good to Great

I just finished reading the book Good to Great. It's interesting and makes many believable points, but it does scare me a bit with its selection bias. They tried really hard to avoid sampling errors, but didn't quite make it: the problem is that highly risky behaviours are correlated with both huge success and huge failure, while highly unrisky behaviours are correlated with mediocrity. So when you compare a set of "great" companies with a set of "merely good" companies, you will definitely find some statistically significant differences. Those differences might very well be absolutely critical to becoming great. However, they might also greatly increase your chances of killing yourself.
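
To make that concrete, here's a toy simulation of the worry - every probability below is invented, and it says nothing about the book's actual data; it just shows how comparing survivors can flatter risky behaviour:

    # Toy simulation: risky behaviour raises the odds of both greatness and death.
    # All numbers are made up for illustration.
    import random
    random.seed(1)

    def outcome(risky):
        r = random.random()
        if risky:
            return "dead" if r < 0.5 else ("great" if r < 0.6 else "good")
        return "dead" if r < 0.05 else ("great" if r < 0.07 else "good")

    buckets = {"great": [], "good": [], "dead": []}
    for _ in range(10000):
        risky = random.random() < 0.5
        buckets[outcome(risky)].append(risky)

    for name in ("great", "good"):
        group = buckets[name]
        print(name, "companies that behaved riskily:",
              round(100 * sum(group) / len(group)), "%")
    # The "great" bucket is dominated by the risky companies, so a study of
    # survivors concludes risk is what made them great - while the pile of
    # risky companies in the "dead" bucket never gets compared to anything.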

To name just a couple of examples, the "Level 5 leadership" concept makes perfect sense in terms of growing a company for the long term. But if you're "almost" a level 5 leader, then maybe you're just a quiet person with a vision and nobody really listens to you and so your company goes in random directions. Or the "hedgehog concept" - do one thing, and do it well. This is great, as long as it's the right thing. The book even gives some good advice on how to choose the right thing. But what if it's not quite the right thing? Then you're screwed. Massive diversification may prevent greatness, but at least you can hedge your bets.

Beware of advice that seems too good to be true.

2006-07-13 »

Optimizing for Productivity

I was reading in the aforementioned Good to Great about different "great" companies and their way of reducing their vision to a single "hedgehog concept," which has various useful boring attributes that I won't go into right now, as well as one more that I will: an optimization goal, which they call the "key denominator." Basically, to be maximally financially successful, a company needs to optimize its operations for the best ratio of profit to something.

Some companies use things like "most profit per store" or "most profit per product line." They talked about a bank in the U.S., which optimized its operations for "most profit per employee" when banks became deregulated. The most obvious way to do this is to reduce the number of employees and/or otherwise chop your HR costs, but in a world where banks are suddenly not protected monopolies anymore, that might be the only thing you can do.

That particular optimization goal got me thinking. I don't really like big companies very much. Once they get too big, they get annoying because it's hard to keep track of everyone and there are always some stupid people. So optimizing a new company in terms of profit per employee - which isn't nearly so painful in a company with no initial employees as it is when you'd have to lay off a bunch of people - might be quite entertaining.

And then I thought, wait, that's not quite it. You can increase profit per employee by simply making your employees work longer hours and paying them less, which isn't really what I want. Okay, so I work really long hours, but that's just me, and it's still sort of cheating; it makes me look more productive but I'm only producing more, not producing more per hour. If I was really smart, I would be able to produce more per hour than the next guy. If I also work more hours, well, great, but that's just a bonus.
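
In made-up numbers, the difference between the two ratios looks like this (nothing here is real data, it's just the arithmetic):

    # Invented numbers: the same people, just working longer weeks.
    profit_per_hour = 50
    employees = 10

    for hours_per_week in (40, 60):
        profit = profit_per_hour * hours_per_week * employees
        print(hours_per_week, "h/week:",
              "profit per employee =", profit / employees,
              "| profit per man-hour =", profit / (hours_per_week * employees))
    # 40 h/week: per employee = 2000.0, per man-hour = 50.0
    # 60 h/week: per employee = 3000.0, per man-hour = 50.0
    # Longer hours inflate profit per employee; profit per man-hour only moves
    # if people actually produce more per hour.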

And so that's it. We want the smartest, most productive people around, and the people we get, we want to make even smarter and more productive. So why not maximize profit per man-hour? Why not do it all the way across the company, and why not help your customers do it too?

In other words, laziness, impatience, and hubris. It's not just for programmers anymore!

2006-07-14 »

Mouse Angles

prozac writes and Chicago responds about popup menus and the annoying "90 degree problem" with navigating submenus.

This is a good time to mention that Apple, being a bunch of freaking geniuses, actually solved the problem long ago in MacOS. This was demonstrated to me about 6 years ago on MacOS 9, so hopefully the behaviour is retained on MacOS X.

Whether the submenu pops up or disappears depends on which direction and how fast you move the mouse. Basically, if the motion is mostly down and to the right, it sticks with the currently popped-up submenu, even if you mouse over another item, because it assumes you're trying to reach the submenu (you're moving to the right, after all!); if the motion is mostly up or to the left, it instantly pops up the menu for the item you're currently over, because you're obviously not heading toward the current submenu. And there are never any arbitrary delays.

It's really hard to explain, but it's total genius. The "wait for half a second arbitrarily" technique and the "never delay" technique are massively inferior to this. I have a feeling Apple patented this technique. I wonder if it's expired yet?
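
Since it's hard to explain in prose, here's a rough sketch of how I imagine the decision works - this is my reconstruction, not Apple's actual code, and it ignores the speed part of the heuristic:

    # Screen coordinates: dx > 0 means the pointer just moved right, dy > 0 means down.
    # Assume the open submenu hangs down and to the right of the highlighted item.
    def keep_open_submenu(dx, dy):
        """True: leave the current submenu open even while the pointer crosses
        other parent items. False: switch to the item under the pointer, instantly."""
        moving_right = dx > 0
        not_moving_up = dy >= 0
        return moving_right and not_moving_up

    # On every mouse-move over the parent menu:
    #   if keep_open_submenu(dx, dy): do nothing (the user is aiming for the submenu)
    #   else: pop up the submenu of the item under the pointer, with no delay at all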

Also, when you underline "Gaggle" in MacOS, the gg isn't underlined, because it has descenders - but the underline stops a little bit into the gg rather than at the letter boundary, so it looks perfect, and you can't fake it on other OSes by just asking to underline "Ga" and "le".

That is all.

2006-07-20 »

Brooks' Singularity

In The Mythical Man-Month, Fred Brooks explains that the number of communication paths in a project team is n(n-1)/2, where n is the number of people on the team. Eventually, the communications effort begins to overshadow the actual productive effort. This is the primary factor limiting team sizes, which is why people start breaking into multiple teams, hierarchies, etc. beyond a certain size - hierarchies may not be all that efficient, but at least they reduce the need for huge amounts of communication.

Now, this efficiency decrease is some kind of decaying function. If I add 5 people to a 5-person project, it'll be pretty hard on efficiency. If I add 5 people to a 100-person project, I probably won't notice the difference. People tend to focus on the 100-person part of the curve (Brooks did, since he was talking about a massive project at IBM), which is why they summarize Brooks' Law as "Adding manpower to a late software project makes it later."
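
The arithmetic behind that intuition, as a quick sketch:

    # Communication paths for a team of n people, per Brooks: n(n-1)/2.
    def paths(n):
        return n * (n - 1) // 2

    for before in (5, 100):
        after = before + 5
        print("%d -> %d people: %d existing paths, %d new ones"
              % (before, after, paths(before), paths(after) - paths(before)))
    # 5 -> 10 people: 10 existing paths, 35 new ones (overhead more than quadruples)
    # 100 -> 105 people: 4950 existing paths, 510 new ones (about a 10% increase)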

The converse, of course, is not necessarily true: removing manpower from a late software project doesn't necessarily make it earlier. Wouldn't that be nice!

But what if you tried? Eventually you would get back to the early part of the curve, where n is some small number. What if n=1: you're working all by yourself? Then n(n-1)/2 = 0, with zero communications effort! Awesome! I can get infinite work done in zero time!

Well, no, of course not. When n is large, communications overhead becomes the controlling factor in your overall efficiency. But if you push the simplified formula to the point where communication drops to zero, it gives you a "singularity": a place where predicted productivity explodes, because you're dividing by zero.

In real life, as the number of people on the project decreases, effects other than communications become the limiting factors. This means that if you're going to try to produce more work with fewer people than anyone else, you will have different problems than everyone else. (Of course, this makes it an inherently interesting question.)

So what are those limiting factors? Well, as a person who has worked on quite a few solo projects, I can give the top two that I've found: first, working totally alone, there's no chance for mutual motivation. Second, you run a greater risk of groupthink effects, where you don't have enough diverse opinions and you might end up going on terrible, inefficient tangents and failing to find anything like an optimal solution. For example, a one-man company run by a programmer will probably lack any business sense, and thus fail to make money even if the programmer is a genius.

What's interesting is that nowadays, smaller teams can produce much more than ever before. That's because something is already mitigating the most serious of these effects. Motivation? Well, if more people can do what they love, motivation is easier. Groupthink? Well, the Internet. QED.

I wonder how far you can take that model?

Side Note

The technological singularity can be expected to follow a similar pattern to what I called "Brooks' Singularity" above. Eventually, we will find out that our formula isn't complex enough and there is a speed-limiting factor on technological progress. Believe it or not, the time between "paradigm shifts" can't actually shrink linearly through zero until it takes negative time to do a shift.

Completely Offtopic

Q: So, how are you feeling about everything that's been happening lately?
A: Um... resigned?

2006-07-21 »

Grand Old Unified Relational Documentation

I'm pleased that what is probably my last major innovation on behalf of NITI, the new Knowledgebase system codenamed GourD, has finally been opened to the public. Even though the code was rewritten almost from scratch after I last had my dirty fingers in it. Thank chrisk for that - it's much better now.

But I still claim credit for the idea, and even if I don't, it's a cool idea anyhow. Among the features of the new system are:

  • It's based on mediawiki, so its UI is designed for collaboration.
  • We greatly improved the search engine, whose ranking and summarization algorithms are better than any other knowledgebase I've seen (and a major improvement over standard mediawiki, too).
  • It has our whole searchable user manual in it, not just individual knowledgebase articles.
  • It has a really neat AJAX "rate this article" system that's actually slightly less obnoxious than other such systems.
  • You can subscribe to articles or whole categories, and get a nightly notification of which articles have been newly published or changed in a non-minor way: this is less confusing for newbies than RecentChanges type pages.
  • It automatically produces a remarkably accurate and useful "similar articles" list based on a variant of my document correlation algorithm that I wrote about so long ago (a rough sketch of the general idea follows this list).
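
About that last item: I won't try to reproduce the real algorithm here, but the general flavour of a "similar articles" feature is something like this - score every other article by the overlap of its word-frequency vector with the current one, then list the top scorers:

    # A generic sketch of "similar articles" via cosine similarity of word counts.
    # This is the general idea only, not GourD's actual algorithm.
    import math
    from collections import Counter

    def similarity(text_a, text_b):
        a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(a[w] * b[w] for w in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def similar_articles(current_text, others, n=5):
        # others: {title: text}. Returns the n most similar titles.
        ranked = sorted(others, key=lambda t: similarity(current_text, others[t]),
                        reverse=True)
        return ranked[:n]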

But my favourite part is really:

Separation of style, content, and... structure?

So everybody knows that the big glamorous thing nowadays is HTML4, XML, CSS, and so on, the point of all of which is to be able to separate "style" from "content." That is, someone correctly noticed that most writers are poor layout artists and vice versa, so why not do the two jobs separately, instead of what computers normally encourage you to do, which is spend more time fiddling with fonts in Word than actually writing?

Okay, so that's sort of worked out for some people, and we get a bit closer to the ideal every time HTML makes another evolutionary step.

But anyway, what if we took it one step further? What if we split out the high-level structure of your documentation, and made it somebody else's job?

What? Okay, this sounds a bit confusing. But think of it this way. The world didn't just lose layout experts when it started forcing writers to format their own documents; the world is also losing professional copy editors as more and more people can publish their work directly and have it look half-decent. That means more and more documents are... kind of crappy.

Think of a big 500-page technical manual (our product has one, for instance). Nowadays, those 500 pages won't be written by a single person; not at all. They're written over the course of many years by several different people, where some of those people are actually joining and leaving the company while the book is evolving. The result? Inconsistent drivel, like almost all modern technical documents have become.

The rare exceptions are documents that are written quickly enough, by a single person with a good sense of style, that they're actually good. Except they rapidly get outdated and can never be very long (ie. complete and thorough) because both of those preclude having a single writer enforce his/her artistic integrity; artistic integrity requires a pass over the entire document, by a single person, in a relatively short time.

Incidentally, all this is why books like the "for dummies" series are so popular. They're written by one or two authors in a relatively short time, edited by professional copy editors, laid out by professional layout artists, and rewritten from scratch every few editions. That produces (comparatively, at least) high quality work that really beats what most companies produce.

So what I observed from all this was simple: most KB systems already abstract style from content; generally, you enter plaintext and they format it like a KB article automatically. Good, so your layout artist isn't the same as your tech writer.

But I've never seen a technical documentation system that separates content from structure. That is, there are generally a variety of writers - which is mostly unavoidable in a modern long-term development project, unless you want to rewrite from scratch all the time, which is horrendously inefficient. But there is no editor. So the writers tend to each contribute a new chunk of prose (if you're lucky, it's a whole chapter), and then some unlucky soul - possibly one of your writers - gets to tack it into the big unwieldy book somewhere, until the big unwieldy book gets so big and unwieldy that nobody could possibly read it from cover to cover, then nobody even reads it at all, then you stop including printed copies with your product and just give them a pdf, and then you kind of stop shipping the pdf and just leave it on your web site, and then eventually people don't read documentation at all and resort to forums and knowledgebase searches.

Sound familiar?

Well, GourD has the solution. Documentation is written in convenient, bite-sized chunks, by anyone who wants to contribute it. These start off life as searchable, categorized knowledgebase articles with various writing styles. But then our "artistic integrator" (currently apm) can go through and clean up any stylistic inconsistencies between articles - at least in the articles that matter.

She then has the ability to create "books" out of a collection of similarly-styled articles. Where necessary, she can edit articles to make them read better in the sequence of the book. When the articles are not in book form, these changes don't disrupt anything because any confusion can be resolved by just hyperlinking terminology from one article to another or using the "similar pages" or "categories" features; welcome to the web.

The "structure" that the artistic integrator adds is through the process of weaving together a collection of articles. Imagine you had a big bucket of pearls collected over a period of time. To make a good necklace, someone has to choose pearls of similar size and shape and assemble them in the appropriate sequence, maybe polishing a few here and there. The result is beautiful, but not because the person who made the necklace spent dozens of clam-years growing pearls; no, the pearls were produced by someone else, and the final beauty comes just from combining them the right way.

You can produce a series of 100-page books on useful topics just by mixing and matching and polishing GourD articles (it's much more than just a "knowledgebase" when used like this) - and you can do it very quickly, and the artistic integrator doesn't have to write them all from scratch, so it's highly efficient and highly parallelizable.

And your books will be better than any of your competitors'.

Wouldn't this be a great way to write documentation for Open Source projects?

2006-07-28 »

Dialogue

Random Person: Hey, what's in the garbage bag?

Avery: Stuffed corporate bunnies to accompany my accounting poetry.

Random Person: ...oh.

2006-07-29 »

REaction Engine

I've realized that proactivity is just reactivity with negative latency.

What? Okay, first of all, "be proactive" is sometimes known as common management blabber, and being a one-time manager myself, I'm allowed to use it. But used correctly, "proactive" is actually a useful word. It means solving problems before they happen.

And reactivity is solving problems after they happen. Aha. So proactivity is just reactivity where the effect happens before the cause: negative latency.

Easy, right? Well, no, because of some something or other with the space-time continuum, you can't actually do that. But negative latency has been successfully employed in the past.

Here's what I'm thinking. If you operate in a normal "reactive" mode, your simple goal is to react as quickly and accurately as possible. Having given up on preventing problems, you want them to at least go away, if at all possible, before too many people notice. This is a good way to run a startup company, where you can't possibly expect to prevent all problems at first; there are just too many of them to start with.

The better you get at being reactive, the faster you can solve the problem, and latency approaches zero.

Imagine your boat hits an iceberg and gets a hole in it. You want to start bailing water as soon as possible. If you notice that the hole is too big for that to work, you'll need to start evacuating the ship, and so on. But the better you react, the fewer people drown.

As latency approaches zero, the amount of work required grows without bound; latency can never really reach zero using a reactive method, but it can come infinitely close with an infinite amount of work. This is a singularity.

Now what about that negative latency? Well, it requires some different techniques, and creative thinking, versus positive latency. Since you can't run time backwards, you have to instead be living in the future, reacting now to what you expect to happen, say, 10 minutes from now. Living in the future is impossible, of course, but you can try your best to predict what it will be like. And predictions are about probability. This is the only real difference between being reactive and being proactive.

Many people don't realize this, and think of the two concepts as requiring a total change in technique: "proactively prevent all problems." This leads people to overanalyze the situation, taking only extremely safe, risk-free approaches that will never have problems - but will also be the slowest possible way to get there. This is sort of like sending all the bytes from all the files just in case the TFTP client might want them someday.

In fact, just like the current problems that you react to, each future problem has a cost associated with its occurrence, and increasing costs associated with not solving the problem once it occurs. Reactivity helps you avoid that increasing cost; proactivity removes the fixed initial cost. But while reactivity has an easy-to-calculate cost-to-benefit ratio (you know the cost of not solving the problem, you know the problem has occurred, and you know how much it'll cost you to solve it), proactivity has an extra element to it: risk. Some potential problems, even though they might occur, won't. For those ones, avoiding them is purely a cost and has no benefit.
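
As a toy example of that trade-off (every number here is invented):

    p = 0.2            # chance the problem ever happens
    prevent_cost = 10  # fixed cost of preventing it now, paid whether or not it would have happened
    occur_cost = 30    # cost the moment it happens
    growth = 5         # extra cost per hour it goes unsolved
    latency = 2        # hours it takes us to react

    proactive_cost = prevent_cost
    reactive_cost = p * (occur_cost + growth * latency)
    print(proactive_cost, reactive_cost)   # 10 vs 8.0: here, reacting is cheaper
    # Push p up to 0.5 and reacting costs 20, so prevention wins. The game is
    # estimating p and the growth rate better than the next guy.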

So as with all negative latency problems, probability and intelligence are key. But you can outdo anybody who is merely reactive or naively proactive, just by thinking harder and more carefully than they do.

