2007-01-01 »
NITI in Retrospect
In the last little while I've been working at setting up a new company, having found myself in the odd situation of having easy access to financing even before I had an idea that needed financing. While convenient, that means I have a lot of responsibility, so I've been very careful not to take advantage of it until I know for sure what I want to do. Thus, I've spent the last few months trying to figure out what exactly that is.
During that process I've spent quite a bit of time discussing various options and theories and philosophies with a bunch of different people, and especially thinking back to the early days of NITI: what did I do last time I started a company? How did I know what was right or wrong? What mistakes did I make? What mistakes didn't I make? And what changed as NITI became more mature, so that what seems perfectly natural now in a mature company is completely the wrong choice when you're starting again fresh? The past seems like a good place to start. So I've decided I'll do a series of articles about decisions I remember from the early days of NITI, and their outcomes. I think you might find them interesting, and in any case I'm going to get a lot for myself out of writing them down.
True Love
Let's start with something relatively recent. In 2006, I went to the IBM Lotusphere conference to investigate and discuss the possibility of what is now Nitix for Domino. When I came back, I did a presentation to our developers that I titled "True Love," in which I described something important I learned at that conference.
At Lotusphere, there were two kinds of products being pushed by IBM. First, there was Lotus Notes/Domino and its addons. Second, there was the "IBM Workplace" series of applications. The former group had religiously addicted, passionate users who understood both technology and business needs; the latter had boring presentations and had been adopted by businesspeople who didn't know anything about technology. Incidentally, and you can quote me on this, the IBM Workplace technologies all suck rather massively. Lotus Domino, in stark contrast, only apparently sucks.
And this was my big revelation. Lotus Notes/Domino is rather famous among techies like me for having a gratuitously horrible user interface, for being slow and hard to install, and so on. All these things are true. But they have lots and lots of passionate users anyway who are in love with the product. Why? Because the core of Domino is so beautiful; I don't have the space to explain in detail right here, but at that conference I started to understand what that meant. The people who love Domino love it for very good reasons. When you use it to solve your weird business problems, it just works in ways that no other existing product does.
At the risk of sounding mushy, that revelation goes a lot deeper than my experiences with a random software product. What it taught me is the real difference between liking something, and really liking something, and being truly in love with something. I realized that being "truly in love" is a feeling that's not only reserved for that one special person in your life. But even when it's applied to a product, or concept, or company, it's exactly the same feeling. So in a slightly absurd way, Lotusphere 2006 is where I learned the meaning of true love.
By this point in the presentation I suppose I had alienated pretty much my entire audience, so I made a few jokes and drew diagrams with coloured balls and tried to make some kind of point about making our customers fall in love with our product. But the really important conclusion that I was drawing is that I had only ever been in love once before in my life, and it wasn't with a person. I was in love with my own life's work of more than eight years, namely NITI itself.
As I post the other "NITI in Retrospect" articles I plan to write, it won't sound exactly like a love story. But it is.
2007-01-03 »
NITI in Retrospect: The Weaver 1.0 Manual
Ah, the irony. Just a day after I told sfllaw that I'm too lazy to put any pictures in this journal, I spent about half an hour uploading some image scans (thanks, dad!).
Collection: Weaver Manual
My dad has what I believe to be the only traceable copy of the Weaver 1.0 manual currently in existence. Weaver, for those following the story, is the product-name-turned-codename of the early versions of Nitix.
Manuals That Don't Suck
Not so long ago I wrote about GourD, NITI's new system for generating quality documentation in a hopefully scalable way. I talked about the idea of an artistic integrator: someone who has been through the entire thing, from cover to cover, and made sure it's consistent and flows in a reasonable order.
Back then, that integrator was me. The content was co-written by dcoombs and myself in about a week, and then I went over it for stylistic consistency. The result? One of our earliest customers actually said that "it was so interesting that I ended up reading it cover to cover." So did the people making the (now defunct) Corel Netwinder: it earned me a contract to write the manual for their Netwinder OfficeServer. That contract, in turn, was the turning point in our tiny little startup: the payment on completion made us profitable for the very first time (including founders' salaries).
Other things of interest in the Weaver 1.0 manual: the pages were 5.5x8.5 inches (exactly half an 8.5x11 sheet, not coincidentally), which makes it look less scary than a full-sized 8.5x11 manual. The binding was cerlox, not "true" glue binding, because cerlox lies flat on a table as any reference manual should. And there were fewer than 100 pages, because nobody wants to read a manual longer than 100 pages. (Of course, Weaver 1.0 had almost no features.)
Click the image to see a few page scans. My favourite is the Tunnel Vision diagram, from back when tunnelv was a hub-and-spoke system.
Blank Page Poetry
One of the great things about treating your manual as an art form is that you get to be an artist. When I get to be an artist, apparently I'm silly.
This particular art form was inspired by all the wasted space in manuals on "this page intentionally left blank" pages. Wasted space?! We're running a startup company on a shoestring budget, here, we can't be wasting space! So we didn't. Here's the progression of intentionally blank pages as you weave your way through the manual.
-
This page intentionally left blank.
---
This page intentionally not blank.
---
Blankliness is next to godliness.
---
There once was a page that was blank;
Some people thought 'twas a prank.
They read it out loud
In front of a crowd
And everyone thought that it stank. (--dcoombs)
---
A blank page is not to be read;
It should be skipped over briefly instead.
If you follow this rule,
You'll not be a fool,
Nor confusedly scratching your head. (--me)
---
We made this page blank not for spite;
Nor even to give you a fright.
They're publishing woes,
For everyone knows
That chapters must start on the right. (--me, or maybe dcoombs)
---
A blank page causes despair;
Something's supposed to be there.
Did the printer break down?
Did the ink jets leave town?
We were hoping that no one would care. (--my mom)
---
Please send useless filler to
poetry@worldvisions.ca
(The poetry@ address is long dead. You can still send useless filler directly to me, if you want. And people do.)
And the moral of the story is, um, dcoombs writes better limericks than me.
Bonus: Deleted Netwinder Jokes
Incidentally, the aforementioned Netwinder guys changed almost nothing about the manual I wrote for them except for two jokes, which I will record here for reference since they're now too bankrupt to sue me.
First:
- (Context: I'm talking about using email aliases as mailing lists.
We've added several names, including Al Gore, who back in 1999 had recently
invented the internet, to a "smart-people@example.com" mailing list.)
You can edit existing aliases in a similar way. For example, if
you realize that Al Gore isn't smart after all, you can remove him
from the smart-people alias by clicking the word Edit next to
smart-people in the alias list. Edit the list of
destinations, changing it to just einstein newton, and press OK.
What did they change? They replaced "Al Gore" in my example with "Mickey Mouse." Mickey Mouse is widely agreed not to be smart, but that's not the point. Apparently insulting what would turn out to be about 48% of the American voting districts (more than 50% of the popular vote) was not expected to be good for business. Wimps.
Second:
- (Context: Here, I'm explaining the concept of virtual web domains.)
Consider what happens when you phone the cable company to request a
service. You'll be sent to a voice mail system that asks you to
push one button for sales, another button for service, another
button to check your current billing information, and so on. Now,
you and I both know that the cable company only employs one person
to answer the phone calls during normal business hours (1 PM to 3
PM on alternate Tuesdays). But because you've gone through all the
voice mail menus and your selections show up on his computer screen,
the representative can answer the phone differently based on your
requests. For example, he might choose to say "Hi, you've reached
the sales department, please hold," or "Hi, you've reached the
service department, please hold," as appropriate.
They removed this paragraph completely. Apparently insulting all the cable companies in the world at once was a bit much for them, since they were hoping to license their product through cable companies (sound familiar?). Wimps.
2007-01-04 »
Canada vs. U.S. Venture Capital
Steven Bloom at BrightSpark has lots of interesting things to say about the Canadian VC market.
BarCamp Montreal
Did you see the Montreal Gazette article about it?
2007-01-05 »
NITI in Retrospect: Fish
At some point during the early development of weaver, dcoombs gave me some sage advice:
When you're buying frozen, breaded fish, you should only buy the fish in a box that shows the fillet sliced open so you can see the fish inside, not just the breading. Any fish vendor that won't show you a photo of the fish itself is hiding something.
And this advice works perfectly every time: frozen breaded fish that only shows the breading on the box is universally gross; frozen breaded fish that shows the insides is universally tasty (although some brands are tastier than others, of course).
I'm sure you can think of numerous ways to tie this advice back to starting a business, so I won't bore you with the details. Remember, the fish is a symbol. Of some sort.
2007-01-06 »
NITI in Retrospect: Don't need to advertise for employees
For most big companies, like Google or Microsoft or Amazon or Apple, all your hiring problems come down to finding good enough people as fast as you want to hire them. It's hard work! Some of them hire hundreds of programmers a month. They have lots of clever ways of sorting through that, and it's surprisingly effective. Meanwhile, oddly, much smaller companies have just as much trouble, or worse, hiring just one person every month or two or longer. That's because everybody at least recognizes Amazon and Google; if you're a student looking for a job after you graduate, you probably apply to those two by default, because you probably even know someone who works there who'll get a bonus if you get hired. But with a tiny company, it's unlikely that you know anyone there, and it's unlikely you've heard of them yourself, so unless they're really special, you simply won't think to apply.
In my youthful naivety, I never really considered that this might be a problem for a tiny startup like NITI, and miraculously, it wasn't. Some of the best people available realized they wanted to work at NITI, and the best people know the other best people, and they tell their friends. That's the point of this post. The other point of this post is that as time went on, it became a problem, because NITI became a less desirable working environment. The easy supply of great workers dried up, because the best people didn't want to work at NITI, and the best people know the other best people, and they tell their friends.
Ironically, I wouldn't even have known I was doing something right if we hadn't then proceeded to do it wrong and learn the consequences.
Here's my advice: if you're a small company, you should never need to advertise on monster.ca. This is not to say that you can't find good employees on a job site, but you're counting on luck, and worse, the luck is biased against you. The best people go work where their friends, the other best people, tell them to go work. They don't need the job sites. If you go there, you'll only find the people who do.
How do you avoid needing to advertise?
- Rely on word-of-mouth.
- Hire co-op students: they have a huge word-of-mouth network, they're cheap, and you get rapid, honest, continuous feedback about your working environment.
- Hire at a steady pace, not in bursts. If you need to hire a bunch of people all at once for a specific project, you're doing it wrong.
- Be the kind of place where the best people are so proud to work that they brag to their friends.
- Be the kind of place where people would be ashamed to recommend their friends to you if their friends aren't awesome enough.
- Take recommendations from employees extremely seriously compared to outside applicants.
I thought of giving more specific advice, but that misses the point. I can't tell you how to make your company a place where the best people are proud to work; I can't even tell you who the "best" people are, because it's pretty subjective. But I know that a certain kind of people loved to work for us, and a certain kind of people hated it. I'm not sorry about the latter group; they didn't fit, and everyone knew it, and a self-selecting environment just makes the best people that much more proud to work with you.
As for me, I just did what felt right. I built an environment where I would want to work, and hoped that awesome people are sufficiently like me. That technique works pretty well. If you're not awesome, you probably shouldn't be starting a company anyway.
Disclaimer: This essay is about the NITI-Montreal office, which is now closed. I can't speak for the experiences of the NITI-Toronto office. It seems they aren't having too much trouble finding employees, which is good news, but they're a different kind of employees than we had in Montreal. Better or worse? I'm not qualified to say.
2007-01-07 »
Goodbye Baby Boomers
The Globe and Mail predicts that the Canadian workplace will be totally different in 5-10 years because the baby boomer generation is retiring. Apparently something like 60% of the workforce at most major companies is supposed to retire in that time. That could explain why Ontario recently repealed the mandatory retirement rule at age 65. Sadly, that probably won't help much, as the average retirement age in Canada has been falling fast.
Anyway, back to the Globe and Mail article. Their theory is that employers will have to get a lot more competitive, since there will be fewer people available and a lot of new job positions opening up. But that's only half the story. The other half is that product demand will also be changing dramatically because the same people won't be buying the usual stuff. And a handy rule of thumb for that "change" will be this: if you're the kind of company who's retiring a huge boatload of employees, you're probably also the kind of company whose market is about to decline.
So I'm not exactly sure there's going to be hiring desperation to match the retirement, at least not in every case. What there will be, with a reduced workforce size, is an inclination to do more work with fewer people, like I wrote about before. This makes me feel like I have the right idea. Moreover, a company that can help other companies do more work with fewer people could have very good chances, because most companies won't have a clue where to start.
This is interesting to me because the current trend in business software is to solve problems with the brute force of huge development teams, so companies who try to do more with less are perhaps just throwing away the opportunity to become huge and mega-rich. But as time goes on, the brute force approach will be less and less viable.
2007-01-08 »
Arcnet does nuclear physics
Apparently they're switching the photomultiplier tubes in the Super-Kamiokande project from old, obsolete Sun VME-based Arcnet to that newfangled Linux Arcnet thing, which is, I hasten to point out, still not dead.
It would seem that I wrote the first version of that driver about 13 years ago. Egads. Incidentally, did you know that it had poetry too? I sense a theme.
2007-01-09 »
Avery on new-age religion
"Avery, you're a programmer about everything," said one of my friends a few days ago.
Okay, I admit it, I'm an addict. Here I go again.
One of the big science vs. religion debates is about whether we actually have souls, or whether our brains are just big sacs of chemicals and electrical impulses. Today I realized that the question is really an easy one.
Of course our brains are sacs of chemicals and impulses. Which is like saying that our computers are just boxes of sand with electrons bouncing around. It's true, but that doesn't mean the software doesn't matter, even though it's just made up of those electrons.
Software is like your soul. It's there, made up of electrons, but it has an existence that matters much more than the individual electrons do. It can even move around and multiply, conforming other boxes filled with electrons to match its own structure. And its pattern was created, put there for the first time, by a power greater than itself.
Where does your soul go when you die?
That's simple too. Just because one box of electrons dies, the software doesn't disappear. Its pattern has had many effects on many other boxes, and the resulting patterns are just as important as the original program, and just as influenced by its creator. Software might change, but once it becomes part of the network, it's never really gone.
2007-01-10 »
Roaming Profiles
It's not my imagination: Roaming Profiles in Windows really are nearly useless. The article is pretty depressing in itself, but read through the comments in response.
2007-01-11 »
Variable Declarations in C# 3.0
I've never seen this kind of variable declaration before. It's statically typed, but you don't have to declare the type. In other words, it's perfect.
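For anyone who hasn't seen it: TypeScript has the same kind of inference, so here's a minimal sketch of what "statically typed, but you don't declare the type" means. This is illustrative TypeScript, not the C# syntax itself, and all the names are mine.

```typescript
// C# 3.0's "var" gives you a statically typed variable whose type is
// inferred from its initializer. TypeScript's "let" and "const" behave
// the same way.
let count = 42;         // inferred as number, with no annotation written
let product = "weaver"; // inferred as string

// count = "oops";      // compile-time error: string is not assignable to number

// The inferred type flows through the rest of the program like a declared one:
function double(n: number): number {
  return n * 2;
}
const doubled = double(count); // type-checks because count is statically a number
```

The point is that `count` is exactly as statically typed as if you'd written `let count: number = 42`; the compiler just fills in the annotation for you.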
2007-01-12 »
NITI in Retrospect: Job Titles and Roles
Part of the fun of working in NITI-Montreal in the earlier days was the flexibility: everybody did a little bit of everything. If you're in a startup company, that's the way it has to be. For the right kind of person, it's more fun anyway.
At first, every single developer who worked in our Montreal office was known as a "Human Cannonball." They all did different things; some people are better architects, some are better coders, some are better debuggers or testers or spec writers or infrastructure specialists. But they all had the same title. That made people feel like peers.
There was a job description that went with that title. It made being a Human Cannonball sound hard, which it was. And so, by implication, if you managed to become one, you must be awesome. Different people are awesome at different things, but the point was, you were awesome, because you wouldn't be there if you weren't. That made people feel proud of themselves and of their peers.
We used the same title and job description when hiring co-op students. I've never seen results like that before in a job posting: the applicants were incredibly self-selecting. Where normally you might choose to interview 10-20% of the resumes for a particular job - if you're lucky - with Human Cannonball, it was sometimes worth interviewing up to half of them. Why? Because people don't want to look like idiots in an interview, so if the job looks like it's way over their head, they simply won't apply. Combine that with past co-op students who go back to school and say how great the company is and that we have very high standards, and the less awesome applicants simply go elsewhere. The most common failure mode for this job description? People who thought they were awesome, but frankly, weren't. But those people are pretty easy for sufficiently awesome interviewers to detect.
But one important feature for all of this was that the job title didn't make any sense. That wasn't just fun and games; again, in my naivety, I suspected it might have been, and I'm not opposed to fun and games. But there's more to it than that. Meaningless job titles prevent people from jumping to conclusions. If you advertise for a "Programmer", you'll get people who assume their first-year university Java skills are sufficient, and they drop their resume in the slot. If you advertise for a "Human Cannonball," people have to stop and think.
As time went on, we introduced a new concept on top of this: roles. Roles were temporary, didn't change your job title or salary or peer ranking, and Human Cannonballs would shift in and out of multiple roles as time permitted. We used a mix of meaningless and meaningful names for the roles: FeaturePusher, ReleasePusher, HumanBalance, Architect, and so on. Exactly what these roles entailed isn't too important right now, but what's important is that the fuzzy role names helped people not to make assumptions about what the role entailed. For example, a "FeaturePusher" was something like a Project Manager. But Project Managers have a stigma attached to them; they're often not programmers, yet they're somehow put in charge of programmers; they get paid more; they hire/fire people; they do employee evaluations. None of those things were what we wanted. Some of those jobs belonged to the HumanBalance (which is sort of like an HR manager, except not). The FeaturePusher did indeed "manage" a "project", but that's entirely different from managing people. Changing the title removed the assumptions, and helped people to think more flexibly.
All that was how we did things in Montreal. Nowadays at NITI, people have job titles (which encompass roles) like Project Manager, Team Lead, QA Manager, and so on, with predictable results: they're afraid to take on roles outside of their job title, they take on roles they're unqualified for because of their job title, and so on. Hopefully, they rise to the challenge.
Epilogue
For my next company, I'm thinking of balancing the job titles a bit better between silly and boring (some people are just embarrassed to have a silly job title, and that's fine), but I don't want to lose the lack of clarity in the process. I'm kind of inspired by the research labs where everyone is an "Associate" or just a "Programmer" or sometimes literally "Just a Programmer." The advantage of these is that it makes the lack of clarity the explicit goal - it doesn't hide that goal behind silliness, which people tend to wrongly assume is the goal in itself. Also, job titles like "Programmer" at least have the advantage of helping people who are used to thinking of themselves in a role to find the job, where they might never look twice at "Human Cannonball."
So here's what I'm thinking: "Programmer, etc." "Test Automation, etc." "Entrepreneur, etc." "Sales, Marketing, etc." It gives the general idea, and then loosens the meaning with "etc," which makes you think twice.
Would you work for me if you could be a "Genius, etc?"
2007-01-13 »
NiR: NetIntelligence 2.0 and second-system effect
After hiring people at NITI, we didn't really suffer from the second-system effect (with the possible exception of UniConf). That's probably because I got it out of my system in the early days when it was just me and dcoombs.
Even in version 1.0, Weaver had two related features called NetMap and NetIntelligence. NetMap was a passive packet-sniffing program that monitored the network and tracked which IP addresses were where; NetIntelligence (before its name was co-opted to include other features) analyzed the data in NetMap and used it to draw conclusions about the local network layout.
The first versions of Weaver couldn't even act as an ethernet-to-ethernet router; they were designed for dialup, so you routed to them on your ethernet, used them as your default gateway, and the Weaver's default gateway would either be nonexistent, or a PPP connection, or a demand-dial interface that would bring up a PPP connection if you tried to access it. In those days, NetIntelligence's job was easy: it just had to detect the local IP subnet number and netmask, and pick an address for itself on that network. (Weavers were often installed on networks without a DHCP server so they could become the DHCP server; requesting an address from DHCP usually didn't help.)
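As a rough sketch of what that "easy job" looks like: collect the IP addresses seen on the wire, find the longest prefix they all share, and pick an unused address inside it. This is hypothetical code with names of my own invention, not Weaver's actual implementation (a real version would also round the guess to sane netmask boundaries and handle junk data).

```typescript
// Convert dotted-quad notation to a 32-bit unsigned integer and back.
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => ((acc << 8) | parseInt(octet, 10)) >>> 0, 0);
}

function intToIp(n: number): string {
  return [24, 16, 8, 0].map(s => (n >>> s) & 0xff).join(".");
}

// Guess the local subnet: the longest prefix shared by every observed address.
function guessSubnet(seen: string[]): { network: string; maskBits: number } {
  const ints = seen.map(ipToInt);
  for (let maskBits = 32; maskBits > 0; maskBits--) {
    const mask = (0xffffffff << (32 - maskBits)) >>> 0;
    const nets = new Set(ints.map(i => (i & mask) >>> 0));
    if (nets.size === 1) {
      return { network: intToIp((ints[0] & mask) >>> 0), maskBits };
    }
  }
  return { network: "0.0.0.0", maskBits: 0 };
}

// Pick the first host address on the subnet that nobody else is using,
// skipping the network and broadcast addresses.
function pickAddress(seen: string[], network: string, maskBits: number): string {
  const used = new Set(seen.map(ipToInt));
  const base = ipToInt(network);
  for (let host = 1; host < (1 << (32 - maskBits)) - 1; host++) {
    const candidate = (base + host) >>> 0;
    if (!used.has(candidate)) return intToIp(candidate);
  }
  throw new Error("subnet full");
}
```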
In later 1.x versions of Weaver, we added ethernet-to-ethernet routing in order to support cable modems, T1 routers, and so on. We extended NetIntelligence to do three other relatively easy tasks: figure out which network interface was the "Internet" one, figure out which device on that interface was the default gateway, and set up the firewall automatically so that the "Internet" would never be allowed to route to your local network, even for a moment. This code was very successful and worked great; it was the origin of the "trusted vs. untrusted" network concept in Weaver, and it's pretty easy to find out which node should be your default gateway when you know you can't lose. (That is, it's always better to have a default gateway than no default gateway, so even picking the wrong one is okay as long as the user can fix it.)
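The gateway-guessing part can be sketched with the same passive-sniffing idea (again, hypothetical names, not NITI's actual code): a frame whose IP source is outside the local subnet but whose ethernet source is on the local wire must have been forwarded by a router, and the local MAC relaying the most distinct foreign IPs is the best default-gateway candidate.

```typescript
// A sniffed ethernet frame, reduced to the two fields the heuristic needs.
interface Frame {
  srcMac: string; // ethernet source address on the local segment
  srcIp: string;  // IP source address, possibly from a remote network
}

// True if "ip" falls inside network/maskBits.
function isLocal(ip: string, network: string, maskBits: number): boolean {
  const toInt = (s: string) =>
    s.split(".").reduce((a, o) => ((a << 8) | parseInt(o, 10)) >>> 0, 0);
  const mask = (0xffffffff << (32 - maskBits)) >>> 0;
  return ((toInt(ip) & mask) >>> 0) === ((toInt(network) & mask) >>> 0);
}

// Returns the MAC most likely to be the default gateway, or null if no frame
// suggests one. Even a wrong guess beats no guess: the user can fix a bad
// default gateway, but a missing one just breaks everything.
function guessGatewayMac(frames: Frame[], network: string, maskBits: number): string | null {
  const foreignIpsByMac = new Map<string, Set<string>>();
  for (const f of frames) {
    if (isLocal(f.srcIp, network, maskBits)) continue; // local traffic: not router evidence
    let ips = foreignIpsByMac.get(f.srcMac);
    if (!ips) {
      ips = new Set<string>();
      foreignIpsByMac.set(f.srcMac, ips);
    }
    ips.add(f.srcIp);
  }
  let best: string | null = null;
  let bestCount = 0;
  foreignIpsByMac.forEach((ips, mac) => {
    if (ips.size > bestCount) {
      best = mac;
      bestCount = ips.size;
    }
  });
  return best;
}
```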
That was version 1. NetMap/NetIntelligence 2.0 was where things started going wrong. I decided that this concept was so cool that we should extend it one more level: what if we install Weaver on a more complex network, with multiple subnets connected by routers scattered about? What can we do to eliminate "false" data produced by misconfigured nodes? (Trust me, there are always misconfigured nodes.) What if there's more than one Internet connection, and sometimes one of them goes down? Wouldn't it be great if Weaver could find all the subnets automatically, configure the firewall appropriately, and allow any node on any connected subnet to find any other node using Weaver? It seemed like a great timesaver.
Except that it wasn't. First of all, it took a long time to write the code to handle all these special cases, and it never did really work correctly. We had some very angry customers when we put them through our 2.0 beta cycle and Weaver regularly went completely bonkers, auto-misconfiguring its routes and firewall so badly that you couldn't even reach WebConfig anymore. Or sometimes you'd end up with 100 different unrelated routes to individual subnets, because Weaver wasn't sure that 100 routes through the same router really meant that was your default gateway. Those messes were the origin of the "NetScan" front panel command, which made NetIntelligence forget everything it knew and start over. To this day, I consider this a terrible hack. But it's sure better than 2.0beta1, which didn't have a NetScan and had to have a developer (ie. me) come on-site to debug any network problems.
NetIntelligence 2.0 was a perfect example of the second-system effect: we chose to add a lot of cool but not-really-necessary features all at once, we had a non-working product until the whole thing was done, it was an order of magnitude more work than the 1.0 version, and bugs in the new features caused old, 100% reliable features (like the ability to reach WebConfig!) to fail randomly. It was a disaster.
In retrospect, the mistake is easy to see. Not long after, the proliferation of DHCP meant that auto-discovering subnets was much less important. But more importantly, Weaver's network discovery feature was supposed to make Weaver easy to configure on simple networks. Any IT administrator who managed to set up a network with multiple subnets already knows what those subnets are and how he wants to route between them, so auto-discovery isn't worth anything. The existence of a complex network implies the ability to configure a router for it. We sacrificed sanity on the simple networks, where people didn't have that ability, all in the name of a useless feature for the complex networks that didn't need it. Oops.
By the time we were actually hiring developers back in 2000 and 2001, we had already been through all this mess. Nowadays in Weaver (now Nitix) 3.x and 4.x, we've wrangled NetIntelligence under control, and all those broken-but-cool features from 2.0 actually work and do cool stuff. But to this day, once in a while, it still produces a huge, insane list of correct-but-pointless subnet routes that you have to delete by hand.
So yes, I know a thing or two about the second-system effect.
As applied to business
As I continue to lay the groundwork for a new company, it's important to keep this sort of thing in mind. Just because a few "cool" things were missing the first time around, don't lose sight of the basics in round 2.
2007-01-14 »
NiR: Ice Storms and the Slowness of Startups
Ah, ice. Yesterday I awoke to find my car and all the nearby trees covered in it, due to a recent minor bout of freezing rain. Well, it was minor in London, Ontario. Apparently it was a bit more serious in other places.
Thanks, google images!
This event reminds me of the Great Quebec Ice Storm of 1998, which I was lucky enough to participate in from the comfort of my then-home in Ste-Anne-de-Bellevue. That home, conveniently, was on the same electrical circuit as a major hospital nearby. We lost power for maybe, oh, 45 minutes or less. In the middle of the night, while I was sleeping, so I didn't pay very close attention.
Many people lost power for longer. Rumour has it that this resulted in a minor Quebec baby boom nine months later. I wouldn't know anything about human babies, but at the time ppatters and I were safely indoors, entirely with electricity, discussing and building the earliest beginnings of Weaver 1.0 and NetMap, and integrating it all with WvDial.
I don't have too many terribly insightful things to say about those hazy days, except to observe that, hey, dialup really doesn't seem as popular now as it did then, does it?
But dialup, and WvDial, were the things that sold the first bunch of weavers.
Speaking of which, here's a quick timeline to remind me how slowly this all actually started:
- Summer 1997: thought of the general idea.
- Fall 1997 (+3mos): incorporated Worldvisions Computer Technology, Inc. Confusingly similar to a certain massive charity, but that didn't matter, because we're in a different industry. Heh heh.
- Winter 1998 (+6mos): ice storm, working at General DataComm by day, coding on Weaver at night.
- June 1998 (+1yr): first 5 weavers actually sold, but only because I wrote Tunnel Vision 1.0 the weekend before. (Side note: it's highly disturbing that tunnelv is still the second Google hit for "tunnel vision." Stop it already! OpenVPN is way better!)
- Fall 1998 (+1.5yr): dcoombs and I take four months to work full time on Weaver 2.0 and to sell, sell, sell! with reasonably passable results, especially considering neither of us had ever sold anything before.
- Early 1999 (+1.8yr): Licensed Weaver sales rights and tech support to sestabrooks & co. for a share of 50% of gross profits. They apparently sold quite a few of them.
- Summer 1999 (+2.0yr): During a failed attempt to sell out to the now-defunct Netwinder people, licensed NetIntelligence 1.0 to them non-exclusively and, oddly, got hired to write their user manual.
- Fall 1999 (+2.3yr): Met opapic and gmcclement and formed Net Integration Technologies, Inc., the replacement for Worldvisions. Made opapic the CEO, taking over leadership from me, to the benefit of all involved.
- Sometime in 2000 (+2.5-3.0yr): First round of real financing. I forget exactly when. Opened an actual office.
- June 2001 (+4yr): Hired mcote, the first developer other than me and dcoombs.
- January 2002 (+4.5yr): opened the MontrealOffice for the first time.
That's all for now. Off I go to meditate on the idea that we only had two part-time developers (ie. me and dcoombs) until about four years after the project's initial conception. And my new project is only about four months in. Historical expectation based on the above: we should have just chosen a name, incorporated, and now be working on putting together the beginnings of our first product.
Wow. Historical predictions are still scarily accurate. Why did it seem so fast at the time?
2007-01-20 »
Your company might be on the right track when...
...customers start calling you just based on a single very vague press release. That, my friends, is an entirely new experience for me.
Also, today I learned what Operational Risk Management is. As it turns out, our software already fully supports it. It's a big change for me, not being the one that wrote the majority of the software I'm actually helping to commercialize.
2007-01-24 »
While you weren't looking, operating systems became irrelevant
I wrote briefly before about how I now own a Mac laptop and am thus automatically entitled to be a Mac zealot. What I said at the time was true, but also a bit tongue-in-cheek.
What's actually true is that I'm a "use the right tool for the right job" zealot, and Macs happen to be the right tool an increasing amount of the time lately. I think that has almost nothing to do with their software, and a lot to do with their hardware - and the fact that the software was made for it.
Swooshy windows, yet another variant of Firefox and OpenOffice, yet another average-quality Unix clone (or a below-average Linux clone), a rather questionable music player, a highly questionable video player (Quicktime), a buggy X11 server, a total inability to synchronize with my Blackberry, a lame port of CUPS that's supposedly an excuse for printing, and an almost-as-good-but-costs-money clone of VMware... those are really not very good reasons to want to use a Mac.
But I bought one, and it's great. Why? Well, excellent power management tops the list. Then there's the power connector with built-in LED, the volume control that knows whether I have headphones plugged in, the two-finger scrolling touchpad, the slot-based CD player, the high-quality keyboard, the tiny "mylar sleeve" case I have that's custom-designed for its shape, the CPU fan that blows somewhere that's not out the bottom so using it on a bed doesn't cause overheating.
You know, the computer. Not the operating system at all.
You know why I finally bought one of these things? Because it had an Intel processor, which meant I could run Windows (which I need) and Linux (which I like) in a VM at near-native speeds.
But here's the thing. Apple could have never made hardware this good unless there was an operating system that could handle it; most of those hardware features, like power saving, needed some special support from the software. Microsoft isn't likely to provide it, and Linux is useless to normal people, so there was no other choice.
Until recently, that meant using good hardware was this terrible compromise: you can have great hardware, and the operating system fully supports it, but oh yeah, those apps you need? They don't run. But we have these mediocre toy ones that sort of do similar things. Maybe try those?
Not anymore. Now an operating system can do what it was originally meant to do: run your hardware, and get the heck out of the way. There's a separation of the operating system from the desktop environment like the Unix people were trying to do, but at a level that actually works. It only works because CPUs are so fast now, but it works nevertheless: we emulate a whole computer, on top of the computer we built, and it's not really too slow anymore. It turns out that emulating hardware is effectively a fixed cost, not a linear cost, so beyond a certain CPU speed, it's never worth removing anymore.
I have a Linux machine. It suspends and resumes perfectly every time, and its audio volume adjusts automatically when I plug and unplug my headphones. I can do two-finger scrolling on my touchpad without violating Apple's patents.
It's running in a virtual machine on my Mac.
Epilogue
The relationship of this post with the fact that Mono is an excellent .NET clone that runs on Windows, Linux, and MacOS X, and Microsoft obviously knows this and likes it just fine, is left as an exercise to the reader. If operating systems are irrelevant, where do you suppose Microsoft wants to be today?
2007-01-25 »
More on profit per man-hour
I wrote before about optimizing a company for profit per man-hour. I'm interested in this because small teams with high Bigness of Thought can form elegant solutions where larger, less effective teams can't. And all that ties in with a hiring practice I first heard from my friend at Amazon.com, who says they only hire people who are better than their current average employee.
Why does that tie in? Because it's an interesting problem for a startup company: if you, the founder, are the awesomest guy you know, how will you ever hire someone above average? Is the most efficient size of a company exactly one person, the smartest person in the world? And if so, is that even a useful conclusion? Maybe "hire above the average" is only useful for companies the size of Google or Amazon.
Actually, I think the "maximize profit per man-hour" strategy is just an easier way to express the same idea.
If you're the founder of a one-person startup, you have a problem: you have to be good at everything. And while you might be passable at almost everything that matters, you can't be good at it all. So as you're growing, the right thing to do is to hire someone who's awesome at the thing you need most, and not bad at other stuff. By doing that, you spend more hours of your own time doing things you are good at, and fewer hours doing things you aren't. Meanwhile, the things you weren't doing well can be done by someone who is good at them, so they can be done faster and better.
That's all probably obvious to most people, but what's interesting is the mathematical effect. Person #2 might not have been any use in a one-person company; maybe they're completely missing some critical business function, like a personality or the ability to understand financial statements. But when you, who are just sort of good at most stuff, hire that person, your company's productivity goes up - not just in total, but per person!
Thus, if you hire someone who will raise the profit per man-hour, you must be hiring someone who is "above average" even if they're not above average at everything. It must be true; the average productivity increased when you hired them.
So the good news for me here is that you can follow this algorithm even when you add person #2 to your company, and it works all the way up to companies the size of Google. I find that very interesting.
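The averaging effect is easy to check with a toy calculation. All the numbers below are made up purely for illustration: a founder who is strong at coding but weak at bookkeeping, and a hire who is the reverse.

```python
# Toy numbers (entirely invented) illustrating the "hire above average"
# arithmetic: per-hour output can rise even though the new hire is not
# better than the founder at everything.

# Alone, the founder spends 30 hours/week coding (full effectiveness)
# and 10 hours muddling through the books (30% effectiveness).
solo_output = 30 * 1.0 + 10 * 0.3
solo_per_hour = solo_output / 40

# With person #2 (great at books, merely decent overall), the founder
# codes all 40 hours and the hire handles books and admin at 90%.
duo_output = 40 * 1.0 + 40 * 0.9
duo_per_hour = duo_output / 80

print(f"solo: {solo_per_hour:.3f} output/hour")  # 0.825
print(f"duo:  {duo_per_hour:.3f} output/hour")   # 0.950
assert duo_per_hour > solo_per_hour
```

With these (hypothetical) weights, profit per man-hour goes from 0.825 to 0.95 even though person #2's standalone average is below the founder's coding skill: the hire is "above average" in exactly the sense that matters.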
2007-01-27 »
Holy owned subsidiary, Batman!
Uh, sorry, I don't actually have an article to go with that title.
2007-01-28 »
Windows Vista Development Goals
- The top developers working on Vista had to sign a pledge that whatever feature they were working on will not "look like ass." That wasn't Allchin's idea, but he said he agrees with the concept, if not the pledge's exact wording.

-- ZDNet
2007-01-29 »
Optimal Randomness
I like randomness, as long as it comes at the right times and in the right quantities. I originally adopted the username 'apenwarr' based on that idea.
Apparently I'm not the only one to think of it that way. See The Serendipity Curve. She has the interesting idea that in modern society, randomness is getting reduced down to dangerous levels, so that we have to actively try to add randomness back in.
That's probably why weblogs are so popular nowadays: the most popular ones mostly stay on a particular theme (ha, that rules me out) but they still have random digressions, unlike, say, the business section of a newspaper.
2007-01-30 »
Eric Lippert and C# 3.0 in Waterloo
For all of you still in Waterloo...
It appears that a Microsoft developer contingent will be in Waterloo on February 5 doing a presentation or something.
C# 3.0 is the amazing static language with all sorts of dynamic features that never, ever sacrifices type safety to get them. If I were there, I would go see them for sure. Maybe I will anyway!
2007-01-31 »
Making banking fun... secretly
I'm afraid I can't tell you right now how I'm making banking fun, because we're probably going to patent it, and my limited understanding of such things tells me that I shouldn't tell you what it is until we've at least filed for the patent. Or, if we take a page from Apple, I shouldn't tell you what it is until it's totally obvious because we released a product with it. Or, if we take a page from Google, I should never tell you what it is, because we might only use it to make a product, not actually make it into a product itself.
I have, however, made banking fun. I call my new plan to take over the world "The Grand Schema." You will have to use your imagination.
Interesting database reporting
Not speaking of which...
StarQuery for Excel has its problems (notably the screwy non-Firefox-friendly web site), but they have a very interesting concept: the idea of creating a database map to make it easier for end users to generate their own reports.
Basically, the database admin/programmer can use their tool to create "map" diagrams. The map file specifies a particular database connection type (say, an ODBC connection), a list of tables/views that are relevant to this map, a list of table "join" relationships (inner, left, right), a list of "measures" (columns that can reasonably be summed, averaged, etc), and several "dimensions", which are groups of columns you can use to subcategorize table rows (for things like filters, "group by", "order by", etc).
Dimensions are an interesting concept by themselves. Really, it's just a two-level hierarchy of columns, purely for human-helping organizational purposes. For example, I might take the Region, Country, Province, City, and PostalCode columns (perhaps all from different tables) and dump them all into a "Place" dimension, the LastName, FirstName, Sex, and Birthdate columns and dump them into a "Personal Information" dimension, and so on. Some columns might even belong to more than one dimension.
This is where it gets neat. After specifying all this stuff - which is essentially static information about your database - you save it in a map file, which you then import into the report designer. A different person, maybe an admin assistant or Ozzy or whoever, now doesn't have to care about the database schema; they just get the map, a convenient representation of the parts of the database you figured would be relevant to them, all pre-joined and pre-filtered and pre-categorized. They can create a sum of "revenue" (a measure) grouped by "Place" (ie. the location identified by concatenating all the fields from "Place" as a primary key), or break apart "Place" into its component columns if they want; in the latter case, the dimension box really is just an organizer to help humans find things.
The magic here is that the admin can just think about columns and sums, not tables, indexes, joins, data types, and UniqueIDs. And if your application is obscenely huge (ours certainly is), you can even create different "map" files with different overlapping subsets of the database, and use those for different types of reports.
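To make the concept concrete, here's a sketch of what a map could look like as plain data, and how a report tool might turn a measure plus a dimension into SQL. This is not StarQuery's actual file format or API - the table names, column names, and the `report_sql` helper are all hypothetical, just to show the joins/measures/dimensions idea.

```python
# Hypothetical "map" file as plain data: the DBA declares joins,
# measures (aggregatable columns), and dimensions (human-friendly
# column groupings) once; report users never see the raw schema.
sales_map = {
    "joins": [
        "sales INNER JOIN customers ON sales.customer_id = customers.id",
    ],
    "measures": {"revenue": "SUM(sales.amount)"},
    "dimensions": {
        "Place": ["customers.region", "customers.country", "customers.city"],
        "Personal Information": ["customers.last_name", "customers.first_name"],
    },
}

def report_sql(dbmap, measure, dimension):
    """Build a grouped query from a measure name and a dimension name."""
    cols = ", ".join(dbmap["dimensions"][dimension])
    return "SELECT {dims}, {agg} AS {name} FROM {joins} GROUP BY {dims}".format(
        dims=cols,
        agg=dbmap["measures"][measure],
        name=measure,
        joins=" ".join(dbmap["joins"]),
    )

print(report_sql(sales_map, "revenue", "Place"))
```

The point of the sketch is the division of labour: everything schema-shaped lives in the map, so the report user only ever picks a measure and a dimension.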
The results of your query get dumped into Excel, where you can format them, draw charts, and so on. It's all surprisingly elegant, and it makes it easy for non-database-experts to explore your data.
Presumably this is what "data mining" and "cube views" applications are trying to do, but this is the first one whose user interface didn't make my head explode.