2023-10-06

Interesting

A few conversations last week made me realize I use the word “interesting” in an unusual way.

I rely heavily on mental models. Of course, everyone relies on mental models. But I do it intentionally and I push it extra hard.

What I mean by that is, when I’m making predictions about what will happen next, I mostly don’t look around me and make a judgement based on my immediate surroundings. Instead, I look at what I see, try to match it to something inside my mental model, and then let the mental model extrapolate what “should” happen from there.

If this sounds predictably error-prone: yes. It is.

But it’s also powerful when used the right way, which is what I try to do. Here’s my system.

Confirmation bias

First of all, let’s acknowledge the problem with mental models: confirmation bias. Confirmation bias is the tendency of all people, including me and you, to consciously or subconsciously look for evidence to support what we already believe to be true, and try to ignore or reject evidence that disagrees with our beliefs.

This is just something your brain does. If you believe you’re exempt from this, you’re wrong, and dangerously so. Confirmation bias gives you more certainty where certainty is not necessarily warranted, and we all act on that unwarranted certainty sometimes.

On the one hand, we would all collapse from stress and probably die from bear attacks if we didn’t maintain some amount of certainty, even if it’s certainty about wrong things. But on the other hand, certainty about wrong things is pretty inefficient.

There’s a word for the feeling of stress when your brain is working hard to ignore or reject evidence against your beliefs: cognitive dissonance. Certain Internet Dingbats have recently made entire careers talking about how to build and exploit cognitive dissonance, so I’ll try to change the subject quickly, but I’ll say this: cognitive dissonance is bad… if you don’t realize you’re having it.

But your own cognitive dissonance is amazingly useful if you notice the feeling and use it as a tool.

The search for dissonance

Whether you like it or not, your brain is going to be working full time, on automatic pilot, in the background, looking for evidence to support your beliefs. But you know that; at least, you know it now because I just told you. You can be aware of this effect, but you can’t prevent it, which is annoying.

But you can try to compensate for it. What that means is using the part of your brain you have control over — the supposedly rational part — to look for the opposite: things that don’t match what you believe.

To take a slight detour, what’s the relationship between your beliefs and your mental model? For the purposes of this discussion, I’m going to say that mental models are a system for generating beliefs. Beliefs are the output of mental models. And there’s a feedback loop: beliefs are also the things you generalize in order to produce your mental model. (Self-proclaimed “Bayesians” will know what I’m talking about here.)

So let’s put it this way: your mental model, combined with current observations, produces your set of beliefs about the world and about what will happen next.
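
To make that concrete, here’s one turn of the loop as a tiny Bayesian sketch in Python. The hypotheses, the observation, and every number are invented for illustration; only the shape matters: the model weights competing beliefs, an observation reweights them, and the reweighted beliefs become the next model.

    # One turn of the belief loop, Bayes-style. The hypotheses and all
    # the probabilities below are invented for illustration.

    # Prior: how strongly the current mental model holds each belief.
    model = {"deploy is safe": 0.9, "deploy is broken": 0.1}

    # Likelihood: how probable the observation "error rate just spiked"
    # would be under each belief. (Also invented.)
    likelihood = {"deploy is safe": 0.05, "deploy is broken": 0.8}

    def update(model, likelihood):
        """Fold one observation into the model (Bayes' rule)."""
        posterior = {h: model[h] * likelihood[h] for h in model}
        total = sum(posterior.values())
        return {h: p / total for h, p in posterior.items()}  # renormalize

    model = update(model, likelihood)
    print(model)  # roughly {'deploy is safe': 0.36, 'deploy is broken': 0.64}

Notice which observations move the weights the most: the ones the current model said were unlikely. Those are the surprises, which is exactly what I mean by interesting.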

Now, what happens if what you expected to happen next doesn’t happen? Or something happens that was entirely unexpected? Or even, what if someone tells you you’re wrong and they expect something else to happen?

Those situations are some of the most useful ones in the world. They’re what I mean by interesting.

The “aha” moment

    The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” (I found it!) but “That’s funny…”
        — possibly Isaac Asimov

When you encounter evidence that your mental model mismatches someone else’s model, that’s an exciting opportunity to compare and figure out which one of you is wrong (or both). Not everybody is super excited about doing that with you, so you have to be respectful. But the most important people to surround yourself with, at least for mental model purposes, are the ones who will talk it through with you.

Or, if you get really lucky, your predictions turn out to be demonstrably concretely wrong. That’s an even bigger opportunity, because now you get to figure out what part of your mental model is mistaken, and you don’t have to negotiate with a possibly-unwilling partner in order to do it. It’s you against reality. It’s science: you had a hypothesis, you did an experiment, your hypothesis was proven wrong. Neat! Now we’re getting somewhere.

What follows is then the often-tedious process of figuring out what actual thing was wrong with your model, updating the model, generating new outputs that presumably match your current observations, and then generating new hypotheses that you can try out to see if the new model works better more generally.

For physicists, this whole process can sometimes take decades and require building multiple supercolliders. For most of us, it often takes less time than that, so we should count ourselves fortunate even if sometimes we get frustrated.

The reason we update our model, of course, is that most of the time, the update changes a lot more predictions than just the one you’re working with right now. Turning observations back into generalizable mental models allows you to learn things you’ve never been taught; perhaps things nobody has ever learned before. That’s a superpower.

Proceeding under uncertainty

But we still have a problem: that pesky slowness. Observing outcomes, updating models, generating new hypotheses, and repeating the loop, although productive, can be very time consuming. My guess is that’s why we didn’t evolve to do that loop most of the time. Analysis paralysis is no good when a tiger is chasing you and you’re worried your preconceived notion that it wants to eat you may or may not be correct.

Let’s tie this back to business for a moment.

You have evidence that your mental model about your business is not correct. For example, let’s say you have two teams of people, both very smart and well-informed, who believe conflicting things about what you should do next. That’s interesting, because first of all, your mental model is that these two groups of people are very smart and make the right decisions almost all the time, or you wouldn’t have hired them. How can two conflicting things be the right decision? They probably can’t. That means we have a few possibilities:

  1. The first group is right
  2. The second group is right
  3. Both groups are wrong
  4. The appearance of conflict is actually not correct, because you missed something critical

There is also often a fifth possibility:

  • Okay, it’s probably one of the first four but I don’t have time to figure that out right now

In that case, there’s various wisdom out there involving one- vs two-way doors, and oxen pulling in different directions, and so on. But it comes down to this: almost always, it’s better to get everyone aligned to the same direction, even if it’s a somewhat wrong direction, than to have different people going in different directions.

To be honest, I quite dislike it when that’s necessary. But sometimes it is, and you might as well accept it in the short term.

The way I make myself feel better about it is to choose the path that will allow us to learn as much as possible, as quickly as possible, in order to update our mental models as quickly as possible (without doing too much damage) so we have fewer of these situations in the future. In other words, yes, we “bias toward action” — but maybe more of a “bias toward learning.” And even after the action has started, we don’t stop trying to figure out the truth.

Being wrong

Leaving aside many philosophers’ objections to the idea that “the truth” exists, I think we can all agree that being wrong is pretty uncomfortable. Partly that’s cognitive dissonance again, and partly it’s just being embarrassed in front of your peers. But for me, what matters more is the objective operational expense of the bad decisions we make by being wrong.

You know what’s even worse (and more embarrassing, and more expensive) than being wrong? Being wrong for even longer because we ignored the evidence in front of our eyes.

You might have to talk yourself into this point of view. For many of us, admitting wrongness hurts more than continuing wrongness. But if you can pull off that change in perspective, you’ll be able to do things few other people can.

Bonus: Strong opinions held weakly

Like many young naive nerds, when I first heard of the idea of “strong opinions held weakly,” I thought it was a pretty good idea. At least, clearly more productive than weak opinions held weakly (which are fine if you want to keep your job), or weak opinions held strongly (which usually keep you out of the spotlight).

The real competitor to strong opinions held weakly is, of course, strong opinions held strongly. We’ve all met those people. They are supremely confident and inspiring, until they inspire everyone to jump off a cliff with them.

Strong opinions held weakly, on the other hand, is really an invitation to debate. If you disagree with me, why not try to convince me otherwise? Let the best idea win.

After some decades of experience with this approach, however, I eventually learned that the problem with this framing is the word “debate.” Everyone has a mental model, but not everyone wants to debate it. And if you’re really good at debating — the thing they teach you to be in debate club or whatever — then you learn how to “win” debates without uncovering actual truth.

Some days it feels like most of the Internet today is people “debating” their weakly held strong beliefs and pulling out every rhetorical trick they can find, in order to “win” some kind of low-stakes war of opinion where there was no right answer in the first place.

Anyway, I don’t recommend it; it’s kind of a waste of time. The people who want to hang out with you at the debate club are the people who already, secretly, have the same mental models as you in all the ways that matter.

What’s really useful, and way harder, is to find the people who are not interested in debating you at all, and figure out why.
