2019-01-14

Factors in authentication

Multi-factor authentication remains hard to use, hard to secure, and error-prone. I've been studying authentication lately to see if it might be possible to adapt some security practices, especially phishing prevention, from big companies to small companies and consumers.

Here's what I have so far.

(Sorry: this is another long one. Security is complicated!)

The three factors aren't independent

The traditional definition of multi-factor authentication (although I couldn't find a reference to where it originated) is that you should have two or more of:

  1. Something you know (eg. a password)
  2. Something you have (eg. a card)
  3. Something you are (eg. a fingerprint or retinal scan)

This definition is actually not bad, and has done its job for years. But it tends to lead people into some design dead ends that, years later, we now know were mistakes.

First of all, let's talk about #3: something you are. The obvious implementation of this is to scan your fingerprint or retina at a door or computer screen. The computer looks it up in a central database somewhere and allows or denies entry. Right?

Well, it turns out, not so much. First of all, there are huge privacy implications to having your biometric data in a central server. I have a Nexus card, from a program created jointly by the Canadian and U.S. governments to allow theoretically slightly faster border crossings between the two countries. In reality, it is almost certainly a ploy to collect more biometric data. For some reason, Canada insisted on collecting my retinal scans, and the U.S. insisted on collecting my fingerprints, and it seems a bit unlikely to me that the information from one of those will never leak to the other. So now two of my biometrics are in two different databases, someday to be (if not already) merged into one. Whether or not this is legal is probably of only academic interest to the spy agencies doing it.

Now that's for government use, which is bad enough. But I don't need every bank, employer, or apartment building to have my biometrics, for all sorts of reasons. It's a privacy invasion. It can be used to track me well beyond the intended use case. It can be copied, by a sufficiently skilled attacker, and used to masquerade as me. And when that happens, it's not something I can replace like I can replace a stolen password. The dark joke among security engineers is that at least I have ten fingers, so I get ten tries at keeping them safe.

Yeah, yeah, this is old news, right? But actually, biometrics have an even bigger problem that people don't talk about enough: false positives. If you have a giant database of fingerprints or retinas or DNA samples or voiceprints or faces, and you have an algorithm for searching the database, and you have one sample you want to look up in the database to find the right match... it turns out to be a hard problem. ML/AI people hide this problem from you all the time. There's a fun toy that tells you which celebrity you look most similar to, and it's eerily accurate, right? Sure. But the toy works because of false positives. It's relatively easy to find matches that look similar to you. It's tough to figure out which of those matches, if any, really are you. Which is fine for a toy, but bad for authentication.
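
To put rough numbers on it (the numbers here are made up purely for illustration), the expected number of false matches scales linearly with the size of the database:

    # Expected false matches when searching one sample against a database
    # of n entries with a per-comparison false positive rate of p:
    n = 10_000_000    # hypothetical national-scale database
    p = 1e-6          # hypothetical "one in a million" matcher
    print(n * p)      # => 10.0 wrong candidates, and no way to pick the real one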

(Apple Photos and Facebook's face tagging are both susceptible to the same thing. It's pretty easy to see if a given photo matches someone already on your friend list, but very hard, maybe impossible, to match accurately against a single worldwide global database. That's why it only ever suggests tagging a face as one of your friends, even if it's wrong. This whole thing also creates serious obstacles to those dystopian "we'll assign a social score to every citizen and track them all everywhere through public security cameras" plans.)

The fix for false positives is surprisingly easy, and lets you fix a bunch of the other problems with biometrics at the same time: combine it with factor #2, ie. something you have, like a card or a phone.

Instead of storing your fingerprints or retinal scan or facial structure in a big central database, just put it on a card or a phone, in a secure element. Then, when you want to authenticate, the scanner makes a connection to your device, sends the current biometric scan, and the card compares it to its single target (you), and chooses whether or not to unlock itself in response. If it does, it tells the scanner (usually with a cryptographic signature) that all's well.

This is much better: your search algorithm can be much worse. Even a 1 in 1000 false positive rate will be undetectable by random end users, even if it would be utterly hopeless in a million-person database. You don't need to put your biometrics in a central database, which is a big improvement when that database inevitably gets stolen. And best of all, a skilled attacker can't just clone your biometrics, because they won't have the key from your device. No more showing up at the secret facility with some synthetic rubber fingers and waltzing right in. That other factor (something you have... and will notice when it's stolen) goes a long way.
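
Here's a minimal sketch of that match-on-device flow. It assumes the third-party Python "cryptography" package for signatures; the similarity() matcher and its threshold are stand-ins for a real biometric algorithm:

    # Match-on-device: the template and private key never leave the card.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    def similarity(a: bytes, b: bytes) -> float:
        """Stand-in for a real fuzzy biometric matcher; returns 0.0..1.0."""
        return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

    class SecureElementCard:
        def __init__(self, enrolled_template: bytes):
            self._template = enrolled_template         # stays on the card
            self._key = Ed25519PrivateKey.generate()   # stays on the card
            self.public_key = self._key.public_key()   # shared at enrollment

        def unlock(self, live_scan: bytes, challenge: bytes) -> bytes | None:
            # Comparing against a single target, a mediocre matcher is fine;
            # there's no million-entry database to drown in false positives.
            if similarity(live_scan, self._template) < 0.95:
                return None                            # stay locked
            return self._key.sign(challenge)           # "all's well" proof

    def scanner_accepts(card, live_scan: bytes, challenge: bytes) -> bool:
        sig = card.unlock(live_scan, challenge)
        if sig is None:
            return False
        try:
            card.public_key.verify(sig, challenge)     # key from enrollment
            return True
        except InvalidSignature:
            return False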

But what exactly is that second factor?

"Something you have" is not quite right

So much for biometrics, factor #3. But what about #2? "Something you have" is pretty straightforward, right? A card, a phone, an OTP fob, a U2F key, whatever. If you have it, that's an authentication factor.

Well, almost. We're going to need to be a little more specific in order to explain exactly why things work the way they do, and why some attacks are possible when you might think they're impossible.

First, let's be clear. "Something you have" is always an encryption key, a long sequence of bits, a large number. It's not a screwdriver, or an 1800s-style physical iron key (physical analogies for two-factor authentication all eventually turn out badly), or an ear infection. It's an encryption key. Let's call it that.

Secondly, it's not just any encryption key. Anybody can generate an encryption key if they have a big enough random number generator. Your encryption key is special. It's a key that you have previously agreed upon ("enrolled") with an authentication provider.

So let's change the definition. Factor #2 isn't something you have; it's just a particular very large number. Maybe you keep it on a device. Maybe you memorize it (if you can memorize 256 digits of pi, I guess you can memorize a 2048-bit RSA key, but I don't envy your key rotation strategy), and then it's something you "know." Maybe you get a QR code tattoo, and then it's something you "are." But it's a big number, and it's valuable only because the other end already knows, by prior agreement, that they will trust someone who has that number.

(Side note: instead of saving each enrolled key in a database, the authentication provider might sign your (public) key using a Certificate Authority (CA). The net result is the same: in order to get a signature from the CA, you had to enroll with it at some point in the past. The difference only affects the format of the backend authentication database on the server, which is opaque to us.)

From now on, factor #2 is "your previously-enrolled private key."
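
In code, the whole scheme is small. A minimal sketch (again leaning on Ed25519 from the Python "cryptography" package; the provider's "database" is just a dict here):

    import secrets
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # --- enrollment: happens once, by prior agreement ---
    device_key = Ed25519PrivateKey.generate()      # the "very large number"
    enrolled = {"alice": device_key.public_key()}  # all the provider keeps

    # --- authentication: happens at every login ---
    challenge = secrets.token_bytes(32)            # provider picks a fresh nonce
    signature = device_key.sign(challenge)         # device proves it has the key
    try:
        enrolled["alice"].verify(signature, challenge)
        print("trusted: same key alice enrolled earlier")
    except InvalidSignature:
        print("rejected: wrong key, or never enrolled")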

All factors are not created equal

An incorrect conclusion from the "something you know, something you have, something you are" definition is that you can pick any two of these and have a system that's about as secure as any other combination.

Nope.

In fact, factor #2 - the previously-enrolled private key - is by far the safest of the three. This is why banks give you bank cards but are willing to "authenticate" those cards with (in the U.S.) a useless pen-and-paper signature, or (elsewhere) an easily-stolen-or-guessed 4-digit PIN.

Your factor #2 key can be easily replaced (and re-enrolled). You can have as many as you want, corresponding to as many pseudonymous identities as you want. It's a truly random stream of bits, not a weak one you composed because it would be easy to remember. It's easily and perfectly distinguished from everyone else's key. It can't be used to track you between use cases (unless your device is designed to leak it intentionally, sigh). With software configured correctly, it can't be phished. It can be stolen - but (if the hardware is designed right) your attacker has to steal it in the physical world, which eliminates Internet-based attackers, which is most of them. And if they steal your physical key card, you'll probably notice pretty fast.

Even ancient magstripe credit cards share most of these advantages. Their main weakness is that a physical "attacker" can clone rather than steal your card, which is much harder to detect. That's the only real advantage of "chip" cards: they can't be trivially copied. (The switch from signatures to PINs is mostly a distraction and adds little.) We make fun of old-fashioned banks for "still" accepting magstripe cards and being trapped in the past, but those cards are vastly more secure than almost anything else your non-bank service providers use.

(Let's skip over Internet shopping, in which you just type in the not-so-secret magic number from the back of the credit card and combine it with your not-so-secret postal code. Oh well. Sometimes convenience trumps security, even for a bank. But they still keep issuing those physical cards.)

Your phone is probably doing this right

By the way, this whole post is just repeating what experts already know. Your phone, especially if it's an iPhone, already benefits from people who know all this stuff.

Your iPhone contains a previously-enrolled private key, in a secure element, that can uniquely identify it to Apple. You can generate more of them, like the key used to encrypt your local storage. Actions on the key can be restricted to require a fingerprint first, and even the fingerprint reader uses its own previously-enrolled private key to talk to the secure enclave, preventing attackers from tearing apart your phone and feeding it digital copies of your fingerprint, bypassing the sensor. Plus, after a power cycle, the iPhone secure element refuses to unlock at all without first seeing your PIN, which an attacker can't lift from your hotel room or jail cell like they can lift your fingerprints. And it only lets an attacker try a few guesses at the PIN before it locks itself permanently.

FaceID is similar. It doesn't really matter if the fingerprint or FaceID algorithms have a fairly high false positive rate, because of all the other protections, especially that previously-enrolled private key. One way or another, an attacker either needs your phone, or needs to hack the software on your phone and wait for you to scan a biometric.

Awesome! Let's use our phone to authenticate everything!

Yeah, hold on, we're getting there.

That pesky "previous" enrollment

So here's the catch. The whole multi-factor authentication thing is almost completely solved at this point. Virtually everybody has a phone already (anyway, more people have phones than computers), and any phone can store a secret key - it's just a number, after all - even if it doesn't have secure element hardware. (The secure element helps against certain kinds of malware attacks, but factor #2 authentication is still a huge benefit even with no secure element.)

The secret key on your phone can be protected with a PIN, or biometric, or both, so even if someone steals your phone, they can't immediately pretend to be you.
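
A minimal sketch of that protection, assuming the Python "cryptography" package: derive a wrapping key from the PIN with a deliberately slow KDF, and encrypt the secret key with it. (A real phone does this in hardware, with a retry counter; this software-only version still stops casual thieves.)

    import secrets
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def _kdf(pin: str, salt: bytes) -> bytes:
        # scrypt is slow on purpose, to make PIN guessing expensive
        return Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(pin.encode())

    def wrap_key(secret_key: bytes, pin: str, salt: bytes) -> bytes:
        nonce = secrets.token_bytes(12)
        return nonce + AESGCM(_kdf(pin, salt)).encrypt(nonce, secret_key, None)

    def unwrap_key(blob: bytes, pin: str, salt: bytes) -> bytes:
        # raises InvalidTag on a wrong PIN: no PIN, no key, no impersonation
        return AESGCM(_kdf(pin, salt)).decrypt(blob[:12], blob[12:], None)

    salt = secrets.token_bytes(16)
    blob = wrap_key(secrets.token_bytes(32), pin="4071", salt=salt)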

And, assuming your phone was not a victim of a supply chain attack, you have a safe and reliable way to tell your phone not to authorize anybody unless they have your PIN or biometric: you just need to be the person who initially configures the phone. Nice! Passwords are obsolete! Your phone is all three authentication factors in one!

All true!

But... how does a random Internet service know your phone's key is the key that identifies you? Who are you, anyway?

The thing about a previously-enrolled private key is you have to... previously... enroll it... of course. Which is a really effective way of triggering Inception memes. Just log into the web site, and tell it to trust... oh, rats.

Authentication is easy. Enrollment is hard.

Which leads us to a surprising conclusion: security research has actually come really far. We now have good technology to prevent phishing while decreasing the chance of forgotten passwords (PINs are easier than passwords!) and avoiding all the privacy problems with biometrics. Cool! Why isn't it deployed everywhere?

Because enrollment remains unsolved.

Here are some ways we do enrollment today:

  1. Pick a password at account creation time and hope you weren't being phished at that time. (Phishable. Incidentally, "password manager" apps are a variant of this.)

  2. Get a security token (OTP, U2F, whatever) issued by your employer in person, after some kind of identity check. (Very secure, but tedious, which is why only corporations bother with it.)

  3. Mail a token to your physical mailbox and hope nobody else gets there first. Banks do this with your credit card. (Subject to (eventually detectable) mail theft by criminals and roommates, but very effective, albeit slow.)

  4. Log in once with a password, then associate a security token for subsequent logins. (Password logins are phishable.)

  5. Your physical device (eg. iPhone) arrives with a key that the vendor (eg. Apple) already trusts, so at least it knows it's talking to a real iPhone. (Great, but it doesn't prove who you are.)

  6. Send a confirmation message to your email address or phone number, thus ensuring you own it. (Inception again: this just verifies that you can log in somewhere else, with an unknown level of security. Email addresses and phone numbers are stolen all the time. See the sketch after this list.)

  7. Ask for personal information, eg. "security questions", your government SIN, your physical address, your credit card number. (These answers aren't as secret as you'd like to think.)
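
Here's what method #6 typically looks like under the hood, as a minimal sketch (Python standard library only; SERVER_SECRET and the token layout are made up for illustration):

    import hashlib, hmac, secrets, time

    SERVER_SECRET = secrets.token_bytes(32)   # hypothetical server-side secret

    def make_token(email: str, ttl_seconds: int = 3600) -> str:
        expires = str(int(time.time()) + ttl_seconds)
        sig = hmac.new(SERVER_SECRET, f"{email}|{expires}".encode(),
                       hashlib.sha256).hexdigest()
        return f"{email}|{expires}|{sig}"      # mailed to the address

    def verify_token(token: str) -> bool:
        email, expires, sig = token.rsplit("|", 2)
        good = hmac.new(SERVER_SECRET, f"{email}|{expires}".encode(),
                        hashlib.sha256).hexdigest()
        # constant-time compare, and refuse expired links
        return hmac.compare_digest(sig, good) and time.time() < int(expires)

    # A valid click proves only that the clicker can read that mailbox -
    # with whatever level of security *that* account happens to have.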

All the above methods have different sorts of flaws or limitations.

That said, we should draw attention to some special kinds of "enrollment" that are actually nearly perfect:

  1. Setting a PIN on my iPhone, assuming the iPhone's supply chain was secure, is reliable and easy because of a physical link between it and me.

  2. Setting fingerprint or face unlock on my iPhone is reliable and easy, again because of a physical (sensor) link between it and me.

  3. My corporate-issued U2F token can issue signatures to my web browser because they are physically linked, and it can assume I plugged it into a given computer on purpose.

  4. Bluetooth devices make me do an explicit enrollment ("pairing") phase on both devices, which prevents attacks other than by people who are physically nearby at the time of initial pairing. (When available, a pairing PIN adds a small amount of security on top, but is usually more work than it's worth unless the NSA is parked outside.)

  5. Registering a new iCloud account on my iPhone can be safe because the iPhone's built-in certificate can be used to mutually validate between Apple's auth server and the phone. (This is only safe if you create your account from a suitably secure iPhone in the first place. Securing a non-secured account later is subject to the usual enrollment problems.)

  6. Registering additional Apple devices to your existing iCloud account, where Apple pops up a confirmation on your already-registered Apple devices and then makes you type an OTP into the new device. (The OTP in this case might seem extraneous, but it ensures that someone can't just trigger a request from their own device at the same moment as you trigger a request from yours, and have them both go through because you click confirm twice. It's also cool that they show the GPS location of the requester in the confirmation dialog, although that seems forgeable.)

...okay, I didn't expect this article to turn into an iPhone ad, but you know what? They've really thought through this consumer authentication stuff. (To their credit, although Google is behind on consumer authentication, their work on U2F for corporate authentication is really great. It relies on the tedious-but-highly-effective human enrollment process.)

It seems to me that the above successful enrollment patterns all use one or more of the following techniques:

  • A human authenticates you and issues you a token (usually in person).

  • A short-distance, physical link (proximity-based authentication) like a biometric sensor, or a USB or Bluetooth connection.

  • Delegation to an existing authenticator (like a popular web service, eg. via oauth2, or by charging a credit card) to confirm that it's you. Once you have even a single chicken, you no longer have a chicken-or-egg problem. After delegating once, you can enroll a new token and avoid future delegations to that service if you want. (That might or might not be desirable.)
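
The delegation pattern in particular bootstraps nicely into key enrollment. A minimal sketch (oauth2_verify() is a stand-in for a real OAuth2 flow; the key handling again assumes the Python "cryptography" package):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    enrolled_keys: dict[str, list] = {}       # user -> trusted public keys

    def oauth2_verify(token: str) -> str | None:
        """Stand-in: ask the existing provider whose token this is."""
        return "alice" if token == "demo-token-from-provider" else None

    def login_once_and_enroll(oauth_token: str, new_public_key) -> bool:
        user = oauth2_verify(oauth_token)     # the one-time chicken
        if user is None:
            return False
        enrolled_keys.setdefault(user, []).append(new_public_key)
        return True   # future logins: challenge/response against this key,
                      # no further delegation to the other service needed

    device = Ed25519PrivateKey.generate()
    login_once_and_enroll("demo-token-from-provider", device.public_key())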

That's all I've got. In this article, I'm just writing about what other people do. I don't have any magic solutions to enrollment. So let's leave enrollment for now as an only-partially-solved, rather awkward problem.

(By the way, the reason gpg is so complicated and unusable is that the "web of trust" is all enrollment, all the time.)

But now that we understand about enrollment, let's talk about...

Why is U2F so effective?

Other than enrollment, using U2F (Universal Second Factor) is pretty simple. You get a physical key, plug it into a USB port (or nowadays sometimes a Bluetooth link), and you press it when prompted.

When you press it, the device signs a request from any requesting web site, via the web browser, thus proving that you are you. (Whoever "you" are was determined during enrollment.)

The non-obvious best feature is that the signature only applies to a particular web domain that made the request, and the web browser enforces that the domain name is "real" by using HTTPS certificates. (Inception again! How do we know we can trust the web site? Because their HTTPS cert was signed by a global CA. How do we know which CAs to trust? A list of them came with our browser or our OS. How can we trust the browser or OS? Uh... supply chain?)

Anyway, put it all together, and this domain validation is like magic; it completely defeats phishing.

A good mental model for phishing is to imagine some person sitting there with their own web browser, but also operating a web site that proxies your requests straight through to the "real" web site, while capturing the traffic. So you accidentally visit gmail.example.com, which proxies to gmail.com. Somehow you are fooled into thinking you need to log in as if it's gmail. You type your username and password, which are logged by the proxy and sent along to gmail.com. Our hacker friend reads the username and password from their trace, and logs into gmail.com on their web browser. You've been phished!

Now let's add U2F. After you type your password, the site asks you to tap your U2F. You tap it, and generate a valid signature directed to... gmail.example.com, the site your browser confirmed as having made the request. They proxy that valid signature along to gmail.com, but it's useless: gmail.com doesn't trust signatures directed at gmail.example.com. The phishing has been blocked completely.
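
A minimal sketch of why that works (same Python "cryptography" package assumption as above): the signature covers the origin the browser saw, so a signature minted for gmail.example.com verifies as garbage at gmail.com.

    import secrets
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    token_key = Ed25519PrivateKey.generate()    # inside the U2F device
    gmail_knows = token_key.public_key()        # enrolled with gmail.com earlier

    def u2f_sign(origin_seen_by_browser: str, challenge: bytes) -> bytes:
        # The *browser* supplies the origin, verified via HTTPS; the user,
        # who can be fooled by lookalike domains, never gets a say.
        return token_key.sign(origin_seen_by_browser.encode() + challenge)

    def gmail_verify(signature: bytes, challenge: bytes) -> bool:
        try:   # gmail.com only accepts signatures over its own origin
            gmail_knows.verify(signature, b"https://gmail.com" + challenge)
            return True
        except InvalidSignature:
            return False

    chal = secrets.token_bytes(32)
    print(gmail_verify(u2f_sign("https://gmail.com", chal), chal))          # True
    print(gmail_verify(u2f_sign("https://gmail.example.com", chal), chal))  # False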

(By the way, OTP - any device that generates a number that you have to type in - doesn't prevent phishing at all. At least, not if the attacker can do man-in-the-middle attacks, which they probably can. In our example, you'd happily type the OTP into gmail.example.com, for the same reason you typed your password, and it would proxy it to gmail.com, and bam, you lose.)

(I mentioned Apple's authenticator above, which requires you to type an OTP. It doesn't have this weakness, because they are also authenticating using a U2F-like mechanism behind the scenes. Forcing you to also type the OTP is an extra layer of security, part of a very subtle enrollment process that defends against even rarer types of attack.)

Note also that you can unplug your U2F token and plug it into any other computer and web browser instance. Without any special pairing process, it can trust the new web browser and the web browser can trust the U2F. Why? Because of that physical link again. Somebody plugged this U2F key into this browser. That means they intended for them to trust each other. It's axiomatic; that's the design.

For this reason, pure U2F doesn't work well without a second factor, such as a password. Without one, if someone steals your U2F key, they can impersonate you just by plugging it into their own computer.

These two factors are so effective because of the threat model. Basically, there are physical attackers (eg. purse snatchers) and there are Internet attackers (eg. spammers). Physical attackers can steal your U2F device, but they also need to see you type your password, and the U2F is traceable when used, and the attack is unscalable (you need a physical human to be near you to steal it). On the other hand, Internet attackers can easily phish your password, but can't steal your U2F.

Skilled, well-funded attackers with physical presence can do both. Sorry. But most of us aren't important enough to worry about that.

Can I use my phone as a U2F?

Learning about the above led me to ask a question many people have asked: why do I need the stupid physical security token, which is only plugged into one computer at a time and which I'm guaranteed to misplace? Can't my phone be the security token?

This is a good question. Aside from enrollment being generally hard, the stupid, expensive physical security token is, almost certainly, the thing preventing widespread adoption of U2F. I doubt it'll ever catch on, other than for employees of corporations (which pay you for the inconvenience and have good ways to enroll a new token when you inevitably lose yours).

It ought to be easy, right? A U2F is really just a secure element containing a previously-enrolled private key. As established earlier, you can put a private key anywhere you want. Heck, your iPhone has one already, protected by a PIN or fingerprint or face. We're almost there!

Almost. The thing is, when I'm browsing on my computer with my phone in my pocket, it's not my phone that needs to authenticate with the web site: it's my computer. So my phone and computer are going to have to talk to each other, just like a USB U2F and a computer talk to each other.

Both my web browser and my phone are connected to the Internet, so that's the most obvious way for them to talk. (If my computer weren't on the Internet, it presumably wouldn't need to authenticate with a web site. And my phone probably has a cellular data plan, so situations where it's on the Internet are "mostly" a superset of situations where my computer is.)

So all I need to do is generate an encrypted push request from my browser, through the Internet, to my phone, saying "please sign an authorization for gmail.com." Then I tap my phone to approve, and it sends the answer back, and the web browser proceeds, exactly as it would with a USB-connected U2F token. Right?

Almost. There's just one catch. How does my phone know the request is coming from a trusted web browser, and not the browser that the phishing attacker, up above, was operating, or some other random browser on the Internet? How can my phone trust my browser?

Inception.

You have to... previously enroll... the browser's secret key... with your phone's U2F app.

Okay, okay, we can do this. Recall that there are three ways to do a safe enrollment: 1) a physical person gives you a key; 2) a direct physical link; 3) delegation from an existing authenticator. Traditional USB U2F uses method #2. When you physically plug a U2F into a new web browser, it starts trusting that web browser. Easy and safe.

But your computer and your phone aren't linked (if they are, then problem solved, but they usually aren't). Enrollment between your browser and your phone can be done in various ways: you could make a Bluetooth link, or generate a key on one and type it into the other, or scan a QR code from one to the other. Let's assume you picked some way to do this, and your browser is enrolled with your phone.
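
For concreteness, here's a minimal sketch of the QR code variant (Python "cryptography" package again; the "QR code" here is just the byte string you'd encode into one):

    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # On the computer: the browser mints a long-term key and displays the
    # public half on screen as a QR code.
    browser_key = Ed25519PrivateKey.generate()
    qr_payload = browser_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    # On the phone: the camera is the short-distance physical link. Only
    # someone standing at this screen could have shown us this QR code.
    trusted_browsers: set[str] = set()
    trusted_browsers.add(hashlib.sha256(qr_payload).hexdigest())

    # From now on, the phone's U2F app honors signing requests only from
    # keys whose fingerprints are in trusted_browsers.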

Now what? Okay, now when I want to log in, I must know my password and must physically possess my computer (containing the enrolled browser), and my phone with its U2F app. I type my password, which triggers a U2F request from the web site to the browser, which signs and pushes a signing request to my phone, which pops up an acknowledgement, which I tap, and then it signs and sends an answer back to my browser, which sends it back to the web site, and success!

Except... something's fishy there. That was more complicated than it needs to be. Notice that:

  • The browser is trusted by the phone U2F app because of a previous enrollment.

  • The phone U2F app is trusted by the web site because of a (different) previous enrollment.

  • My screen tap (or fingerprint scan) is trusted by the U2F app because of my physical presence (or another previous enrollment).

Why do I even need my phone in this sequence? I've already enrolled my browser; it's trusted. During enrollment, we could have given it a key that's just as useful as a U2F device, and we could have done that precisely as safely as enrolling a new U2F device. (It's nice if my computer has a secure element in which to store the key, but as established above, that only matters for security if my computer gets hit by malware.)

We could just do this instead: I type my password. Web site generates U2F request. Web browser signs the request using its previously-enrolled private key, enrolled earlier using my phone. Login completes, and phishing is prevented! Plus we don't rely on my phone or its flaky Internet connection.

Great, right?

(Note that while this does indeed prevent phishing - the biggest threat for most organizations - it doesn't prevent your roommate, spouse, or 6-year-old child from watching you type your password, then sometime later unlocking your computer and using its already-enrolled web browser to pretend to be you. By leaving your 30-pound desktop computer just lying around, you've let your second factor get stolen. In that way, phones are better second factors than computers, because we're so addicted to them that we rarely let them out of our sight. That doesn't stop your children from borrowing your fingerprints while you nap, however.)

There is, of course, still a catch. If you do this, you have to enroll every browser you might use, separately, using your phone, with one of these tedious methods (Bluetooth, typing a key, or QR code scan). That's kind of a hassle. With a USB U2F key, you can just carry it around and plug it into any computer. Even better, when you stop trusting that computer, you just unplug the token, and it cancels the enrollment. With a physical device, it's super easy to understand exactly what you've enrolled and what you haven't. (With a software key, the server needs a good UI to let you un-enroll computers you don't need anymore, and users will surely screw that up.)

What people tend to miss, though, is that this enrollment is necessary whether or not you send a push notification to the phone during login. The push notification is only secure if this specific browser instance is enrolled; but if this browser instance is enrolled, and my computer is not easier to steal than my phone, then the push notification adds no extra security. The enrollment was the security.

And that's where I'm stuck. I've more or less convinced myself that phone-based OTP (prone to phishing) or phone-push-based U2F (not useful after initial enrollment) add no interesting security but do make things harder for end users. I guess they call that "security theatre." Meanwhile, physical U2F tokens are unlikely to become popular with consumers because they're inconvenient.

At least that conclusion is consistent with the state of the world as it exists today, and what the market leaders are currently doing. Which means maybe I've at least explained it correctly.

(Special thanks to Tavis Ormandy, Reilly Grant, and Perry Lorier for answering some of my questions about this on Twitter. But any mistakes in this article are my fault, not theirs.)
