MFA Decision Making Part 1

In past articles we have covered a number of different second-factor options: how they work, and the advantages and disadvantages of each. If you recall from the first article, the series started because someone asked “where do I go to learn about multi-factor authentication?” Hopefully, by now we can agree that we have a base level of knowledge about the topic, enabling us to switch to the decision-making process. If you are doing this for yourself, you will at some point have to decide which factors you want to use for which service. If you are considering implementing or changing your multi-factor authentication (MFA) strategy for a business, you need to decide which factors to implement for internal users. Or, perhaps you are responsible for a customer-facing service and need to decide what to support for your customers.

Reasoning about MFA

The first thing we need is a decision-making framework: a way to reason about MFA options and which ones make sense. There are several options. One of the most prominent is the National Institute of Standards and Technology (NIST) Special Publication 800-63-3, which defines assurance levels. It covers the Identity Assurance Level (IAL), used during the identity-proofing process, as well as the Authenticator Assurance Level (AAL), used during the authentication process. It also defines a Federation Assurance Level (FAL), but we’ll skip that for now. IAL is the level of assurance provided at the time you prove your identity and become enrolled in the system. We will leave that for another time as well. AAL, however, is useful here. NIST defines three AALs, aptly named 1, 2, and 3, where 1 is the lowest and 3 the highest.

Selecting Assurance Levels

For AAL1, NIST requires either single-factor or multi-factor authentication. AAL1 therefore provides relatively low confidence that the individual authenticating is the actual account holder. At AAL2, a higher level of confidence is expected: two distinct factors must be presented, and approved cryptographic techniques must be used. At AAL3, two factors must also be presented, one of which must be proof of possession of a hardware authenticator. In addition, one authenticator, which may be the hardware one, must provide verifier impersonation resistance. In other words, mere possession (think User Presence in FIDO authenticators) is acceptable if user verification is provided by other means, such as a password. As a result, the level of confidence that the entity authenticating is, in fact, the account holder is very high.

This is a well-thought-out framework, and we will leverage it. While I disagree with some aspects of the guidance, that is largely because it is targeted at government agencies, not general users. To allow greater flexibility I will simply use Low, Medium, and High assurance levels, which correspond roughly, but not exactly, to NIST’s levels.

NIST also provides guidance on how to determine the assurance level you need. I will talk more about that in a future article. For now we can rely on an intuitive decision based on the damage that would be incurred if data were lost, stolen, or modified. If the damage is negligible, you have a low assurance requirement. If the damage is catastrophic, a business-ending event, and/or extremely difficult to recover from, you need high assurance. Most things, and most users and customers, operate in the middle.
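
For illustration, here is a minimal sketch of that intuition in TypeScript. The impact categories and the mapping are my own simplification for this article, not NIST’s formal risk-assessment process.

```typescript
// A rough sketch of the intuition above: the worse the damage from lost,
// stolen, or modified data, the higher the assurance level you need.
// The impact categories below are illustrative assumptions, not NIST terminology.
type Impact = "negligible" | "moderate" | "catastrophic";
type Assurance = "low" | "medium" | "high";

function requiredAssurance(impact: Impact): Assurance {
  switch (impact) {
    case "negligible":
      return "low";
    case "catastrophic":
      return "high";
    default:
      return "medium"; // most things, and most users, land here
  }
}
```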

Some Modifications

As with any complex problem, there are many ways to think through it. As I mentioned, NIST’s guidance is meant for the federal government. We are talking about a broader set of use cases, however. This means we need a bit more color in our decision-making matrix. For that reason, I am learning from NIST but making a few modifications.

First, I believe there is a significant difference between bootstrapping a system and re-authenticating to that system. For instance, when you get a new mobile phone you have to authenticate to the mail client on the phone - bootstrapping it - but after that it keeps working until you change your password or reset the phone. However, each time that mail client retrieves mail it re-authenticates to the mail server. It does so using a password stored on the phone and protected by whatever credentials you use to unlock the phone. Without having consciously decided so, we are implicitly saying we want a full authentication sequence when bootstrapping, but when re-authenticating we accept letting whatever credentials unlock the phone also unlock stored secrets.

In this article we will talk about bootstrapping, or unknown-device, scenarios. Here the Identity Provider recognizes neither the user nor the device they are coming from as one it has seen before. It could be anyone on the Internet, so an additional level of scrutiny is required. Next time we’ll talk about re-authenticating a user or authenticating from a known device.

The second change from the NIST guidance is to consider different risk levels depending on the target usage scenario. For personal accounts I think a little differently about risk than I do when trying to protect an enterprise. The risk tolerance is also different if I am building a public service for health care institutions versus one for consumers to buy groceries online. To some extent the assurance levels can capture these differences, but three levels are not quite enough. Individuals - consumers - have some things they want to keep very secure, such as their medical records; some things they don’t much care about at all, such as the account they had to set up to order take-out Indian food; and a whole lot of stuff that falls somewhere in between. An enterprise has different levels too, such as the intranet homepage, the system that builds software distributed to a couple of billion customers, and a whole lot of stuff in between. While there is overlap between the assurance requirements I would like as an individual and the ones I need as an enterprise, it is very hard to capture both in a single set of three assurance levels. Therefore, we will consider the target users separately. The framework is shown in Figure 1.

Decision Making Framework

Figure 1 - Bootstrapping / Unknown Device Authentication

As you can see, I use color coding to indicate acceptable methods of authentication. Green indicates my personal belief that the factor provides an acceptable level of assurance at that specific level for that usage scenario. Yellow means that I find the factor, or combination of factors, unsuitable. In some cases the reason is usability, but mostly it is because I believe that level of assurance cannot be achieved without stronger authentication factors. In general, at low assurance levels, single-factor authentication may be acceptable; at medium assurance levels, we need multi-factor authentication; at high assurance levels, we need hardware multi-factor authentication.
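
To make that general rule concrete, here is a minimal sketch of the kind of lookup the matrix encodes. The method names and their placement are illustrative assumptions only; they do not reproduce the actual spreadsheet behind Figure 1.

```typescript
// Illustrative policy lookup: which authentication methods are acceptable
// at each assurance level for the bootstrapping / unknown-device case.
// Method names and placements are assumptions for illustration, not a
// transcription of Figure 1.
type Assurance = "low" | "medium" | "high";

const acceptableMethods: Record<Assurance, string[]> = {
  low: ["sms-otp", "totp", "password+totp", "password+fido2"],
  medium: ["password+totp", "password+fido2", "fido2-with-user-verification"],
  high: ["password+hardware-fido2", "smartcard+pin"],
};

function isAcceptable(level: Assurance, method: string): boolean {
  return acceptableMethods[level].includes(method);
}
```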

These judgments are mine! You may have a completely different opinion about what provides an appropriate level of assurance, and if you ask me in a couple of weeks my opinion may have changed too. That’s fine; you get to make these decisions for yourself. If you would like to make your own decisions using this framework, the spreadsheet is available read-only for you to use or copy.

Note: if you need a high-contrast version of the spreadsheet, please contact me.

Personal Use

The left-most colorful section in Figure 1 relates to personal use. I won’t go through all the details here, but a few deserve mention. First, I do not consider a password alone to be sufficient for any unknown-device scenario. With well over a billion passwords available in password dumps to anyone with 1/25 of a bitcoin to rub together, there is simply no level of assurance that a password alone can provide today.

Second, for personal use in low-assurance scenarios, I do think phone-based methods are marginally acceptable. We discussed the security concerns surrounding the GSM cellular network earlier, but as I mentioned then, there are scenarios where they make sense. For certain low-assurance scenarios, the convenience and universality of these methods make them a transitional choice.

I find HOTP devices in general to have poor usability. In addition, the fact that an HOTP code does not expire is a poor security property given the alternatives available, most obviously TOTP. For all but the highest assurance requirements, TOTP is a reasonable choice; its lack of phishing resistance makes it unsuitable for high-assurance scenarios. This, however, assumes a proper implementation. One company I have to do business with has an awful TOTP implementation using a proprietary app and a code that must be appended to the password field. Shockingly, that is also the only 2FA option the company supports.
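
For reference, a proper TOTP verifier is simple to sketch. The code below follows RFC 6238 (HMAC-SHA-1 over a 30-second counter, six digits) with a small acceptance window; the parameter choices are assumptions, and a production implementation would also need rate limiting and replay protection.

```typescript
import { createHmac } from "node:crypto";

// RFC 4226 HOTP: truncated HMAC-SHA-1 over an 8-byte counter.
function hotp(secret: Buffer, counter: bigint, digits = 6): string {
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);
  const mac = createHmac("sha1", secret).update(msg).digest();
  const offset = mac[mac.length - 1] & 0x0f;
  const code =
    ((mac[offset] & 0x7f) << 24) |
    (mac[offset + 1] << 16) |
    (mac[offset + 2] << 8) |
    mac[offset + 3];
  return (code % 10 ** digits).toString().padStart(digits, "0");
}

// RFC 6238 TOTP: HOTP over a 30-second time-step counter. Accepting one
// step of clock skew on either side is a common (assumed) choice; codes
// outside that window have expired, unlike HOTP codes.
function verifyTotp(secret: Buffer, submitted: string, skewSteps = 1): boolean {
  const step = BigInt(Math.floor(Date.now() / 1000 / 30));
  for (let i = -skewSteps; i <= skewSteps; i++) {
    if (hotp(secret, step + BigInt(i)) === submitted) return true;
  }
  return false;
}
```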

This usage scenario is all about an unrecognized user on a new device. Using biometrics here would therefore require remote biometric authentication, which is unacceptable; biometrics should be local only. Likewise, because the device is unknown, there can be no secrets locked away in its hardware for a biometric to unlock, so that scenario is invalid as well.

U2F and FIDO2, however, are great choices for almost all assurance levels and in almost every combination. The only configuration I would consider inappropriate is a roaming authenticator used by itself as a single factor; that is appropriate for transaction validation, but not otherwise.

Enterprise Authentication

If you are building an authentication strategy for an enterprise, the scenarios are slightly different. First, I would not rely on cellular-network-based authenticators unless the service is owned, and the phone managed, by the company, with strict controls on which applications may be installed on it.

Second, while I think the usability of smart cards is too poor for consumers, they are acceptable in some enterprise scenarios. A password plus a smart card unlocked by a PIN - the most commonly deployed smart-card solution - is quite cumbersome and, in my opinion, unnecessary for most assurance requirements. For high-assurance systems, however, the use of two knowledge factors may be warranted.

Finally, for any enterprise scenario, I would not use FIDO2 or U2F with user presence only and no second factor.

Public-Facing Services

Up to now we have assumed you are making decisions for yourself or your internal end users. If you are building a public-facing service, you have to consider several additional factors in your decision making.

First, are you building a consumer-grade or an enterprise-grade service? Your service may need to support both, but the authentication solutions are slightly different for enterprise services. For our purposes here, this changes which matrix to look at.

Next, what assurance level is your service? Put another way: what assurance level would you want for your own data if it were protected by your service? This is probably something to explore in depth in another article, but the answer needs to inform a lot of design decisions in your service. If you are handling data that is under regulatory control, the regulations will require a basic level of assurance, but generally speaking, the authentication assurance required by regulations is not that high.

Based on the assurance level you settle on in the first step, you will have to decide which factors to support. There is actually a right and a wrong answer here. The right answer is to always support the factors that are universally appropriate. In other words, every service really ought to support FIDO2, and probably TOTP as well.
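
As a sketch of what supporting FIDO2 looks like in the browser, the WebAuthn registration call is shown below. The relying-party and user values are placeholders; in a real service the challenge and user handle come from the server, and the returned attestation is verified there as well.

```typescript
// Minimal WebAuthn (FIDO2) registration sketch. All identifiers and the
// challenge below are placeholders for illustration only.
async function registerSecurityKey(): Promise<Credential | null> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // server-generated in practice

  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Service", id: "example.com" },
      user: {
        id: new TextEncoder().encode("user-handle-123"), // placeholder
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: { userVerification: "preferred" },
      timeout: 60_000,
    },
  });
}
```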

Finally, what should your defaults be? Sadly, at least for consumer-grade services, your default probably cannot be to require FIDO2 authenticators. Adoption simply is not there yet, and you would probably turn away many people who want to give you money. This will hopefully change, and service providers have a crucial role to play in driving that adoption, but for now we need to look at other factors. Almost equally sadly, most services opt for a password-as-single-factor default. This is really unfortunate because it means a lot of customers, especially consumers, never migrate beyond that. The very least we should do is use an SMS code as the default second factor for a consumer-grade service, and strongly recommend using a password manager. Many service owners believe the friction of verifying a phone number or email address, or, even worse, setting up a TOTP generator, will drive customers away. I can’t speak to whether that is true, but I do know that not enforcing at least some kind of 2FA will drastically increase the rate of account compromise. At some point we will develop a proper way to account for the cost of account compromises to the customer and the company. Then we may see a broader push toward secure defaults.
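
As a hedged sketch, such defaults might look something like the configuration below. The field names and shape are invented for illustration and do not correspond to any particular product.

```typescript
// Illustrative defaults for a consumer-grade service, reflecting the
// recommendations above. All field names are invented for illustration.
const consumerAuthDefaults = {
  allowPasswordOnly: false,          // never rely on a password alone
  defaultSecondFactor: "sms-otp",    // lowest-friction fallback, not ideal
  offeredFactors: ["fido2", "totp", "sms-otp"],
  recommendPasswordManager: true,
};
```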

