A decade and a half ago I stood in front of a group of people talking about authentication. Mostly I was there to do a deep technical dive into Windows credential management and authentication, but as usual when I gave that presentation, someone just had to ask what I thought about biometrics.
Biometric authentication - something you are - is a bit of a misnomer. It's not authentication; it's identification. You are, quite literally, identifying yourself. You'll never see a police report say "law enforcement authenticated the criminal using fingerprints left at the scene." Identity is something you are or something you claim to be. Authentication is how you prove it. If your authentication claim is your identity, then you have identified yourself rather than authenticated yourself. However, when we use biometry - the statistical analysis of biological observations - the identification decision is not binary. The evaluation of a biometric identity claim is based on a confidence interval that specifies how confident we are that the currently observed biometric data matches the previously observed data. In an earlier article I mentioned that Apple claims TouchID has a False Acceptance Rate (FAR) of 1 in 50,000, the same rate Microsoft requires for Windows Hello. The FAR is a result of how narrow that confidence interval is. There is also a False Rejection Rate (FRR): how often the system incorrectly decides that the observed biometric data does not match the stored data. This too is based on a confidence interval, and those intervals are tuned to achieve an acceptable tradeoff between FAR and FRR. This is unique among authentication systems, and it happens because the system cannot store or compare biometric observations with 100% fidelity.
When you register to use biometric authentication the computer (or phone) does not store an image of your fingerprint, your face, or your retinas to compare to the ones you present when you try to authenticate. Direct image comparison is not a particularly efficient way to authenticate, although it plays well in the movies. Computers calculate a mathematical representation of (supposedly) unique physical characteristics and store that representation. When you authenticate, the computer calculates a new representation using the same algorithm and compares the two. This cannot be done with an exact match, as in every other authentication claim we have seen so far. Hopefully you will never look quite as awful as your driver's license photo, for example - or is that just me? Your face will never look exactly the same from day to day. Your fingerprints change slightly: your fingers may be sweaty, cold, wet, or dry. This is why the calculation uses confidence intervals. If the measurements are "close enough," it is considered a match; if they are too different, it is not.
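To make the "close enough" comparison concrete, here is a minimal sketch in Python. The four-number templates, the Euclidean distance measure, and the 0.1 threshold are all invented for illustration; real systems derive far richer representations from the sensor data and tune their thresholds carefully.

```python
import math

# Hypothetical 4-dimensional "templates" standing in for the mathematical
# representation a real system would derive from a fingerprint or face.
enrolled_template = [0.82, 0.14, 0.55, 0.31]

def distance(a, b):
    """Euclidean distance between two templates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matches(presented, enrolled, threshold=0.1):
    """A presented template matches if it is 'close enough' to the
    enrolled one; an exact comparison would almost always fail."""
    return distance(presented, enrolled) <= threshold

# The same finger scanned on a different day yields slightly different values.
todays_scan = [0.80, 0.16, 0.54, 0.33]
print(matches(todays_scan, enrolled_template))   # True: within tolerance

someone_else = [0.10, 0.90, 0.20, 0.75]
print(matches(someone_else, enrolled_template))  # False: too far away
```

Widening the threshold makes the first comparison more forgiving of day-to-day variation, but it also moves the second one closer to a false acceptance, which is precisely the tradeoff the next section describes.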
This is where FAR and FRR come into play. You want both a relatively low FAR and a relatively low FRR. It is not terribly difficult to achieve one of them, but achieving both is hard. There are sophisticated ways to improve the tradeoff. Fingerprints, for instance, have characteristics we can see with the naked eye: ridges, deltas between regions, and so on. They also have features that are harder to see, called minutiae. Improved scanning systems can observe more of these details, and improved algorithms can represent them more precisely. The actual science of biometry consists of some fascinating mathematics and is improving constantly. The National Institute of Standards and Technology (NIST) has had an ongoing program to drive improvements in fingerprint recognition for nearly two decades. While it is still reasonable to be skeptical about biometric authentication, the science has improved vastly in the past two decades, and the technology is now in relatively wide use. The systems used today probably provide a far more accurate match for most people than a human would get by looking at their passport photos.
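The tradeoff can be illustrated with a toy calculation. The similarity scores below are invented for illustration; the point is only that FAR and FRR move in opposite directions as the acceptance threshold changes.

```python
# Toy illustration of the FAR/FRR tradeoff: similarity scores (0..1) from
# genuine attempts and impostor attempts, evaluated at different acceptance
# thresholds. All score values are invented for illustration.
genuine_scores  = [0.91, 0.88, 0.95, 0.73, 0.86, 0.90, 0.68, 0.93]
impostor_scores = [0.12, 0.35, 0.71, 0.28, 0.45, 0.09, 0.66, 0.19]

def far(threshold):
    """False Acceptance Rate: impostors scoring at or above the threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def frr(threshold):
    """False Rejection Rate: genuine users scoring below the threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

for t in (0.5, 0.7, 0.9):
    print(f"threshold={t:.1f}  FAR={far(t):.2f}  FRR={frr(t):.2f}")

# A low threshold drives FRR down but FAR up; a high threshold does the
# opposite. Tuning means picking the point where the tradeoff is acceptable.
```

A real vendor does exactly this kind of tuning, just over millions of samples and a much more sophisticated scoring function.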
Biometric identification still poses a privacy concern. Each person needs to make their own decisions about privacy; however, privacy regulations, such as the EU GDPR, do cover biometric data. In my opinion, systems that rely on biometric authentication (as opposed to identification) should be local systems, and the biometric data should be locked into a one-way "vault" and never transmitted across any network connection. There are a variety of algorithms that create a one-way representation of the biometric data, which is then evaluated locally. This is how the built-in biometric authentication in Windows Hello, TouchID, FaceID, and Android works. However, in my opinion, biometrics should not be used to log into any website, nor am I interested in using them to gain access to my office.
As security or IT professionals, we have to choose which biometric authentication systems, if any, to support. There are many implementations, and it is not always easy to determine the quality of any given one. The FAR is not always clearly stated, or stated at all. In addition, the FAR only measures how often the system will consider a presented set of biometrics sufficiently similar to the registered ones when they should in fact be rejected. That is not quite the same as a measurement of how easy it would be to produce, say, a fake fingerprint. The more cooperative your "victim" is, the easier it is to produce a relatively high quality copy of their biometric characteristics. If they are cooperating, or can be made to cooperate, you can make a high quality imprint of their fingerprint using commonly available materials such as glue. You can then mount the imprint on a real finger, or on a fake one that looks real enough for the reader to accept it as a finger. Most of the research in biometrics concerns the accuracy with which a system can match a presented biometric against a database of stored ones. For example, between 2012 and 2014 NIST hosted the Fingerprint Vendor Technology Evaluation (FpVTE) 2012. The report makes it very clear that the evaluation was not intended to "evaluate scanners and acquisition devices," which is the crucial piece in detecting fake fingers. Modern biometric systems use liveness detection - blood flow, blinking or movement of the eyes, capacitive measurements, and other signals - to make the system harder to fool, but the success of those approaches is wrapped up in the FAR, the only metric we have for understanding the security of biometric authentication. The FAR is useful, but not exactly what we need.
The focus on accuracy in biometric research means that almost every biometric system has been shown to have flaws, and even where there are no known flaws yet, doubt is cast on the system. Less than 24 hours after Apple launched FaceID on September 12, 2017, TechCrunch called it into question, partially by showing how easy it was to defeat the facial recognition technology on an older Samsung device. Apple's implementation seems relatively solid, however. One of the hacks that received the most attention was that you could bypass the "Require attention for Face ID" feature by putting a pair of doctored glasses on a sleeping or deceased victim. The fact that this got so much play in the press probably speaks to the quality of the system. There was an early claim that an attacker could create a mask that fooled FaceID, but it does not seem to have been widely replicated.
A few years ago, before the iPhone 5s and the Samsung Galaxy S5 made fingerprint scanners on mobile phones a standard feature, it was quite convenient to keep a file with numbers in it, such as the PIN codes of your friends' phones. That way, if your friends left their phones lying around, say, at a bar or their desk at work, you could make sure nobody nefarious took them. You could also help your friends by making sure their Facebook accounts still worked and accepted new posts. Some people, with just a little bit of practice, are quite adept at shoulder surfing PIN codes or swipe patterns.
Practical jokes aside, it really isn't terribly difficult to shoulder surf a four-digit PIN, and with some training you only need to see a six-digit PIN once or twice to catch all the digits. If the system echoes the characters you type to the screen before obscuring them, or leaves them visible like Amazon's mobile login screen, it is laughably easy to shoulder-surf the password. The "swipe pattern" that was popular on Android was usually visible if you tilted the screen to the light. Fingerprint readers put a stop to this practice almost overnight. Many people point out, and they are not completely wrong, that you can manufacture a fake fingerprint, but, crucially, you cannot shoulder surf a fingerprint. When used in public, virtually any biometric has a crucial security advantage over PIN codes or passwords.
According to Apple, the average iPhone user unlocks their phone 80 times per day (https://appleinsider.com/articles/16/04/19/average-iphone-user-unlocks-device-80-times-per-day-89-use-touch-id-apple-says). Prior to the introduction of TouchID, at least 25% of iPhone users didn't have a PIN code at all. Another 15% or so used one of the 10 most common PIN codes (https://www.eweek.com/security/top-10-pin-codes-picked-by-iphone-users). That's over one third of users in total. In addition, the vast majority of people did not use the "Require passcode immediately" feature but set it to something like 15-30 minutes. This avoided having to type the code 80 times per day, but also left the phone unlocked while unattended far more often. Biometrics on phones changed this completely. In 2016, 85-89% of users with a TouchID-enabled device used the fingerprint reader to unlock their phones. Facial recognition makes biometrics more accurate and more convenient, and it is likely this number has gone up since 2016. It is now nearly seamless to hide notifications on the lock screen and have them show up only when the phone recognizes your face. Prior to biometrics many people, myself included, configured their phones to show notifications on the lock screen because unlocking the phone to see them was too much work. It should be beyond any doubt that well-implemented biometrics have drastically changed the security landscape on phones by making security more usable.
Most of us have been typing passwords on computers for so long that we don't really think about it any longer, but try to remember how many times per day you do it. Most people don't lock their computer when they leave it for a short time, because having to type the password, again, is a barrier to productivity. Seamless and well-implemented biometrics could change that, but they haven't yet. Far too many organizations disable biometric authentication because of a widespread, incorrect belief that a photo or a gummy bear can be used to bypass it. Instead, they require people to type complex passwords to unlock their computers, with the obvious result that computers do not get locked if users can help it! For example, here are nine tools that will keep your computer from locking. By the way, how many of the readers who are administrators look for and remove these across their fleet?
A very interesting academic study would be to determine for how many minutes per day the laptops these organizations issue are left unattended and unlocked at a coffee shop. How many fewer minutes would they be left unlocked if the computer locked itself after 30 seconds of inactivity, like your phone does? This is another one of those cases where the security team, meaning well, disables something they believe introduces a risk, without taking into account the risk they create by doing so. If you require people to type a 15-character password every time their computer locks, they will make really sure the computer doesn't lock very often - including when it should. Biometrics have successfully enhanced security on mobile devices because they are convenient. The technology is there for them to do the same on our computers, if only InfoSec and IT would deploy it. By the way, in case your security team says regulation won't allow it, here is section 8.2 from PCI DSS 3.2.1:
8.2 In addition to assigning a unique ID, ensure proper user-authentication management for non-consumer users and administrators on all system components by employing at least one of the following methods to authenticate all users:
Something you know, such as a password or passphrase
Something you have, such as a token device or smart card
Something you are, such as a biometric
Are biometrics more secure than passcodes and passwords in an ideal scenario? No. They are not more secure than passcodes and passwords could be, but they are very likely more secure than the majority of in-use passwords and passcodes actually are. The reason has to do with human behavior, a factor we routinely ignore.
As I mentioned in an earlier article, Windows Hello can be deployed with a PIN code alongside the biometric. This sacrifices some convenience, but it does harden the technology for scenarios that need a higher level of assurance. Even a short PIN code may be acceptable, because the PIN code and the biometric together are not meant to stop the attacker from getting into the device indefinitely. They are only meant to slow the attacker down so you can detect and neutralize the threat. In a truly stateless scenario, like what we use on our phones, the device can wipe itself after a certain number of failed unlock attempts. This provides very strong security even with a relatively weak PIN code. However, it also requires a solid backup strategy and a technology strategy that treats computing devices as stateless - one where the data resides in the cloud and not on endpoints. In this strategy lies a promise of very usable and robust security, even when based on technologies that are not as strong as, say, public key cryptography. Biometric authentication can be a key component of this type of strategy.
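The wipe-after-N-failures policy can be sketched as follows. The class, the PIN, and the limit of 10 are illustrative, not any vendor's actual implementation, though iOS offers an optional wipe after 10 failed passcodes.

```python
# Sketch of the "stateless device" unlock policy: a weak PIN can still
# provide strong protection if the device erases itself after a small
# number of failed attempts. All names and limits here are illustrative.
class DeviceLock:
    MAX_FAILURES = 10  # comparable to iOS's optional 10-attempt wipe

    def __init__(self, pin):
        self._pin = pin
        self._failures = 0
        self.wiped = False

    def try_unlock(self, attempt):
        if self.wiped:
            return False  # nothing left to unlock
        if attempt == self._pin:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self.MAX_FAILURES:
            self.wiped = True  # stand-in for erasing local keys and data
        return False

lock = DeviceLock("4071")
# A brute-force attacker gets at most MAX_FAILURES guesses before the
# device destroys its own data: 10 guesses against 10,000 possible
# four-digit PINs is a 0.1% chance of success.
for guess in ("0000", "1111", "1234"):
    lock.try_unlock(guess)
print(lock.wiped)  # False: only 3 failures so far
```

The security here comes from the policy, not the PIN's entropy, which is why the strategy only works when the device really is stateless and the data lives elsewhere.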
Biometrics also have a number of disadvantages. Perhaps the most important is that it is not easy to revoke a biometric authenticator in the traditional sense of the word. If you lose your FIDO security key, you can buy a new one. You can't really buy a new fingerprint, at least not cheaply. The real problem, though, is not losing the biometric but having it compromised: if someone can produce a synthetic fingerprint, they can authenticate as you, and there is very little you can do to stop them. The best you can hope for is that the technology improves enough to detect subtle differences between the real thing and the fake. Over the past 20 years, the technology has improved drastically. When I received my first fingerprint reader, in the early 2000s, readers used optical sensors that I could fool by slightly warming up a gummy bear, imprinting my fingerprint on it, and using that on the reader. Today's readers incorporate capacitive sensors, liveness detection, and other countermeasures that increase the difficulty of creating synthetic biometric claims. However, the technology is not foolproof, and the best technology is also expensive. For this reason, I strongly recommend against using biometric systems for direct authentication to remote resources. Biometric authentication should be limited to local authentication, which may unlock cryptographic credentials that are in turn used to access remote resources. A biometric claim should not itself be the authenticator for a remote resource. The biometric system must also compute the stored representation such that it cannot be reversed, and cannot be used to authenticate on any other system if it is leaked. Used this way, the credentials are also relatively easy to revoke.
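The local-unlock architecture can be sketched roughly as follows. Everything here is an assumption for illustration: the class names, the HMAC-based challenge/response, all of it. Real systems such as FIDO2/WebAuthn use asymmetric key pairs held in a secure element, so the server stores only a public key; HMAC is used below purely to keep the sketch short.

```python
# Sketch of the recommended architecture: the biometric match happens
# locally and merely releases a device-bound cryptographic key; the remote
# service only ever sees a challenge response, never biometric data.
# All names and the HMAC scheme are illustrative, not any vendor's protocol.
import hashlib
import hmac
import os

class LocalKeyVault:
    """Stands in for a secure enclave / TPM holding a device-bound secret."""

    def __init__(self):
        self._device_key = os.urandom(32)  # never leaves the device

    def sign_challenge(self, challenge, biometric_matched):
        # The key is only usable after a successful local biometric match.
        if not biometric_matched:
            raise PermissionError("biometric verification required")
        return hmac.new(self._device_key, challenge, hashlib.sha256).digest()

    def shared_verifier(self):
        # With HMAC the verifier shares the secret; a real design would use
        # an asymmetric key pair so the server never holds anything secret.
        return self._device_key

vault = LocalKeyVault()
challenge = os.urandom(16)  # nonce sent by the remote service
response = vault.sign_challenge(challenge, biometric_matched=True)

# The remote service verifies the response; it learns nothing biometric.
expected = hmac.new(vault.shared_verifier(), challenge, hashlib.sha256).digest()
print(hmac.compare_digest(response, expected))  # True: device authenticated
```

Revocation then means revoking the device key, not the fingerprint, which is exactly why this pattern sidesteps the "you can't buy a new fingerprint" problem.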
As mentioned several times above, there is a clear privacy implication to biometric authentication. While your email address provides hints as to who you are, a biometric measurement literally is something you are. You cannot claim any level of anonymity in a system that relies on biometric authentication. This, too, speaks to using biometrics solely for local authentication, although many privacy advocates do not even trust that.
However, the biggest disadvantage of biometric authentication is that most implementations have some way to bypass their use. About 15 years ago, an authentication system that measured your typing cadence as you typed your password became briefly popular. It was usually implemented using an Adobe Flash object. The technology was fatally flawed in numerous ways, including that, to protect my bank account, I was told to install a piece of software with more security holes in it than, well, just about anything. The Flash object measured your typing cadence, and if it was too different from the cadence when you created your password, it rejected the login.
However, not everyone can actually type their password, nor can everyone run Flash (iPhones, for instance, could not). To accommodate people who used assistive devices, people who had broken a hand, and so on, the system had a separate page where you could enter your password without the cadence being measured. I never understood why nobody in any organization that implemented this system asked the vendor, "but why would the criminals not just go to the Flash-less page to do their credential stuffing?" Apparently nobody ever asked, and the vendor kept selling this "solution." It was a great example of a biometric system with a backdoor that bypassed the security the system was intended to offer. Likewise, you can bypass FaceID on around 5% of iPhones by typing "1234" on the passcode keypad. With strong biometric systems, the weakest point may be that you can avoid using them entirely and attack the human weaknesses that have been there all along - and are still there.
When I was asked that question all those years ago, my answer was that biometrics cannot be revoked and that using them for authentication across a network is therefore a bad idea. Today, however, the common implementations - Apple FaceID and TouchID, Windows Hello, and Android's biometric unlocking - do not transmit anything across a network. They use biometric authentication only to unlock the device and then to unlock cryptographic key material stored in a special-purpose security device. The stored representations cannot be reversed into a fingerprint or a face. This completely changes the game, and my answer.
Therefore, in spite of the privacy issues and the lack of good metrics to aid decision making, I believe most people and organizations can derive significant security value from biometric authentication. If a lost device can be wiped remotely, the vast majority of users will be far more secure using biometric authentication plus a reasonable passcode (six digits minimum) with some complexity. Biometrics cannot be shoulder surfed, and their convenience means people can tolerate a slightly stronger passcode. Likewise, I believe the majority of enterprises and home users can use stronger passwords if they don't have to be typed very often, and that computers will be much more likely to be locked when left unattended if they can be unlocked with a glance. Pragmatically speaking, most people will be more secure using biometrics than they would be if we required them to use a theoretically more secure system based on complex and strong shared secrets. As security professionals, we need to recognize that we cannot ignore the humans who use our systems, and we should help them help us protect those systems.
Image by <a href="https://pixabay.com/users/TheDigitalWay-3008341/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=1590455">TheDigitalWay</a> from <a href="https://pixabay.com/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=1590455">Pixabay</a>