How Hype Will Turn Your Security Key Into Junk
In the last few months there has been a lot of hype about “passkeys” and how they are going to change authentication forever. But that hype will come at a cost.
Obsession with passkeys is about to turn your security keys (Yubikeys, Feitian, Nitrokeys, …) into obsolete and useless junk.
It all comes down to one thing - resident keys.
What is a Resident Key?
To understand the problem, we need to understand what a discoverable/resident key is.
You have probably seen that most keys support an ‘unlimited’ number of accounts. This is achieved by sending a “key wrapped key” to the security key. When the Relying Party (the authentication server) wants to authenticate your security key, it provides a “credential ID”. That credential ID is an encrypted blob that only your security key can decrypt. If your security key can decrypt that blob, it yields a private key specific to that single RP, which is then used for signatures.
┌────────────────┐           ┌────────────────┐           ┌────────────────┐
│ Relying Party  │           │    Browser     │           │  Security Key  │
└────────────────┘           └────────────────┘           └────────────────┘
        │                            │                            │
        │ 1. Send Credential IDs     │                            │
        │───────────────────────────▶│                            │
        │                            │ 2. Forward to Security Key │
        │                            │───────────────────────────▶│
        │                            │                            │ 3. Decrypt Credential ID
        │                            │                            │    with Master Key
        │                            │                            │ 4. Sign Challenge with
        │                            │                            │    Decrypted Key
        │                            │ 5. Return Signature        │
        │                            │◀───────────────────────────│
        │ 6. Return Signature        │                            │
        │◀───────────────────────────│                            │
        │                            │                            │
This is what is called a non-resident or non-discoverable credential. The name reflects that the private key can only be recovered when the Credential ID is provided to the security key externally - the private keys are not resident inside the secure enclave; only the master key is.
In contrast, a resident key (or discoverable credential) is one where the private key is stored in the security key itself. This allows the security key to discover (hence the name) which private keys might be used in the authentication.
┌────────────────┐           ┌────────────────┐           ┌────────────────┐
│ Relying Party  │           │    Browser     │           │  Security Key  │
└────────────────┘           └────────────────┘           └────────────────┘
        │                            │                            │
        │ 1. Send Empty CredID list  │                            │
        │───────────────────────────▶│                            │
        │                            │ 2. Query Security Key      │
        │                            │───────────────────────────▶│
        │                            │                            │ 3. Discover Keys for RP
        │                            │ 4. Select a Security Key   │
        │                            │◀───────────────────────────│
        │                            │───────────────────────────▶│
        │                            │                            │ 5. Sign Challenge with
        │                            │                            │    Resident Key
        │                            │ 6. Return Signature        │
        │                            │◀───────────────────────────│
        │ 7. Return Signature        │                            │
        │◀───────────────────────────│                            │
        │                            │                            │
Now, the primary difference here is that resident/discoverable keys consume storage space on the security key, since they need to persist on the device - there is no credential ID that we can decrypt with our master key!
Are non-resident keys less secure?
A frequent question here is whether non-resident keys are less secure than resident ones. Credential IDs as key wrapped keys are secure, since they are encrypted with AES-128 and protected by an HMAC. This prevents them being tampered with or decrypted by an external source. If AES-128 were broken and someone could decrypt your private key in the absence of your security key, they could probably also break TLS encryption, attack ssh and do much worse. Your key wrapped keys rely on the same security features that TLS relies on.
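To make the key wrapping concrete, here is a minimal sketch of the technique in TypeScript (Node.js). This is purely illustrative - it is not any vendor's actual on-key format, and the key and cipher choices here are assumptions:

// Sketch only: illustrates the key-wrapped-key concept, not a real CTAP implementation.
import { createCipheriv, createDecipheriv, createHmac, randomBytes, timingSafeEqual } from "node:crypto";

// The security key holds master secrets that never leave the device.
const masterKey = randomBytes(16); // AES-128 encryption key
const macKey = randomBytes(16);    // HMAC key

// "Registration": wrap a freshly generated per-RP private key into a credential ID.
function wrapKey(rpPrivateKey: Buffer): Buffer {
  const iv = randomBytes(16);
  const cipher = createCipheriv("aes-128-cbc", masterKey, iv);
  const ciphertext = Buffer.concat([cipher.update(rpPrivateKey), cipher.final()]);
  const tag = createHmac("sha256", macKey).update(Buffer.concat([iv, ciphertext])).digest();
  // The credential ID that the RP stores: iv || ciphertext || hmac tag.
  return Buffer.concat([iv, ciphertext, tag]);
}

// "Authentication": the RP hands the credential ID back; only this token can unwrap it.
function unwrapKey(credentialId: Buffer): Buffer {
  const iv = credentialId.subarray(0, 16);
  const tag = credentialId.subarray(credentialId.length - 32);
  const ciphertext = credentialId.subarray(16, credentialId.length - 32);
  const expected = createHmac("sha256", macKey).update(Buffer.concat([iv, ciphertext])).digest();
  if (!timingSafeEqual(tag, expected)) throw new Error("credential ID was tampered with");
  const decipher = createDecipheriv("aes-128-cbc", masterKey, iv);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}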
Resident Keys and Your Security Key
Now that we know what a resident key is, we can look at how these work with your security keys.
Since resident keys store their key material on the device, they consume space inside the security key. Generally, resident key slots are at a premium on a security key. Nitrokeys, for example, only support 8 resident keys. Yubikeys generally support between 20 and 32. Some keys support no resident keys at all.
The other problem is what CTAP standard your key implements. There are three versions - CTAP2.0, CTAP2.1PRE and CTAP2.1.
In CTAP2.1 (the latest) you can individually manage, update and delete specific resident keys from your device.
In CTAP2.0 and CTAP2.1PRE, however, you cannot. You cannot delete a resident key without resetting the whole device. Resetting the device also resets your master key, meaning all your non-resident keys will no longer work either. This makes resident keys on a CTAP2.0 or CTAP2.1PRE device a serious commitment. You really don’t want to accidentally fill up that limited space you have!
In most cases, your key is very likely to be CTAP2.0 or CTAP2.1PRE.
So Why Are Resident Keys a Problem?
On their own, and used carefully, resident keys are great for certain applications. The problem is the hype and obsession with passkeys.
In 2022 Apple announced their passkeys feature on MacOS/iOS, allowing the use of touchid/faceid as a webauthn authenticator, similar to your security key. Probably quite wisely, rather than calling them “touchid” or “credentials” or “authenticators”, Apple chose a nicer name for users. Honestly, passkeys is a good name, rather than “webauthn authenticator” or “security key”. It evokes a similar concept to passwords, which people are highly accustomed to, while the ‘key’ part is different enough to indicate that it operates in a different way.
The problem (from an external view) is that “passkey” was a branding or naming term for something - but overnight, authentication thought leaders needed to be on the hype. “What is a passkey?”. Since Apple didn’t actually define it, this left a void for our thought leaders to answer that question for users hungry to know “what indeed is a passkey?”.
As the creator of a relying party and of the webauthn library for Rust, we defined passkeys as the name for “all possible authenticators” that a person may choose to use. We wanted to support the goal of removing and eliminating passwords, and passkeys is a nice name for this.
Soon after that, some community members took to using “passkeys” to mean “credentials that are synchronised between multiple devices”. This definition is, at the least, not harmful, even if it doesn’t express that there are many possible types of authenticators that can be used.
Some months later, a person took the stage at FIDO’s Authenticate conference and announced “a passkey is a resident key”. Because of the scale and size of the platform, this definition has now stuck. It has become so invasive that even FIDO now uses it as their definition.
Part of the reason this definition is hyped is that it works with an upcoming browser feature that allows autocomplete of a username and webauthn credential if the key is resident. You don’t have to type your username. This now means that we have webauthn libraries pushing for resident keys as a requirement for all registrations, and many people will follow this advice without seeing the problem.
The problem is that security keys, with their finite storage and lack of credential management, will fill up rapidly. In my password manager I have more than 150 stored passwords. If all of these were to become resident keys, I would need to buy at least 5 yubikeys to store all the accounts, and then another 5-10 as “backups”. I really don’t want to have to juggle and maintain 10 to 15 yubikeys …
This is an awful user experience, to put it mildly. People who choose to use security keys now won’t be able to, due to passkeys’ resident key requirements.
To add further insult, an expressed goal of the Webauthn Working Group is that users should always be free to choose any authenticator they wish without penalty. Passkeys forcing key residency flies directly in the face of this.
This leaves few authenticator types which will work properly in this passkey world: Apple’s own passkeys, Android passkeys, password managers that support webauthn, Windows with TPM 2.0, and Chromium based browsers on MacOS (because of how they use touchid as a TPM).
What Can Be Done?
Submit to the Webauthn WG / Browsers to change rk=preferred to exclude security keys
Rather than passkeys being resident keys, passkeys could be expanded to mean all possible authenticators, where some subset opportunistically are resident. This puts passwordless front and center, with residency as a bonus UI/UX improvement for those who opt to use devices that support unlimited resident keys.
Currently there are three levels of request an RP can make to request resident keys. Discouraged, Preferred and Required. Here is what happens with different authenticator types when you submit each level.
                     ┌────────────────────┬────────────────────┬────────────────────┐
                     │      Roaming       │      Platform      │      Platform      │
                     │   Authenticator    │   Authenticator    │   Authenticator    │
                     │     (Yubikey)      │(Android Behaviour) │  (iOS Behaviour)   │
┌────────────────────┼────────────────────┼────────────────────┼────────────────────┤
│   rk=discouraged   │      RK false      │      RK false      │      RK true       │
├────────────────────┼────────────────────┼────────────────────┼────────────────────┤
│   rk=preferred     │    RK true (!)     │      RK true       │      RK true       │
├────────────────────┼────────────────────┼────────────────────┼────────────────────┤
│   rk=required      │      RK true       │      RK true       │      RK true       │
└────────────────────┴────────────────────┴────────────────────┴────────────────────┘
Rather than passkeys setting rk=required, if rk=preferred were softened so that preferred meant “create a resident key only if storage is unlimited”, then we would have a situation where Android/iOS would always get resident keys, and security keys would not have their space consumed.
However, so far the WG is resistant to this change. It is not out of the question that browsers could implement this change externally, but in reality that would be down to the Chrome team to decide.
Insist on your Passkey library setting rk=discouraged
Rather than rk=required, which excludes security keys, rk=discouraged is the next best thing. Yes, it means that Android users won’t get conditional UI. But which do we prefer: that some people have to type a username (which already has provisions to autocomplete anyway), or that we damage and exclude security keys completely?
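For illustration, this is roughly what such a registration request looks like in the browser's Webauthn API. The rp, user and algorithm values are placeholders:

// Sketch of a registration request that discourages resident key creation.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally issued by the RP
    rp: { id: "idm.example.com", name: "Example IDM" },
    user: {
      id: new TextEncoder().encode("william"),
      name: "william",
      displayName: "William",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
      residentKey: "discouraged", // don't consume limited on-key storage
      requireResidentKey: false,  // the older (level 1) spelling of the same request
    },
  },
});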
Contact FIDO and request RK storage as a certification feature
Currently FIDO doesn’t mandate any storage requirements for certified devices. Given that FIDO also seem to want resident keys, they should mandate that certified devices have the ability to store thousands of resident keys. That way, as a consumer, you can pick certified devices with confidence.
Something Else?
If you have other ideas on how to improve this let me know!
Conclusion
The hype around passkeys being resident keys will prevent - or severely hinder - users of security keys from choosing the authenticator they want to use online in the future.
Why are PBKDF2-SHA256 and PBKDF2_SHA256 different in 389-ds?
In a recent mailing list discussion, the question came up of which password hash format you should use in 389-ds. Confusingly, we have two PBKDF2 SHA256 implementations, which is the result of a bit of history.
Too Lazy, Didn’t Read
Use PBKDF2-SHA256. (hyphen, not underscore).
What’s PBKDF2 anyway?
Passwords are a shared-knowledge secret: knowledge of the password allows you to authenticate as the person. When we store that secret, we don’t want it stored in a form where a person can steal and use it. This is why we don’t store passwords in cleartext - a rogue admin or a database breach would leak your passwords (and people do love to re-use their passwords across many websites …)
Because of this, authentication experts recommend hashing your password. A one-way hash function, given an input, will always produce the same output, but given the hash output you cannot derive the input.
However, this also isn’t the full story. Simply hashing your password isn’t enough, because people have found many other attacks. These include things like rainbow tables, which are a compressed and precomputed “lookup” from hash outputs to their inputs. You can also bruteforce dictionaries of common passwords to see if any match. All of these attacks use the attacker’s CPU to generate the tables or bruteforce the passwords.
Most hashes, though, are designed to be fast, and in many cases your CPU has hardware to accelerate them further. All this means is that if you use a plain verification hash for storing passwords, an attacker can attack your stored passwords even faster.
To combat this, what authentication experts truly recommend is a key derivation function. A key derivation function is similar to a hash in that an input always yields the same output, but a KDF is also intentionally resource consuming - of RAM or CPU time, for example. The intent is that an attacker bruteforcing your KDF-hashed passwords must expend a large amount of CPU time and resources, while producing far fewer results.
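To illustrate the “deliberately expensive” property, here is a small sketch using the Web Crypto API (available in browsers and recent Node.js). The iteration count is an illustrative assumption, not the parameter 389-ds uses:

// Sketch: derive a password verification hash with PBKDF2-SHA256 via Web Crypto.
// The salt must be random per password and stored alongside the derived output.
async function pbkdf2Sha256(password: string, salt: Uint8Array): Promise<ArrayBuffer> {
  const keyMaterial = await crypto.subtle.importKey(
    "raw",
    new TextEncoder().encode(password),
    "PBKDF2",
    false,
    ["deriveBits"],
  );
  return crypto.subtle.deriveBits(
    { name: "PBKDF2", hash: "SHA-256", salt, iterations: 600_000 }, // cost factor: tune to your hardware
    keyMaterial,
    256, // output length in bits
  );
}

const salt = crypto.getRandomValues(new Uint8Array(16));
const derived = await pbkdf2Sha256("correct horse battery staple", salt);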
Conclusion
Use PBKDF2-SHA256.
- It’s written in Rust.
- It meets NIST SP800-63b recommendations.
Why Decentralised ID Won’t Work
Thanks to a number of high profile and damaging security incidents in Australia, people have once again been discussing Decentralised ID (DID). As someone who has spent most of my career working on identity management, I’m here to tell you why it will not work.
What Is Decentralised ID Trying To Do?
To understand what DID is trying to achieve we have to look at what a “centralised” system is doing.
Let’s consider an account holder like Google. You create an account with them, and you store your name and some personal data, as well as a method of authentication, such as a password and OTP, or Webauthn.
Now you go to some other website and it says “login with Google”. That site redirects to Google, who authenticates you, and then the website trusts Google to say “yes or no” that “you are who you say you are”. You can consent to this website seeing details about you like an email address or name.
A decentralised system works differently. You present a signed metadata statement about yourself to the website, and that cryptographic signature can be traced back to your signing private key. This cryptographic proof attests that you are the profile/account holder.
What Does DID Claim To Achieve?
- That you are the only authority who can modify your own identity and data.
- You control who can access (view) that data.
- Cryptographic verification that an identity is who they claim to be.
This Will Never Work
No Consideration Of Human Behaviour
DID systems do not consider human behaviour in their design.
I cannot put it better than Don Norman, in his paper “The Truth about Unix”:
System designers take note. Design the system for the person, not for the computer, not even for yourself. People are also information processing systems, with varying degrees of knowledge, varying degrees of experience. Friendly systems treat users as intelligent adults who, like normal adults, are forgetful, distracted, thinking of other things, and not quite as knowledgeable about the world as they themselves would like to be.
People are not “stupid”. They are distracted and busy. This means they will lose their keys. They will be affected by events out of their control.
In a centralised system there are ways to recover your account when you lose your password/keys. There are systems to verify you, and help you restore from damage.
In a DID system, if you lose your key, you lose everything. There is no recovery process.
GPG Already Failed
DID is effectively a modern rehash of GPG - including its problems. Many others have lamented about this at length. These people have spent their lives studying cryptographic systems, and they have given up on it. Pretty much every issue they report applies to DID and all its offshoots.
Long Term Keys
One of the biggest issues in DID is that the identity is rooted in a private key that is held by an individual. This encourages long-term keys, which have a large blast radius (complete takeover of your identity). This creates dramatic failure modes. Further, it prevents improvement of the cryptographic quality of the key. When I started in IT, RSA 1024 bit was secure. Now it’s not. Keys need to be short lived and disposable.
You Won’t Own Your Own Data
When you send a DID signed document to a provider - let’s say your bank, to open a new account - what do you think they will do with that data?
They won’t destroy it and ask you for it every time you need it. They will store a copy on their servers for their records. There are often extremely good reasons they need to store that data as well.
Which means that your signed document of data is performative, and the data will just be used and extracted as usual.
DID does not solve the problem of data locality or retention. Regulation and oversight does.
Trust Is A Social Problem
You can’t solve social problems with technology.
The whole point of DID is about solving trust. Who do you trust to store (modify) or view your personal information?
In a DID world, you need to be “your own personal central data authority” (because apparently you can’t trust anyone else). That means you need to store your data, protect it from destruction and secure it from compromise.
In the current world, for all of Google’s (and many other companies’) flaws, they still have dedicated security teams, specialists in risk analysis, and people who have dedicated themselves to protecting your accounts and your data.
The problem is that most software engineers fall into the fallacy that because they are an expert in their subject matter, they are now an expert on identity and computer security. They are likely not security experts (the same people love to mansplain authentication to me frequently, and generally this only serves to inform me that they actually do not understand authentication).
Why should anyone trust your DID account, when you likely have no data hygiene and insecure key storage? Why should a Bank? Why should Google? Your workplace? No one has a reason to trust you and your signatures.
Yes there are problems with centralised identity systems - but DID does not address them, and actually may make them significantly worse.
Your Verification Mark Means Nothing
Some DID sites claim things like “being able to prove ownership of an account”.
How does this proof work? Can people outside of the DID community explain these proofs? Can your accountant? Your taxi driver?
What this will boil down to is a green tick that people will trust. It doesn’t take a lot of expertise to realise that this tick can be faked pretty easily, since in the site’s source code it is simply a boolean check.
These verification marks come back to “trust”, which DID does not solve. You need to trust the site you are viewing to render things in a certain way, the same way you have to trust them not to go and impersonate you.
Even if you made a DID private key with ED25519 and signed some toots, Mastodon instance owners could still impersonate you if they wanted.
And to further this, how is the average person expected to verify your signatures? HTTPS has already shown that the majority of the public does not have the specific in-depth knowledge to assess the legitimacy of a certificate authority. How are we expecting people to now verify every other person as their own CA?
The concept of web of trust is a performative act.
Even XKCD nailed this.
Conclusion
DID won’t work.
There are certainly issues with central authorities, and DID solves none of them.
It is similar to bootstrapping compilers. It is a problem that is easy to articulate and emotionally catchy, and it requires widespread, boring, social solutions - but tech bros try unendingly to solve it with hyper-complex technical solutions that won’t work.
You’re better off just adding FIDO2 keys to your accounts and moving on.
Where to start with linux authentication?
Recently I was asked about where someone could learn how linux authentication works as a “big picture” and how all the parts communicate. There aren’t too many great resources on this sadly, so I’ve decided to write this up.
Who … are you?
The first component in linux identity is NSS or nsswitch (not to be confused with NSS the cryptography library …). nsswitch (name service switch) is exposed by glibc as a method to resolve uid/gid numbers and names, and to then access details of the account. nsswitch can have “modules” that are stacked, where the first module with an answer provides the response.
An example of nsswitch.conf is:
passwd: compat sss
group: compat sss
shadow: compat sss
hosts: files mdns dns
networks: files dns
services: files usrfiles
protocols: files usrfiles
rpc: files usrfiles
ethers: files
netmasks: files
netgroup: files nis
publickey: files
bootparams: files
automount: files nis
aliases: files
This is of the format “service: module module …”. An example: when a program calls “gethostbyname” (a dns lookup), it accesses the “hosts” service, which resolves via files (/etc/hosts), then mdns (aka avahi, bonjour), and then dns.
The three lines that matter for identities, though, are passwd, group, and shadow. Most commonly you will use the files module, which uses /etc/passwd and /etc/shadow to satisfy requests. The compat module is identical but allows some extra syntaxes for NIS compatibility. Another common module in nsswitch is sss, which accesses the System Security Services Daemon (SSSD). For my own IDM projects we use the kanidm nsswitch module.
You can test these with calls to getent to see how nsswitch is resolving some identity, for example:
# getent passwd william
william:x:654401105:654401105:William:/home/william:/bin/zsh
# getent passwd 654401105
william:x:654401105:654401105:William:/home/william:/bin/zsh
# getent group william
william:x:654401105:william
# getent group 654401105
william:x:654401105:william
Notice that both the uid (name) and uidnumber work to resolve the identity.
These modules are dynamic libraries, and you can find them with:
# ls -al /usr/lib[64]/libnss_*
When a process wishes to resolve something with nsswitch, the calling process (for example apache) calls into glibc, which then loads these dylibs at runtime and calls them. This is often why the addition of new nsswitch modules in a distro is guarded and audited - these modules can end up in every process’s memory space! This also has security impacts, as every module, and by inheritance every process, may need access to /etc/passwd or to the network to resolve identities. Some modules improve this situation, like sss, and we will give that its own section of this blog.
Prove yourself!
If nsswitch answers “who are you”, then pam (pluggable authentication modules) is “prove yourself”. It’s what actually checks whether your credentials are valid and whether you can log in or not. Pam works by having “services” that contact (you guessed it) modules. Most linux distros have a folder (/etc/pam.d/) which contains all the service definitions (there is a subtly different syntax in /etc/pam.conf, which is not often used on linux). So let’s consider what happens when you ssh to a machine. ssh contacts pam and says “I am the ssh service, can you please authorise this identity for me”.
Because this is the “ssh service”, pam will open the matching config, /etc/pam.d/SERVICE_NAME - in this case /etc/pam.d/ssh. This example is taken from Fedora, because Fedora and RHEL are very common distributions. Every distribution has its own “tweaks” and variants of these files, which certainly helps to make the landscape even more confusing.
# cat /etc/pam.d/ssh
#%PAM-1.0
auth include system-auth
account include system-auth
password include system-auth
session optional pam_keyinit.so revoke
session required pam_limits.so
session include system-auth
Note the “include” line that is repeated four times for auth, account, password and session. These include system-auth, so lets look at that.
# cat /etc/pam.d/system-auth
auth required pam_env.so
auth required pam_faildelay.so delay=2000000
auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth [default=1 ignore=ignore success=ok] pam_localuser.so
auth sufficient pam_unix.so nullok
auth [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth sufficient pam_sss.so forward_pass
auth required pam_deny.so
account required pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_usertype.so issystem
account [default=bad success=ok user_unknown=ignore] pam_sss.so
account required pam_permit.so
session optional pam_keyinit.so revoke
session required pam_limits.so
-session optional pam_systemd.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so
session optional pam_sss.so
password requisite pam_pwquality.so local_users_only
password sufficient pam_unix.so yescrypt shadow nullok use_authtok
password sufficient pam_sss.so use_authtok
password required pam_deny.so
So, first we are in the “auth phase”. This is where pam will check the auth modules for your username and password (or other forms of authentication) until a success is returned. We start at pam_env.so, which “passes but isn’t finished”, so we go to pam_faildelay.so, and so on. Each of these modules is consulted in turn, with the result of the module and the “rule” (required, sufficient or custom) being smooshed together to create “success and we are complete”, “success but keep going”, “fail but keep going” or “fail and we are complete”. In this example, the only modules that can actually authenticate a user are pam_unix.so and pam_sss.so, and if neither of them provides a “success and complete”, then pam_deny.so is hit, which always yields a “fail and complete”. This phase, however, has only verified your credentials.
The second phase is the “account phase”, which really should be called “authorisation”. The modules are checked once again, to determine if they will allow your user account to access this system. Similar rules apply, where each module’s result and the rules of the config combine to create a success/fail and continue/complete result.
The third phase is the “session phase”. Each pam module can influence and setup things into the newly spawned session of the user. An example here is you can see pam_limits.so which is what applies cpu/memory/filedescriptor limits to the created shell session.
The fourth phase is “password”. This isn’t actually used in the authentication process - this stack is called when you issue the “passwd” command to update the user’s password. Each module is consulted in turn for knowledge of the account, and whether it is able to alter the credentials. If this fails you will receive a generic “authentication token manipulation error”, which really just means “some module in the stack failed, but we won’t tell you which”.
Again, these modules are all dylibs and can be found commonly in /usr/lib64/security/. Just like nsswitch, applications that use pam are linked to libpam.so, which in turn will load modules from /usr/lib64/security/ at runtime. Given that /etc/shadow is root-read-only, and anything that wants to verify passwords needs to … read this file, this generally means that any pam module is effectively running in root memory space on any system. Once again, this is why distributions carefully audit and control which packages can supply a pam module, given the high level of access these require. And once again, because of how pam modules work, the process will generally need network access to call out to external identity services, depending on the pam modules in use.
What about that network auth?
Now that we’ve covered the foundations of how processes and daemons find the details of a user and verify their credentials, let’s look at SSSD, which is a specific implementation of an identity resolving daemon.
As mentioned, both nsswitch and pam have the limitation that the dylibs run in the context of the calling application. In the past, with modules like pam_ldap.so, this meant running in the process space of root applications, requiring network access and having to parse asn.1 (a library commonly used for remote code execution that sometimes has the side effect of encoding and decoding binary structures).
┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐
│   root: uid 0     │
│  ┌─────────────┐  │           ┌─────────────┐
│  │             │  │           │             │
│  │    SSHD     │──┼──────────▶│    LDAP     │
│  │             │  │           │             │
│  └─────────────┘  │           └─────────────┘
└ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘               Network
SSSD changes this by having a daemon running locally which can be accessed via a unix socket. This allows the pam and nsswitch modules to be thin veneers with minimal functionality and surface area, which then contact an isolated daemon that does the majority of the work. This has a ton of security benefits, not least reducing the need for the root process to decode untrusted input from the network.
┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐     ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐
│   root: uid 0     │     │   sssd: uid 123   │
│  ┌─────────────┐  │     │  ┌─────────────┐  │           ┌─────────────┐
│  │             │  │     │  │             │  │           │             │
│  │    SSHD     │──┼─────┼─▶│    SSSD     │──┼──────────▶│    LDAP     │
│  │             │  │     │  │             │  │           │             │
│  └─────────────┘  │     │  └─────────────┘  │           └─────────────┘
└ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘     └ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘               Network
Another major benefit of this is that SSSD can cache responses from the network in a secure way, allowing the client to resolve identities when offline. This even includes caching passwords!
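As a sketch, credential caching is enabled per domain in sssd.conf (the domain name here is a placeholder):

[domain/example]
# allow cached credentials to satisfy offline authentication
cache_credentials = True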
As a result, this is why SSSD ends up taking on so much of the surface area of authentication on many distros today. With a thicc local daemon doing the more complicated work to actually identify and resolve users, and the ability to use a variety of authentication backends, it is becoming widely deployed and will displace pam_ldap and pam_krb5 in the majority of network based authentication scenarios.
Inside the beast
SSSD is internally built from a combination of parts that coordinate. It’s useful to know how to debug these if something goes wrong:
# /etc/sssd/sssd.conf

# Change the log level of communication between the pam module and the sssd daemon.
[pam]
debug_level = ...

# Change the log level of communication between the nsswitch module and the sssd daemon.
[nss]
debug_level = ...

# Change the log level of processing the operations that relate to this authentication provider domain.
[domain/AD]
debug_level = ...
Now we’ve just introduced a new concept - an SSSD domain. This is different from a “domain” in the Active Directory sense. An SSSD domain is just “an authentication provider”. A single instance of SSSD can consume identities from multiple domains at the same time. In the majority of configurations, however, only a single domain is configured.
In the majority of cases if you have an issue with SSSD it is likely to be in the domain section so this is always the first place to look for debugging.
Each domain can configure different providers for “identity”, “authentication”, “access” and “chpass”. For example, a configuration in /etc/sssd/sssd.conf:
[domain/default]
id_provider = ldap
auth_provider = ldap
access_provider = ldap
chpass_provider = ldap
The id_provider is the backend of the domain that resolves names and uid/gid numbers to identities.
The auth_provider is the backend that validates the password of an identity.
The access_provider is the backend that describes if an identity is allowed to access this system or not.
The chpass_provider is the backend that password changes and updates are sent to.
As you can see, there is a lot of flexibility in this design. For example, you could use krb5 as the auth provider but send password changes via ldap, as sketched below.
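A sketch of such a mixed configuration - the domain name, server URI and realm are placeholders:

[domain/example]
id_provider = ldap
auth_provider = krb5
access_provider = ldap
chpass_provider = ldap

# placeholders - substitute your own infrastructure
ldap_uri = ldaps://ldap.example.com
krb5_server = kdc.example.com
krb5_realm = EXAMPLE.COM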
Because of this design SSSD links to and consumes identity management libraries from many other sources such as samba (ad), ldap and kerberos. This means in some limited cases you may need to apply debugging knowledge from the relevant backend to solve an issue in SSSD.
Common Issues
Performance
In some cases SSSD can be very slow to resolve a user/group on first login, but then becomes “faster” after the login completes. In addition, you may sometimes see excessive or high query load on an LDAP server during authentication. This is due to an issue with how groups and users are resolved: to resolve a user, you need to resolve their group memberships. Then each group is resolved, but for unix tools to display a group you need to resolve its members. Of course, its members are users, and these need resolving … I hope you can see that this is recursive. In the worst cases, a single user logging on can cause the full LDAP/AD directory to be enumerated, which can take minutes.
To prevent this set:
ignore_group_members = True
This prevents groups resolving their members. As a result, groups appear to have no members, but users will always display the groups they are a member of. Since almost all applications work using this “member-of” pattern, there are very few negative outcomes from this.
Cache Clearing
SSSD has a local cache of responses from network services. It ships with a cache management tool sss_cache. This allows records to be marked as invalid so that a reload from the network occurs as soon as possible.
There are two flaws here. In some cases this appears to have “no effect”, where invalid records continue to be served. In addition, the sss_cache tool, even when called with -E for everything, does not always actually invalidate everything.
A common source of advice in these cases is to stop sssd, remove all the content under /var/lib/sss/db (but not the folder itself) and then start sssd.
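Assuming a systemd based distribution, that advice looks like:

# systemctl stop sssd
# rm -f /var/lib/sss/db/*
# systemctl start sssd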
Debugging Kerberos
Kerberos can be notoriously hard to debug. This is because it doesn’t have a real verbose/debug mode, at least not obviously. To get debug output you need to set an environment variable:
KRB5_TRACE=/dev/stderr kinit user@domain
This works on any process that links to kerberos, so it works with 389-ds, sssd and many other applications, and you can use it to trace what’s going wrong.
Conclusion
That’s all for now, I’ll probably keep updating this post over time :)
Exploring Webauthn Use Cases
Webauthn is viewed by many people and companies as the future of authentication on the internet and within our workplaces. It has the support of many device manufacturers, browser vendors and authentication providers.
But for Webauthn’s lofty goals and promises, as a standard it has many fractured parts. Many of the features it claims at best don’t work, and at worst present possible security risks. The standard itself is quite confusing, uses dense and obtuse language, and is laid out in a very piecemeal way. This makes it hard to see the full picture and construct a proper security and use case analysis.
As the author of both a relying party (Kanidm) and the Webauthn library for Rust, I want to describe these problems.
To understand the issues, we first need to explore how Webauthn works, and then the potential use cases. While not an exhaustive list of all the ways Webauthn could be used, I am trying to cover the ways that I have seen in the wild, and the ways people have asked to use it.
Generally, I will try to use accessible language versions of terms, rather than the Webauthn standard terms, as the language in the standard is confusing / misleading - even if you have read the standard multiple times.
Use Cases
To understand the limitations of Webauthn, we need to examine how Webauthn would be used by an identity provider. The identity provider takes the pieces from Webauthn and their own elements and creates a work flow for the user to interact with. We will turn these into use cases.
Remember, the goal of webauthn is to enable all people, from various cultural, social and educational backgrounds to authenticate securely, so it’s critical these processes are clear, accessible, and transparent.
For the extremely detailed versions of these use cases, see the end of this post.
A really important part of these use cases is attestation. Attestation is the same as the little gold star sticker that you found on Nintendo game boxes. It’s a “certificate of authenticity”. Without attestation, the authenticator that we are communicating with could be anything. It could be a yubikey, Apple’s touchid, a custom-rolled software token, or even a private key you calculated with pen and paper. Attestation is a cryptographic “certificate of authenticity” which tells us exactly who produced that device and whether it can be trusted.
This is really important, because within Webauthn many things are done on the authenticator, such as user verification. Rather than just touching the token, you may have to enter a PIN or use a fingerprint. But the server never sees that PIN or fingerprint - the authenticator just sends us a true/false flag indicating whether the verification occurred and was valid. So for us to trust this flag (and many others), we need to know that the token is made by someone we trust, so that we know the flag means something.
Without this attestation, all we know is that “there is some kind of cryptographic key that the user can access”, and we have no other information about where it might be stored or how it works. With attestation we can make stronger, informed assertions about the properties of the authenticators our users are using.
Security Token (Public)
In this use case, we want our authenticator to be a single factor that complements an existing password. This is the “classic” security key use case, originally spawned by U2F. A TOTP scheme could alternatively be used, where either the TOTP or the authenticator, plus the password, is sufficient to grant access.
Generally in this use case, most identity providers do not care about attestation of the authenticator, what is more important is that some kind of non-password authentication exists and is present.
Security Token (Corporate)
This is the same as the public use case, except that in many corporations we may want to define a list of trusted providers of tokens. It’s important to us here that these tokens have a vetted or audited supply chain, and we have an understanding of “where” the cryptographic material may reside.
For this example, we likely want attestation, as well as the ability to ensure these credentials are not recoverable or transferable between authenticators. A resident key may or may not be required in these cases.
Since these are guided by policy, we likely want to have our user interfaces guide our users to register or use the correct keys since we have a stricter list of what is accepted. For example, there is no point in the UI showing a prompt for caBLE (phone authenticator) when we know that only a USB key is accepted!
PassKey (Public)
A passkey is the “Apple terminology” for a cryptographic credential that can exist between multiple devices, and potentially even between multiple Apple accounts. These are intended to be a “single factor” replacement for passwords. They can be airdropped and moved between devices and, at least in their usage with iOS devices, they can perform user verification, though the identity provider may not be required to verify this. This is because, even as a single factor, these credentials resolve many of the weaknesses of passwords even if user verification did not occur (and even if it did occur, it cannot be verified, for reasons we will explore in this post).
It is likely we will see Google and Microsoft develop similar offerings. 1Password is already progressing to allow webauthn in their wallets.
In this scenario, all we care about is having some kind of credential that is stronger than a password. It’s a single factor, and we don’t know anything about the make or model of the device. User verification might be performed, but we don’t intend to verify if it is.
Nothing is really stopping a U2F style token like a yubikey being a passkey, but that relies on the identity provider allowing multiple devices and having work flows to enrol them across different devices. It’s also unclear how this will work from the identity provider’s perspective when someone has, say, a Microsoft Surface and an Apple iPhone.
Passwordless MFA (Public)
In this example, rather than having our authenticator as a single factor, we want it to be truly multi-factor. This allows the user to log in with nothing but their authenticator, and we have a secure multi-factor work flow. This is a stronger level of authentication, where we are verifying not just possession of the private key, but also the identity of who is using it.
As a result, we need to strictly verify that the authenticator did a valid user verification.
Given that the authenticator is now the “sole” authenticator (even if multi-factor) we are more likely to want attestation here using privacy features granted through indirect attestation. That way we can have a broad list of known good security token providers that we accept. Without attestation we are unable to know if the user verification provided can be trusted.
Passwordless MFA (Corporate)
Again, this is similar to the above. We narrow and focus this use case with a stricter attestation list of what is valid. We also want to strictly control and prevent cryptographic material being moved, so we want to ensure these are not transferable. We may want resident keys to be used here, since we have a higher level of trust in our devices. Again, we will also want to be able to strictly guide UIs, due to our knowledge of exactly what devices we accept.
Usernameless
Usernameless is similar to passwordless but requires resident keys as the username of the account is bound to the key and discovered by the client. Otherwise many of the features of passwordless apply.
It’s worth noting that due to the complexity and limitations of resident key management it is not feasible for any public service provider to currently use usernameless credentials on a broad scale without significant risk of credential loss. As a result, we limit our use case to corporate only, as they are the only entities in the position to effectively manage these issues.
Due to the implementation of passkeys and passwordless in the broader world, the line between these is blurred, so we will assume that passkeys and passwordless may sometimes be used in a usernameless workflow (for example, conditional UI).
Summary
Let’s assemble a score card now. We’ll define the use cases, the features, and what they require and if webauthn can provide them.
| | Security Token | Sec Tok (Corp) | PassKey | Passwordless | PwLess (Corp) |
|---|---|---|---|---|---|
| User Verification | no / ??? | no / ??? | no / ??? | required / ??? | required / ??? |
| UV Policy | no / ??? | no / ??? | no / ??? | no / ??? | maybe / ??? |
| Attestation | no / ??? | required / ??? | no / ??? | required / ??? | required / ??? |
| Bound to Device / HW | no / ??? | required / ??? | no / ??? | required / ??? | required / ??? |
| Resident Key | no / ??? | maybe / ??? | no / ??? | maybe / ??? | maybe / ??? |
| Authenticator Selection | maybe / ??? | maybe / ??? | no / ??? | maybe / ??? | required / ??? |
| Update PII | no / ??? | no / ??? | maybe / ??? | maybe / ??? | maybe / ??? |
| Result | ??? | ??? | ??? | ??? | ??? |
Now, I already know some of the answers to these, so let’s fill in what we DO know.
| | Security Token | Sec Tok (Corp) | PassKey | Passwordless | PwLess (Corp) |
|---|---|---|---|---|---|
| User Verification | no / ??? | no / ??? | no / ??? | required / ??? | required / ??? |
| UV Policy | no / ??? | no / ??? | no / ??? | no / ??? | maybe / ??? |
| Attestation | no / ✅ | required / ??? | no / ??? | required / ??? | required / ??? |
| Bound to Device / HW | no / ✅ | required / ??? | no / ✅ | required / ??? | required / ??? |
| Resident Key | no / ✅ | maybe / ??? | no / ✅ | no / ✅ | maybe / ??? |
| Authenticator Selection | maybe / ??? | maybe / ??? | no / ??? | maybe / ??? | required / ??? |
| Update PII | no / ✅ | no / ✅ | maybe / ??? | maybe / ??? | maybe / ??? |
| Result | ??? | ??? | ??? | ??? | ??? |
The Problems
Now let’s examine the series of issues that exist within Webauthn, and how they impact our ability to successfully implement the above.
Authenticator Selection
Today, there are no features in Webauthn that allow an identity provider, at registration, to pre-indicate which transports are known to be valid for the authenticators being registered. This is in contrast to authentication, where a complete list of valid transports can be provided to help the browser select the correct device to use.
As a result, the only toggle you have is “platform” vs “cross-platform”. Consider we have company issued yubikeys. We know these can only work via USB because that is the model we have chosen.
However, during a registration, because we can only indicate “cross-platform”, it is completely valid for a user to attempt to register, say, their iPhone via caBLE, or another key via NFC. The user may then become “confused” about why their other keys didn’t work for registration - the UI said they were allowed to use them! This is a lack of constraint.
This process could be easily streamlined by allowing transports to be specified in registration, but there is resistance to this from the working group.
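To make the asymmetry concrete: at authentication time an RP can already hint transports per credential, but no equivalent field exists in the registration options. A sketch of the authentication side, with placeholder values:

// Placeholder: the credential ID bytes the RP stored at registration time.
declare const storedCredentialId: Uint8Array;

// Authentication: the RP can hint that this credential only works over USB.
const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally issued by the RP
    allowCredentials: [{
      type: "public-key",
      id: storedCredentialId,
      transports: ["usb"], // no equivalent hint can be given at registration
    }],
  },
});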
A real world example of this has already occurred: the email provider FastMail used specific language around “Security Tokens”, including graphics of usb security keys, in their documentation. Because of this lack of ability to specify transports in the registration process, once caBLE was released FastMail had to “rush” to update their UI and docs to work out how to communicate this to users. They don’t have the choice of temporarily excluding it either, which may lead to user confusion.
User Verification Inconsistent / Confusing
For our security key work flows we would like to construct a situation where the authenticator is a single factor, and the users password or something else is the other factor. This means the authenticator should only require interaction to touch it, and no PIN or biometric is needed.
There are some major barriers here sadly. Remember, we want to create a consistent user experience so that people can become confident in the process they are using.
The problem is CTAP2.1 - this changes the behaviour of user verification ‘discouraged’ so that even when you are registering a credential, you always need to enter a PIN or biometrics. However, when authenticating, you never need the PIN or biometric.
There is no communication of the fact that the verification is only needed due to it being registration.
Surveying users showed that about 60% expect that if you need to enter your PIN/biometric at registration, it will also be required during future authentications. When it is not present during future authentications, this confuses people and trains them that the PIN/biometric is an inconsistent and untrustworthy dialog. Sometimes it is there - sometimes it is not.
When you combine this with the fact that UV=preferred on most RPs does not validate the UV status, we have now effectively trained all our users that user verification can appear and disappear and not to worry about it - it’s fine, it’s just inconsistent - so they will never consider it a threat.
It also means that when we try to adopt passwordless it will be harder to convince users this is safe since they may believe that this inconsistent usage of user verification on their authenticators is something that can be easily bypassed.
How can you trust that the PIN/biometric means something, when it is sometimes there and sometimes not?
This forces us, even in our security key work flows, to set UV=preferred, and to go beyond the standard to enforce that user verification checks are consistent with their application at registration. This means any CTAP2.1 device, even though it does NOT need a PIN as a single factor authenticator, will require one as a security key, to create a consistent user experience and so we can build trust in our user base.
At this point since we are effectively forcing UV to always occur, why not just transition to Passwordless?
It is worth noting that for almost all identity providers today, the use of UV=preferred is bypassable, as the user verification is not checked, and there is no guidance in the specification to check it. This has affected Microsoft Azure, Nextcloud, and others.
As a result, the only trustworthy UV policies are required, or preferred with checks that go beyond the standard. As far as I am aware, only Webauthn-RS provides these stricter requirement checks.
Discouraged could be used here, but it needs user guidance and training to support it, due to the inconsistent dialogs with CTAP2.1.
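For those going beyond the standard, the UV result lives in the flags byte of the signed authenticator data, so an RP can enforce it server side. A sketch of that check - the byte layout comes from the Webauthn specification, while the policy itself is our own addition:

// Authenticator data layout: rpIdHash (32 bytes) | flags (1 byte) | signCount (4 bytes) | ...
// Flag bits: UP = 0x01, UV = 0x04, BE = 0x08 (backup eligible), BS = 0x10 (backed up).
function assertUserVerified(authenticatorData: Uint8Array): void {
  const flags = authenticatorData[32];
  if ((flags & 0x01) === 0) {
    throw new Error("user presence was not asserted");
  }
  // The check most UV=preferred deployments skip: the UV bit is signed by the
  // authenticator, so enforcing it here cannot be bypassed by client side JS.
  if ((flags & 0x04) === 0) {
    throw new Error("user verification did not occur");
  }
}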
User Verification Policy
Especially in our passwordless scenarios, as an identity provider we may wish to define policy about what user verification methods we allow from users. For example we may wish for PIN only rather than allowing biometrics. We may also wish to express the policy on the length of the PIN as well.
However, nothing in the response an authenticator provides tells you which user verification method was used. Instead, webauthn defines the User Verification Method extension, which allows an identity provider to request that the device report which UVM was used.
Sadly, nothing supports it in the wild. Experience with Webauthn-RS shows that it is never honoured or provided when requested. This is true of most extensions in Webauthn. For bonus marks, did you know that extensions are only answered when you request attestation? (This is not mentioned anywhere in the specification!)
As a corporate environment, we can kind-of control this through strict attestation lists, but as a public identity provider, even with attestation, it is potentially not possible to know or enforce this, because extensions are widely unsupported and unimplemented.
The reason this is “kind-of” is that yubikeys support PIN and some models also support biometrics, but there is no distinction in their attestation. This means if we only wanted PIN auth, we could not use yubikeys since there is no way to distinguish these. Additionally, things like minimum PIN length can’t be specified since we don’t know what manufacturers support this extension. Devices like yubikeys have an inbuilt minimum length of 8, but again we don’t know if they’ll use PIN given the availability of biometrics.
Resident Keys can’t be verified
Resident keys are where we know that the key material lives only within the cryptographic processor of the authenticator. For example, a yubikey by default produces a key wrapped key, where the CredentialID is itself the encrypted private key, and only that yubikey can decrypt that CredentialID to use it as the private key. In very strict security environments, this may present a risk, because an attacker could bruteforce the CredentialID to decrypt the private key, allowing the attacker to then use the credential. (It would take millions of years, but you know, some people have to factor that into their risk models.)
To avoid this, you can request the device create a resident key - a private key that never leaves the device. The CredentialID is just a “reference” to allow the device to look up the Credential but it does not contain the private key itself.
The problem is that there is no signal in the attestation or response that indicates if a resident key was created by the device.
You can request to find out if this was created with the Credential Properties extension.
The devil however, is in the details. Notably:
“This client registration extension facilitates reporting certain credential properties known by the client”
A client extension means that this extension is processed by the web browser, and exists in a section of the response that is unsigned and cannot be verified. It is therefore open to client side JS tampering and forgery, which means we cannot trust the output of this property.
As a result, there is no simple way to verify a resident key was created.
To make matters worse, the request to create the resident key is not signed either, and can be stripped by client side javascript.
So any compromised javascript (which Webauthn assumes is trusted) can strip a registration request for a resident key, cause a key-wrapped-key to be created instead, and then “assert” - pretty promise, I swear - that it’s resident, by faking the response to the extension.
The only way to guarantee you have a resident key, is to validate attestation from an authenticator that exclusively makes resident keys (e.g. Apple iOS). Anything else, you can not assert is a true resident key. Even if you subsequently attempt client side discovery of credentials, that is not the same property as the key being resident. This is a trap that many identity providers may not know they are exposed to.
Resident Keys can’t be administered
To compound the inability to verify creation of a resident key, the behaviour of resident keys (RKs) on most major devices is undefined. For example, a Yubikey has limited storage for RKs, but I have been unable to find documentation about:
- How many RKs can exist on an authenticator.
- If the maximum number is created and we attempt to create more, does it act like a ring buffer and remove the oldest, or simply fail to create more?
- Whether it is possible to update usernames or other personal information related to the RKs on the device.
- Any APIs or tooling to list, audit, delete or manage RKs on the device.
These are basic things that are critical for users and administrators, and they simply do not exist. This complete absence of tooling makes RKs effectively useless for most users and deployments, since we have no method to manage, audit, modify or delete them.
Bound to Device / Hardware
For the years leading up to 2022, Webauthn and its design generally assumed a one-to-one relationship between the hardware of an authenticator and the public keys it produced. However, that has now changed with the introduction of Apple Passkeys.
What is meant by “bound to device” is that, given a public key, only a single hardware authenticator exists that has access to the private key needed to sign something. This generally means that the cryptographic operations, and the private key itself, are only ever known to the secure enclave of the authenticator.
Apple’s Passkeys change this, allowing a private key to be distributed between multiple devices of an Apple account, but also the ability to transfer the private key to other nearby devices via airdrop. This means the private key is no longer bound to a single physical device.
When we design a security policy this kind of detail matters, where some identity providers can accept the benefits of a cryptographic authentication even if the private key is not hardware backed, but other identity providers must require that private keys are securely stored in hardware.
The major issue in Webauthn is that the specification does not really have the necessary parts in place to manage these effectively.
As an identity provider there is no way to currently indicate that you require a hardware bound credential (or perhaps you want to require passkeys only!). Because of this lack of control, Apple’s implementation relies on another signal - a request for attestation.
If you do not request attestation, a passkey is created.
If you do request attestation (direct or indirect), a hardware bound key is created.
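In other words, under this behaviour the attestation field doubles as the passkey / hardware-bound switch. A sketch, with the other creation options elided:

// Sketch: on iOS 16 era Apple devices, this single field decides what you get.
// challenge, rp, user and pubKeyCredParams are elided for brevity.
const creationOptions: Partial<PublicKeyCredentialCreationOptions> = {
  attestation: "none",      // -> a synchronised, movable passkey is created
  // attestation: "direct", // -> a hardware bound, attested credential is created
};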
When the credential is created, there is a new set of “backup state” bits that can indicate whether the credential can be moved between devices. These are stored in the same flags byte that stores the user verification bits, meaning that to trust them, you need attestation (which Apple can’t provide!). At the very least, the attested Apple credentials that are hardware bound do correctly show that they are not backup capable, and are still resident keys.
Because of this, I expect to see that passkeys and related technology is treated in the manner as initially described - a single-factor replacement to passwords. Where you need stronger MFA in the style of a passwordless credential, it will not currently be possible to achieve this with Apple Passkeys.
It’s worth noting that it’s unclear how other vendors will act here. Some may produce passkeys that are attested, meaning that reliance on the backup state bits will become more important, but there is also a risk that vendors will not implement this correctly.
Importantly, some testing in pre-release versions showed that if passkeys are enabled and you request an attested credential, the registration fails, blocking the bound credential creation. This will need retesting to be sure of the behaviour in the final iOS 16 release, but this could be a show stopper for BYOD users if not fixed. (20220614: We have confirmed that passkeys do block the creation of attested device bound credentials.)
Conclusion
- ⚠️ - risks exist
- ✅ - works
- ❌ - broken/untrustworthy
| | Security Token | Sec Tok (Corp) | PassKey | Passwordless | PwLess (Corp) |
|---|---|---|---|---|---|
| User Verification | no / ⚠️ | no / ⚠️ | no / ⚠️ | required / ✅ | required / ✅ |
| UV Policy | no / ✅ | no / ✅ | no / ✅ | no / ✅ | maybe / ❌ |
| Attestation | no / ✅ | required / ⚠️ | no / ✅ | required / ⚠️ | required / ⚠️ |
| Bound to Device / HW | no / ✅ | required / ⚠️ | no / ✅ | required / ⚠️ | required / ⚠️ |
| Resident Key | no / ✅ | maybe / ❌ | no / ✅ | no / ✅ | maybe / ❌ |
| Authenticator Selection | maybe / ❌ | maybe / ❌ | no / ✅ | maybe / ❌ | required / ❌ |
| Update PII | no / ✅ | no / ✅ | maybe / ❌ | maybe / ❌ | maybe / ❌ |
| Result | ⚠️ 1, 2, 7 | ⚠️ 1, 2, 4, 5, 6, 7 | ⚠️ 1, 2, 8 | ⚠️ 4, 5, 7, 8 | ⚠️ 4, 5, 6, 7, 8 |
1. User Verification in discouraged may incorrectly request UV, training users that UV prompts are “optional”.
2. UV preferred is bypassable in almost all implementations.
3. No method to request a UV policy, including minimum PIN length or UV classes.
4. Existence of PassKeys on the device account WILL prevent attested credentials from being created.
5. Currently relies on vendor specific attestation behaviour.
6. No way to validate a resident key was created without assumed vendor specific behaviours, or other out of band checks.
7. Unable to request constraints on the authenticators used in the interaction.
8. Vendors often do not provide the ability to update PII on resident keys if used in these contexts.
A very interesting takeaway from this, however, is that the “Passkeys” Apple has created are actually identical to “Security Tokens” in how they operate and are validated, meaning that for all intents and purposes they are the same scenario, just with or without a password as the MFA element.
As we can see, all of our use cases have issues of some kind. They vary in severity and in whom they affect, but they are generally subtle and may have implications for identity providers. The overall “trend” of these issues, though, is that it feels like the Webauthn WG has abandoned authenticators as “security tokens” and is pushing toward Passkeys in single factor or passwordless scenarios. This is probably “a good thing”, but it has not been communicated clearly, and issues still remain in the Passkey and Passwordless scenarios.
Bonus - Other Skeletons
Javascript is considered trusted
Because Javascript is considered trusted, a large number of the properties Webauthn communicates are open to tampering, which means that they in fact can not be trusted. Because we can’t trust the JS or the user not to tamper with their environment, we can only trust properties that come from the browser or authenticator and are then signed. As a result, regardless of who we are, our threat model needs to assume that anything on a webpage can and will be altered. (If the browser or authenticator themselves are compromised, we have different issues, and different defences.)
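As a hypothetical illustration, nothing stops a script in the page (an extension, a compromised dependency) from wrapping the WebAuthn entry point and rewriting what the relying party asked for:

```typescript
// Sketch: any page script can shim navigator.credentials.create and
// silently weaken the policy before the browser ever sees it.
const original = navigator.credentials.create.bind(navigator.credentials);

navigator.credentials.create = (options?: CredentialCreationOptions) => {
  if (options?.publicKey?.authenticatorSelection) {
    // Downgrade the relying party's requested policy.
    options.publicKey.authenticatorSelection.userVerification = "discouraged";
  }
  return original(options);
};
```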
Insecure Crypto
Windows Hello in particular relies on TPMs that have their attestation signed with SHA-1. SHA-1 is considered broken, meaning it could be possible to trivially forge attestations of these credentials. Newer TPMs may not have this limitation.
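As a defensive sketch, a relying party can at least refuse SHA-1 signed attestations. The COSE algorithm value for “RSASSA-PKCS1-v1_5 using SHA-1” (RS1) is -65535; attStmt here is assumed to be an already CBOR-decoded TPM attestation statement:

```typescript
// Sketch: reject attestation statements signed with SHA-1 based algorithms.
const COSE_RS1 = -65535; // RSASSA-PKCS1-v1_5 using SHA-1

function assertNoSha1Attestation(attStmt: { alg: number }): void {
  if (attStmt.alg === COSE_RS1) {
    throw new Error("attestation signed with SHA-1 - refusing to trust it");
  }
}
```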
Unclear what is / is not a security property
A large limitation of Webauthn is that it is unclear what is, and is not, a security property within the registration and authentication messages. For now, we’ll focus on registration. It is presented below with all of the relevant options and structures expanded. Imagine you are an identity provider implementing a webauthn library and you see the following.
PublicKeyCredentialCreationOptions {
rp = "relying party identifier"
user {
id = "user id"
displayName = "user display name"
}
challenge = [0xAB, 0xCD, ... ]
PublicKeyCredentialParameters = [
{
type = "public-key";
        alg = "ECDSA w/ SHA-256" | ... | "RSASSA-PKCS1-v1_5 using SHA-1"
}, ...
]
timeout = 60000
excludeCredentials = [
{
type = "public-key"
id = [0x00, 0x01, ... ]
transports = [ "usb" | "ble" | "internal" | "nfc", ... ]
}
]
authenticatorSelection = {
authenticatorAttachment = "platform" | "cross-platform"
userVerification = "discouraged" | default="preferred" | "required"
requireResidentKey = boolean
};
attestation = default="none" | "indirect" | "direct" | "enterprise"
extensions = ...
};
Now, reading this structure, which elements do you think are security properties that you can rely upon to be strictly enforced, and have cryptographic acknowledgement of that being enforced?
Well, only the following are signed cryptographically by the authenticator:
PublicKeyCredentialCreationOptions {
rp = "relying party identifier"
challenge = [0xAB, 0xCD, ... ]
}
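To make this concrete, here is a minimal sketch of checking those two signed properties server side. The helper names are ours, and the final signature verification (over authData || SHA-256(clientDataJSON)) is deliberately omitted:

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

function verifySignedProperties(
  authData: Buffer,
  clientDataJSON: Buffer,
  expectedRpId: string,
  expectedChallenge: Buffer,
): void {
  // rpIdHash: the first 32 bytes of authData are SHA-256(rp id), and
  // authData is covered by the attestation/assertion signature.
  const rpIdHash = createHash("sha256").update(expectedRpId).digest();
  if (!timingSafeEqual(authData.subarray(0, 32), rpIdHash)) {
    throw new Error("rpIdHash mismatch");
  }

  // challenge: echoed back inside clientDataJSON, whose SHA-256 hash is
  // part of the signed payload.
  const clientData = JSON.parse(clientDataJSON.toString("utf8"));
  const challenge = Buffer.from(clientData.challenge, "base64url");
  if (
    challenge.length !== expectedChallenge.length ||
    !timingSafeEqual(challenge, expectedChallenge)
  ) {
    throw new Error("challenge mismatch");
  }

  // The signature itself (over authData || SHA-256(clientDataJSON)) must
  // also be verified; omitted here for brevity.
}
```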
We can assert the credential algorithm used by checking it (provided we are webauthn level 2 compliant or greater). And we can only check whether userVerification happened through the returned attestation. This means the following aren’t signed (for the aware, extensions are something we’ll cover separately).
PublicKeyCredentialCreationOptions {
user {
id = "user id"
displayName = "user display name"
}
timeout = 60000
excludeCredentials = [
{
type = "public-key"
id = [0x00, 0x01, ... ]
transports = [ "usb" | "ble" | "internal" | "nfc", ... ]
}
]
authenticatorSelection = {
authenticatorAttachment = "platform" | "cross-platform"
requireResidentKey = boolean
};
};
This means that from our registration we cannot know or assert:
- If an excluded credential was used or not
- If a resident key was really created
- If the created credential is platform or cross platform
Extensions
Most extensions are not implemented at all in the wild, making them flat out useless.
Many others are client extensions, meaning they run in your browser, are not signed, and can be freely tampered with - because, as above, javascript is considered trusted.
Extremely Detailed Use Cases
The use cases we detail here are significantly richer and more detailed than the ones in the specification (2022-04-13).
Each workflow has two parts: registration (on-boarding) and authentication. Most of the parameters for webauthn revolve around the behaviour at registration, with authentication being a much more similar workflow regardless of credential type.
Security Token (Public)
Registration:
- The user indicates they wish to enroll a security token
- The identity provider issues a challenge
- The browser lists which authenticators attached to the device could be registered
- The user interacts with the authenticator (note a PIN should not be requested, but fingerprint is okay since it’s “transparent”)
- The authenticator releases the signed public key
- The authenticator is added to the user’s account
Authentication:
- The user enters their username
- The user provides their password and it is validated (note we could do this after webauthn)
- The user indicates they wish to use a security token
- The identity provider issues a webauthn challenge, limited by the list of authenticators and transports we know are valid for the authenticators associated.
- The browser offers the list of authenticators that can proceed
- The user interacts with the authenticator (note a PIN should not be requested, but fingerprint is okay since it’s “transparent”)
- The authenticator releases the signature
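A minimal sketch of the authentication request for this flow, assuming we stored the credential id and transports at registration (all values are placeholders):

```typescript
// Placeholder: in practice this is the credential id saved at registration.
const storedCredentialId = new Uint8Array(32);

const publicKey: PublicKeyCredentialRequestOptions = {
  challenge: crypto.getRandomValues(new Uint8Array(32)),
  rpId: "example.com", // placeholder rp id
  allowCredentials: [
    {
      type: "public-key",
      id: storedCredentialId,
      transports: ["usb", "nfc"], // limit to transports recorded at registration
    },
  ],
  userVerification: "discouraged", // second factor - no PIN prompt
  timeout: 60000,
};

const assertion = await navigator.credentials.get({ publicKey });
```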
Security Token (Corporate)
Registration:
- The user indicates they wish to enroll a security token
- The identity provider issues a challenge, with a list of the transports of known approved authenticators that could be used.
- The browser lists which authenticators attached to the device could be registered, per the transport list
- The user interacts with the authenticator (note a PIN should not be requested, but fingerprint is okay since it’s “transparent”)
- The authenticator releases the signed public key
- The identity provider examines the attestation and asserts it is from a trusted manufacturer
- The identity provider examines the enrollment, and asserts it is bound to the hardware (i.e. not a passkey/backup)
- The authenticator is added to the user’s account
Authentication:
- As per Security Token (public)
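A minimal sketch of the corporate registration request (placeholder names; the manufacturer and hardware bound checks then happen server side against the returned attestation object):

```typescript
const publicKey: PublicKeyCredentialCreationOptions = {
  rp: { id: "corp.example.com", name: "Example Corp" }, // placeholders
  user: {
    id: new TextEncoder().encode("employee-4521"),
    name: "claire@corp.example.com",
    displayName: "Claire",
  },
  challenge: crypto.getRandomValues(new Uint8Array(32)),
  pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
  authenticatorSelection: {
    authenticatorAttachment: "cross-platform", // roaming security keys only
    userVerification: "discouraged",           // second factor, no PIN
  },
  // "direct" so the response carries an attestation chain we can verify
  // against our allow-list of manufacturers (and, on Apple platforms, so a
  // hardware bound credential is created rather than a passkey).
  attestation: "direct",
};
```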
PassKey (Public)
Registration:
- The user indicates they wish to enroll a token
- The identity provider issues a challenge
- The browser lists which authenticators attached to the device could be registered
- The user interacts with the authenticator (note a PIN should not be requested, but fingerprint is okay since it’s “transparent”)
- The authenticator releases the signed public key
- The authenticator is added to the user’s account
Authentication:
- The user enters their username
- The identity provider issues a webauthn challenge, limited by the list of authenticators and transports we know are valid for the authenticators associated.
- The browser offers the list of authenticators that can proceed
- The user interacts with the authenticator (note a PIN should not be requested, but fingerprint is okay since it’s “transparent”)
- The authenticator releases the signature
Passwordless (Public)
Registration:
- The user indicates they wish to enroll a security token
- The identity provider issues a challenge
- The browser lists which authenticators attached to the device could be registered
- The user interacts with the authenticator - user verification MUST be provided, i.e. PIN or biometric.
- The authenticator releases the signed public key
- The identity provider asserts that user verification occurred
- (Optional) The identity provider examines the attestation and asserts it is from a trusted manufacturer
- The authenticator is added to the user’s account
Authentication:
- The user enters their username
- The identity provider issues a webauthn challenge
- The browser offers the list of authenticators that can proceed
- The user interacts with the authenticator - user verification MUST be provided, i.e. PIN or biometric.
- The authenticator releases the signature
- The identity provider asserts that user verification occurred
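A minimal sketch of both halves of the UV requirement - the request side, and the server side check of the signed UV flag (the same flags byte shown earlier):

```typescript
// Request side: require user verification (PIN or biometric).
const authenticatorSelection: AuthenticatorSelectionCriteria = {
  userVerification: "required",
};

// Server side, after the signature over authData has been verified: the UV
// bit (0x04) in the flags byte is the only trustworthy signal that
// verification actually happened.
function assertUserVerified(authData: Uint8Array): void {
  if ((authData[32] & 0x04) === 0) {
    throw new Error("user verification did not occur - rejecting");
  }
}
```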
Passwordless (Corporate)
Registration:
- The user indicates they wish to enroll a security token
- The identity provider issues a challenge, with a list of the transports of known approved authenticators that could be used.
- The browser lists which authenticators attached to the device could be registered, per the transport list
- The user interacts with the authenticator - user verification MUST be provided, i.e. PIN or biometric.
- The authenticator releases the signed public key
- The identity provider examines the attestation and asserts it is from a trusted manufacturer
- (Optional) The identity provider asserts that a resident key was created
- The identity provider examines the enrollment, and asserts it is bound to the hardware (i.e. not a passkey/backup)
- The identity provider asserts that user verification occurred
- (Optional) The identity provider asserts the verification method complies with policy
- The authenticator is added to the user’s account
Authentication:
- As per Passwordless (public)
Usernameless
Registration
- The user indicates they wish to enroll a security token
- The identity provider issues a challenge, with a list of the transports of known approved authenticators that could be used.
- The browser lists which authenticators attached to the device could be registered, per the transport list
- The user interacts with the authenticator - user verification MUST be provided, i.e. PIN or biometric.
- The authenticator releases the signed public key
- The identity provider examines the attestation and asserts it is from a trusted manufacturer
- The identity provider asserts that a resident key was created
- The identity provider examines the enrollment, and asserts it is bound to the hardware (i.e. not a passkey/backup)
- The identity provider asserts that user verification occurred
- (Optional) The identity provider asserts the verification method complies with policy
- The authenticator is added to the user’s account
Authentication:
- The identity provider issues a webauthn challenge
- The browser offers the list of authenticators that can proceed
- The user interacts with the authenticator - user verification MUST be provided, i.e. PIN or biometric.
- The authenticator releases the signature
- The identity provider asserts that user verification occurred
- The identity provider extracts and uses the username (user handle) supplied in the response
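A minimal sketch of the usernameless authentication request, and of recovering the username from the assertion (placeholder rp id; the user handle is whatever we set as user.id at registration):

```typescript
const publicKey: PublicKeyCredentialRequestOptions = {
  challenge: crypto.getRandomValues(new Uint8Array(32)),
  rpId: "example.com", // placeholder
  allowCredentials: [], // empty: the authenticator must discover resident keys
  userVerification: "required",
};

const cred = (await navigator.credentials.get({ publicKey })) as PublicKeyCredential;
const response = cred.response as AuthenticatorAssertionResponse;

// The user handle stored in the resident key tells us who is signing in.
const userHandle = response.userHandle
  ? new TextDecoder().decode(response.userHandle)
  : null;
```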