Webauthn UserVerificationPolicy Curiosities

Recently I received a pair of interesting bugs in Webauthn RS where certain types of authenticators would not work in Firefox, but did work in Chromium. This confused me, and I couldn’t reproduce the behaviour. So like any obsessed person I ordered myself one of the affected devices and waited for Australia Post to lose it, find it, lose it again, and then finally deliver the device 2 months later.

In the meantime I swapped browsers from Firefox to Edge and started to notice some odd behaviour when logging into my corporate account - my yubikey began to ask me for my PIN on every authentication, even though the key was registered to the corp servers without a PIN. Yet the key kept working on Edge with a PIN - and confusingly without a PIN on Firefox.

Some background on Webauthn

Before we dive into the issue, we need to understand some details about Webauthn. Webauthn is a standard that allows a client (commonly a web browser) to cryptographically authenticate to a server (commonly a web site). Webauthn defines how different types of hardware cryptographic authenticators may communicate between the client and the server.

Examples of authenticator devices are U2F tokens (yubikeys), TouchID (Apple Mac, iPhone, iPad), Trusted Platform Modules (Windows Hello) and many more. Webauthn has to account for differences in these hardware classes and how they communicate, but in the end each device performs a set of asymmetric cryptographic (public/private key) operations.

Webauthn defines the structures of how a client and server communicate to both register new authenticators and subsequently authenticate with those authenticators.

For the first step of registration, the server provides a registration challenge to the client. The structure of this (which is important for later) looks like:

dictionary PublicKeyCredentialCreationOptions {
    required PublicKeyCredentialRpEntity         rp;
    required PublicKeyCredentialUserEntity       user;

    required BufferSource                             challenge;
    required sequence<PublicKeyCredentialParameters>  pubKeyCredParams;

    unsigned long                                timeout;
    sequence<PublicKeyCredentialDescriptor>      excludeCredentials = [];
    AuthenticatorSelectionCriteria               authenticatorSelection = {
        AuthenticatorAttachment      authenticatorAttachment;
        boolean                      requireResidentKey = false;
        UserVerificationRequirement  userVerification = "preferred";
    };
    AttestationConveyancePreference              attestation = "none";
    AuthenticationExtensionsClientInputs         extensions;
};

The client then takes this structure, and creates a number of hashes from it which the authenticator then signs. This signed data and options are returned to the server as a PublicKeyCredential containing an Authenticator Attestation Response.

Next is authentication. The server sends a challenge to the client which has the structure:

dictionary PublicKeyCredentialRequestOptions {
    required BufferSource                challenge;
    unsigned long                        timeout;
    USVString                            rpId;
    sequence<PublicKeyCredentialDescriptor> allowCredentials = [];
    UserVerificationRequirement          userVerification = "preferred";
    AuthenticationExtensionsClientInputs extensions;
};

Again, the client takes this structure and creates a number of hashes from it, which the authenticator signs to prove it is the holder of the private key. The signed response is sent to the server as a PublicKeyCredential containing an Authenticator Assertion Response.

Key to this discussion is the following field:

UserVerificationRequirement          userVerification = "preferred";

This is present in both PublicKeyCredentialRequestOptions and PublicKeyCredentialCreationOptions. This informs what level of interaction assurance should be provided during the signing process. These levels are discussed in NIST SP800-63b (5.1.7, 5.1.9) (which is just an excellent document anyway, so read it).

One aspect of these authenticators is that they must provide tamper-proof evidence that a person is physically present and interacting with the device for the signature to proceed. This is important as it means that if someone is able to gain remote code execution on your system, they are unable to use your authenticator devices (even one built into the machine, like TouchID) as they are not physically present at the machine.

Some authenticators are able to go beyond to strengthen this assurance, by verifying the identity of the person interacting with the authenticator. This means that the interaction also requires say a PIN (something you know), or a biometric (something you are). This allows the authenticator to assert not just that someone is present but that it is a specific person who is present.

All authenticators are capable of asserting user presence but only some are capable of asserting user verification. This creates two classes of authenticators as defined by NIST SP800-63b.

Single-Factor Cryptographic Devices (5.1.7) which only assert presence (the device becomes something you have) and Multi-Factor Cryptographic Devices (5.1.9) which assert the identity of the holder (something you have + something you know/are).

Webauthn is able to request the use of a Single-Factor Device or Multi-Factor Device through its UserVerificationRequirement option. The levels (sketched as code after this list) are:

  • Discouraged - Only use Single-Factor Devices
  • Required - Only use Multi-Factor Devices
  • Preferred - Request Multi-Factor if possible, but allow Single-Factor devices.
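
For example, a library implementing Webauthn might model this policy with an enum like the following. This is just an illustrative sketch with serde wire values, not necessarily the types used by any specific implementation such as Webauthn RS.

use serde::{Deserialize, Serialize};

// Sketch only: the serialised values match the Webauthn wire format.
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq)]
#[serde(rename_all = "lowercase")]
pub enum UserVerificationPolicy {
    // Only use Single-Factor behaviour - presence check only.
    Discouraged,
    // Request Multi-Factor if possible, but accept Single-Factor.
    Preferred,
    // Only accept Multi-Factor behaviour - verification must occur.
    Required,
}

impl Default for UserVerificationPolicy {
    fn default() -> Self {
        // Remember: if unspecified, Webauthn defaults to "preferred".
        UserVerificationPolicy::Preferred
    }
}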

Back to the mystery …

When I initially saw these reports - of devices that did not work in Firefox but did in Chromium, and of devices asking for PINs on some browsers but not others - I was really confused. The breakthrough came as I was developing Webauthn Authenticator RS. This is the client half of Webauthn, so that I could have the Kanidm CLI tools use Webauthn for multi-factor authentication (MFA). In the process, I have been using the authenticator crate made by Mozilla and used by Firefox.

The authenticator crate is what communicates with authenticators over NFC, Bluetooth, or USB. Due to the different types of devices, there are multiple different protocols involved. For U2F devices, the protocol is CTAP over USB. There are two versions of the CTAP protocol - CTAP1 and CTAP2.

In the authenticator crate, only CTAP1 is supported. CTAP1 devices are unable to accept a PIN, so user verification must be performed internally to the device (such as a fingerprint reader built into the U2F device).

Chromium, however, is able to use CTAP2 - CTAP2 does allow a PIN to be provided from the host machine to the device as a user verification method.

Why would devices fail in Firefox?

Once I had learnt this about CTAP1/CTAP2, I realised that my example code in Webauthn RS was hardcoding Required as the user verification level. Since Firefox can only use CTAP1, it was unable to provide PINs to U2F devices, so they would not respond to the challenge. But on Chromium with CTAP2, PINs can be provided, so Required can be satisfied and the devices work.

Okay but the corp account?

This one is subtle. The corp identity system uses user verification of ‘Preferred’. That meant that on Firefox, no PIN was requested since CTAP1 can’t provide them, but on Edge/Chromium a PIN can be provided as they use CTAP2.

What’s more curious is that the same authenticator device is flipping between Single-Factor and Multi-Factor, with the same private/public key pair, based purely on what protocol is used! So even though the ‘Preferred’ request can be satisfied on Chromium/Edge, it is not on Firefox. To further extend my confusion, the device was originally registered to the corp identity system in Firefox, so it would not have had user verification available, but now that I use Edge it has gained this requirement during authentication.

That seems … wrong.

I agree. But Webauthn fully allows this. This is because user verification is a property of the request/response flow, not a property of the device.

This creates some interesting side effects that become an opportunity for user confusion. (I was confused about what the behaviour was and I write a webauthn server and client library - imagine how other people feel …).

Devices change behaviour

This means that during registration one policy can be requested (i.e. Required) but subsequently it may not be applied (for example Preferred + Firefox + U2F, or Discouraged). Another example: a device used on Chromium with Preferred will require user verification, but the same device used on Firefox may not. It also means that a site that implements Required can have devices that simply don’t work in some browsers.

Because this behaviour changes, it can confuse users. For example:

  • Why do I need a PIN now but not before?
  • Why did I need a PIN before but not now?
  • Why does my authenticator work on this computer but not on another?

Preferred becomes Discouraged

This opens up a security risk: since Preferred “attempts” verification but allows it to not be present, a U2F device can be “downgraded” from Multi-Factor to Single-Factor by using it with CTAP1 instead of CTAP2. Since this is also per request/response, a compromised client could tamper with the communication to the authenticator, silently removing the requested userVerification parameter, and the server would allow it.

This means that in reality, Preferred is policy- and security-wise equivalent to Discouraged, but with a more annoying UI/UX for users, who have to conduct a verification that doesn’t actually help identify them.

Remember - if unspecified, ‘Preferred’ is the default user verification policy in Webauthn!

Lock Out / Abuse Vectors

There is also a potential abuse vector here. Many devices such as U2F tokens perform a “trust on first use” for their PIN setup. This means that the first time user verification is requested, the PIN is configured at that point in time.

Consider a token that is always used on Firefox: a malicious person could connect the device to Chromium and set up the PIN without the knowledge of the owner. The owner could continue to use the device, and when Firefox eventually supports CTAP2, or they swap computer or browser, they would not know the PIN and their token would effectively be unusable at that point. They would need to reset it, potentially causing them to be locked out from accounts, but more likely causing them to need to conduct a lot of password/credential resets.

Unable to implement Authenticator Policy

One of the greatest issues here, though, is that because user verification is a property of the request/response flow and not of each device, authenticator policy and mixed credential types are unable to exist in the current implementation of Webauthn.

Consider a user who has enrolled, say, their laptop’s U2F device + password, and their iPhone’s TouchID to a server. Both of these are Multi-Factor credentials. The U2F token is a Single-Factor Device that becomes Multi-Factor in combination with the password. The iPhone TouchID is a Multi-Factor Device on its own due to the biometric verification it is capable of.

We should be able to have a website request webauthn and, based on the device used, flow to the next required step. If the device was the iPhone, we would be authenticated, as we have authenticated with a Multi-Factor credential. If we saw the U2F device, we would then move to request the password, since we have only received a single factor. However, Webauthn is unable to express this authentication flow.

If we requested Required, we would exclude the U2F device.

If we requested Discouraged, we would exclude the iPhone.

If we request Preferred, the U2F device could be used on a different browser with CTAP2, either:

  • bypassing the password, since the device now appears to be a self-contained Multi-Factor credential; or
  • prompting for the PIN needlessly, before we progress to the password step anyway.

The request to an iPhone could be tampered with, preventing the verification from occurring and turning it into a single-factor device (presence only).

Today, these mixed device scenarios cannot exist in Webauthn. We are unable to create the policy around Single-Factor and Multi-Factor devices as defined by NIST because these require us to assert the verification requirements per credential, but Webauthn cannot satisfy this.

We would need to pre-ask the user how they want to authenticate on that device and then only send a Webauthn challenge that can satisfy the authentication policy we have decided on for those credentials.
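
To make the desired flow concrete, here is a rough sketch of the server-side logic we would like to run after receiving an assertion. The types and names are hypothetical (not the Webauthn RS API); the point is that the decision depends on per-credential verification state that Webauthn today cannot guarantee or enforce.

// Hypothetical sketch - not the Webauthn RS API.
enum AuthState {
    // A Multi-Factor credential (e.g. TouchID) authenticated - we are done.
    Authenticated,
    // A Single-Factor credential (e.g. U2F token) - ask for the password too.
    NeedPassword,
}

struct StoredCredential {
    cred_id: Vec<u8>,
    // Recorded at registration: did this credential verify the user?
    registered_with_verification: bool,
}

fn next_step(cred: &StoredCredential, assertion_user_verified: bool) -> AuthState {
    if cred.registered_with_verification && assertion_user_verified {
        AuthState::Authenticated
    } else {
        AuthState::NeedPassword
    }
}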

How to fix this

The solution here is to change PublicKeyCredentialDescriptor in the Webauthn standard to contain an optional UserVerificationRequirement field. This would allow a “global” default set by the server, with per-credential requirements defined on top of it. The user verification properties observed during registration could then be associated with that credential and enforced by the server, guaranteeing the behaviour of a webauthn device. It would also give the ‘Preferred’ option a valid and useful meaning during registration: devices capable of verification can provide it or not, and that verification boolean can then be transformed into a Discouraged or Required setting for that credential for future authentications.

The second change would be to disallow ‘Preferred’ as a valid value in the “global” default during authentications. The new “default” global value should be ‘Discouraged’ and then only credentials that registered with verification would indicate that in their PublicKeyCredentialDescriptor.
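
As a rough sketch of what this could look like (illustrative Rust-style types, not the actual WebIDL change or any existing API):

// Illustrative sketch only - not the current Webauthn specification.
enum UserVerificationRequirement {
    Discouraged,
    Required,
}

struct PublicKeyCredentialDescriptor {
    type_: String,           // "public-key"
    id: Vec<u8>,             // the credential id the server stored at registration
    transports: Vec<String>,
    // Proposed addition: the verification behaviour recorded for this
    // credential at registration, enforced on later authentications.
    // The "global" default applies when this is None.
    user_verification: Option<UserVerificationRequirement>,
}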

This would resolve the issues above by:

  • Making the use of an authenticator consistent after registration. For example, authenticators registered with CTAP1 would stay ‘Discouraged’ even when used with CTAP2
  • If PIN/Verification abuse occurred, the credentials registered on CTAP1 without verification would continue to be ‘presence only’ preventing the lockout
  • Allowing the server to proceed with the authentication flow based on which credential authenticated and provide logic about further factors if needed.
  • Allowing true Single Factor and Multi Factor device policies to be expressed in line with NIST SP800-63b, so users can have a mix of Single and Multi Factor devices associated with a single account.

I have since opened this issue with the webauthn specification, but early comments seem to be highly focused on the current expression of the standard rather than the issues around user experience and the ability for identity systems to accurately express credential policy.

In the meantime, I am going to make changes to Webauthn RS to help avoid some of these issues:

  • Preferred will be renamed to Preferred_Is_Equivalent_To_Discouraged (it will still emit ‘Preferred’ in the JSON, this only changes the Rust API enum)
  • Credential structures persisted by applications will contain the boolean of user-verification if it occurred during registration
  • During an authentication, if the set of credentials contains inconsistent user-verification booleans, an error will be raised
  • Authentication User Verification Policy is derived from the set of credentials having a consistent user-verification boolean

While not perfect, it will mean that it’s “hard to hold it wrong” with Webauthn RS.
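
As a rough illustration of how the last two points fit together (hypothetical types, not necessarily the final Webauthn RS API), the policy derivation looks something like:

// Hypothetical sketch of deriving the authentication policy from the
// user-verification booleans recorded at registration.
#[derive(Debug, PartialEq)]
enum DerivedPolicy {
    Discouraged,
    Required,
}

struct Credential {
    cred_id: Vec<u8>,
    user_verified_at_registration: bool,
}

fn derive_policy(creds: &[Credential]) -> Result<DerivedPolicy, &'static str> {
    let verified = creds.iter().filter(|c| c.user_verified_at_registration).count();
    if verified == 0 {
        Ok(DerivedPolicy::Discouraged)
    } else if verified == creds.len() {
        Ok(DerivedPolicy::Required)
    } else {
        // Inconsistent booleans across the credential set - raise an error
        // rather than guessing and silently downgrading.
        Err("inconsistent user-verification state across credentials")
    }
}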

Acknowledgements

Thanks to both @Charcol0x89 and @JuxhinDB for reviewing this post.

Rust, SIMD and target-feature flags

This year I’ve been working on concread and one of the ways that I have improved it is through the use of packed_simd for parallel key lookups in hashmaps. During testing I saw a ~10% speed up in Kanidm which heavily relies on concread, so great, pack it up, go home.
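
The core idea is to compare one lookup key against several node keys in a single operation. Here is a simplified sketch assuming the packed_simd_2 crate, with made-up key values - not the actual concread internals.

use packed_simd_2::u64x8;

// Simplified sketch: check 8 candidate keys in a node at once.
// Comparisons like this are what compile down to the PCMPEQ*/VPCMPEQQ
// instructions shown in the disassemblies below.
fn node_contains(slot_keys: &[u64; 8], wanted: u64) -> bool {
    let keys = u64x8::from_slice_unaligned(slot_keys);
    let needle = u64x8::splat(wanted);
    keys.eq(needle).any()
}

fn main() {
    let keys = [3, 9, 27, 81, 243, 729, 2187, 6561];
    assert!(node_contains(&keys, 243));
    assert!(!node_contains(&keys, 5));
}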

…?

Or so I thought. Recently I was learning to use Ghidra with a friend, and as a thought exercise I wanted to see how Rust decompiled. I put the concread test suite into Ghidra and took a look. Looking at the version of concread with simd_support enabled, I saw this in the disassembly (truncated for readability).

  **************************************************************
  *                          FUNCTION                          *
  **************************************************************
  Simd<[packed_simd_2--masks--m64;8]> __stdcall eq(Simd<[p
 ...
100114510 55              PUSH       RBP
100114511 48 89 e5        MOV        RBP,RSP
100114514 48 83 e4 c0     AND        RSP,-0x40
100114518 48 81 ec        SUB        RSP,0x100
          00 01 00 00
10011451f 48 89 f8        MOV        RAX,__return_storage_ptr__
100114522 0f 28 06        MOVAPS     XMM0,xmmword ptr [self->__0.__0]
 ...
100114540 66 0f 76 c4     PCMPEQD    XMM0,XMM4
100114544 66 0f 70        PSHUFD     XMM4,XMM0,0xb1
          e0 b1
100114549 66 0f db c4     PAND       XMM0,XMM4
 ...
100114574 0f 29 9c        MOVAPS     xmmword ptr [RSP + local_90],XMM3
          24 b0 00
          00 00
1001145b4 48 89 7c        MOV        qword ptr [RSP + local_c8],__return_storage_pt
          24 78
 ...
1001145be 0f 29 44        MOVAPS     xmmword ptr [RSP + local_e0],XMM0
          24 60
 ...
1001145d2 48 8b 44        MOV        RAX,qword ptr [RSP + local_c8]
          24 78
1001145d7 0f 28 44        MOVAPS     XMM0,xmmword ptr [RSP + local_e0]
          24 60
1001145dc 0f 29 00        MOVAPS     xmmword ptr [RAX],XMM0
 ...
1001145ff 48 89 ec        MOV        RSP,RBP
100114602 5d              POP        RBP
100114603 c3              RET

Now, it’s been a long time since I’ve had to look at x86_64 asm, so I saw this and went “great, it’s not using a loop, those aren’t simple TEST/JNZ instructions, they have a lot of letters, awesome, it’s using HW accel”.

Time passes …

Coming back to this, I have been wondering how we could enable SIMD in concread at SUSE, since 389 Directory Server has just merged a change for 2.0.0 that uses concread as a cache. For this I needed to know what minimum CPU is supported at SUSE. After some chasing internally to establish what we need, I asked in the Rust Brisbane group how you can tell packed_simd to only emit instructions that work on a minimum CPU level, rather than on my CPU or the builder’s CPU.

The response was “but that’s already how it works”.

I was helpfully directed to the packed_simd perf guide which discusses the use of target features and target cpu. At that point I realised that for this whole time I’ve only been using the default:

# rustc --print cfg | grep -i target_feature
target_feature="fxsr"
target_feature="sse"
target_feature="sse2"

The PCMPEQD is from SSE2, but my CPU is much newer and should support AVX and AVX2. Retesting this, I can see my CPU has much more:

# rustc --print cfg -C target-cpu=native | grep -i target_feature
target_feature="aes"
target_feature="avx"
target_feature="avx2"
target_feature="bmi1"
target_feature="bmi2"
target_feature="fma"
target_feature="fxsr"
target_feature="lzcnt"
target_feature="pclmulqdq"
target_feature="popcnt"
target_feature="rdrand"
target_feature="rdseed"
target_feature="sse"
target_feature="sse2"
target_feature="sse3"
target_feature="sse4.1"
target_feature="sse4.2"
target_feature="ssse3"
target_feature="xsave"
target_feature="xsavec"
target_feature="xsaveopt"
target_feature="xsaves"

All this time, I haven’t been using my native features!

For local builds now, I have .cargo/config set with:

[build]
rustflags = "-C target-cpu=native"

I recompiled concread and I now see in Ghidra:

00198960 55              PUSH       RBP
00198961 48 89 e5        MOV        RBP,RSP
00198964 48 83 e4 c0     AND        RSP,-0x40
00198968 48 81 ec        SUB        RSP,0x100
         00 01 00 00
0019896f 48 89 f8        MOV        RAX,__return_storage_ptr__
00198972 c5 fc 28 06     VMOVAPS    YMM0,ymmword ptr [self->__0.__0]
00198976 c5 fc 28        VMOVAPS    YMM1,ymmword ptr [RSI + self->__0.__4]
         4e 20
0019897b c5 fc 28 12     VMOVAPS    YMM2,ymmword ptr [other->__0.__0]
0019897f c5 fc 28        VMOVAPS    YMM3,ymmword ptr [RDX + other->__0.__4]
         5a 20
00198984 c4 e2 7d        VPCMPEQQ   YMM0,YMM0,YMM2
         29 c2
00198989 c4 e2 75        VPCMPEQQ   YMM1,YMM1,YMM3
         29 cb
0019898e c5 fc 29        VMOVAPS    ymmword ptr [RSP + local_a0[0]],YMM1
         8c 24 a0
         00 00 00
...
001989e7 48 89 ec        MOV        RSP,RBP
001989ea 5d              POP        RBP
001989eb c5 f8 77        VZEROUPPER
001989ee c3              RET

VPCMPEQQ is the AVX2 compare instruction (You can tell it’s AVX2 due to the register YMM, AVX uses XMM). Which means now I’m getting the SIMD comparisons I wanted!

These can be enabled with RUSTFLAGS='-C target-feature=+avx2,+avx' for selected builds, or in your .cargo/config. For local development, it may be simpler to just use target-cpu=native.
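
For example, a .cargo/config that targets a fixed feature set (rather than the build machine's CPU) might look like:

[build]
rustflags = "-C target-feature=+avx,+avx2"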

Deploying sccache on SUSE

sccache is a ccache/icecc-like tool from Mozilla, which in addition to working with C and C++, is also able to help with Rust builds.

Adding the Repo

A submission to Factory (tumbleweed) has been made, so check if you can install from zypper:

zypper install sccache

If not, sccache is still part of devel:tools:building so you will need to add the repo to use sccache.

zypper ar -f obs://devel:tools:building devel:tools:building
zypper install sccache

It’s also important that you do not have ccache installed. ccache intercepts the gcc command, so you could potentially end up “double caching”.

zypper rm ccache

Single Host

To use sccache on your host, you need to set the following environment variables:

export RUSTC_WRAPPER=sccache
export CC="sccache /usr/bin/gcc"
export CXX="sccache /usr/bin/g++"
# Optional: This can improve rust caching
# export CARGO_INCREMENTAL=false

This will allow sccache to wrap your compiler commands. You can show your current sccache status with:

sccache -s
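
For example, after building a Rust project through the wrapper, the compile request and cache counters reported by sccache should have increased:

cargo build
sccache -s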

There is more information about using cloud/remote storage for the cache on the sccache project site.

Distributed Compilation

sccache is also capable of distributed compilation, where a number of builder servers can compile items and return the artifacts to your machine. This can save you time by allowing compilation over a cluster, by using a faster remote builder, or just by keeping your laptop cool.

Three components are needed to make this work. A scheduler that coordinates the activities, one or more builders that provide their CPU, and a client that submits compilation jobs.

The sccache package contains the required elements for all three parts.

Note that the client does not need to be the same version of SUSE or even the same distro as the scheduler or builder. This is because the client is able to bundle and submit its toolchains to the workers on the fly. Neat! sccache is also capable of compiling for macOS and Windows, but in these cases the toolchains cannot be submitted on the fly and require extra work to configure.

Scheduler

The scheduler is configured with /etc/sccache/scheduler.conf. You need to define the listening ip, client auth, and server (builder) auth methods. The example configuration is well commented to help with this:

# The socket address the scheduler will listen on. It's strongly recommended
# to listen on localhost and put a HTTPS server in front of it.
public_addr = "127.0.0.1:10600"
# public_addr = "[::1]:10600"

[client_auth]
# This is how a client will authenticate to the scheduler.
# # sccache-dist auth generate-shared-token --help
type = "token"
token = "token here"
#
# type = "jwt_hs256"
# secret_key = ""

[server_auth]
# sccache-dist auth --help
# To generate the secret_key:
# # sccache-dist auth generate-jwt-hs256-key
# To generate a key for a builder, use the command:
# # sccache-dist auth generate-jwt-hs256-server-token --config /etc/sccache/scheduler.conf --secret-key "..." --server "builderip:builderport"
type = "jwt_hs256"
secret_key = "my secret key"
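
The tokens and keys referenced in the comments above are generated with sccache-dist itself, for example:

sccache-dist auth generate-shared-token
sccache-dist auth generate-jwt-hs256-key
sccache-dist auth generate-jwt-hs256-server-token --config /etc/sccache/scheduler.conf \
    --secret-key "..." --server "builderip:builderport"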

You can start the scheduler with:

systemctl start sccache-dist-scheduler.service

If you have issues you can increase logging verbosity with:

# systemctl edit sccache-dist-scheduler.service
[Service]
Environment="RUST_LOG=sccache=trace"

Builder

Similar to the scheduler, the builder is configured with /etc/sccache/builder.conf. Most of the defaults should be left “as is”, but you will need to add the token generated from the commands shown in the server_auth section of scheduler.conf.

# This is where client toolchains will be stored.
# You should not need to change this as it is configured to work with systemd.
cache_dir = "/var/cache/sccache-builder/toolchains"
# The maximum size of the toolchain cache, in bytes.
# If unspecified the default is 10GB.
# toolchain_cache_size = 10737418240
# A public IP address and port that clients will use to connect to this builder.
public_addr = "127.0.0.1:10501"
# public_addr = "[::1]:10501"

# The URL used to connect to the scheduler (should use https, given an ideal
# setup of a HTTPS server in front of the scheduler)
scheduler_url = "https://127.0.0.1:10600"

[builder]
type = "overlay"
# The directory under which a sandboxed filesystem will be created for builds.
# You should not need to change this as it is configured to work with systemd.
build_dir = "/var/cache/sccache-builder/tmp"
# The path to the bubblewrap version 0.3.0+ `bwrap` binary.
# You should not need to change this as it is configured for a default SUSE install.
bwrap_path = "/usr/bin/bwrap"

[scheduler_auth]
type = "jwt_token"
# This will be generated by the `generate-jwt-hs256-server-token` command or
# provided by an administrator of the sccache cluster. See /etc/sccache/scheduler.conf
token = "token goes here"

Again, you can start the builder with:

systemctl start sccache-dist-builder.service

If you have issues you can increase logging verbosity with:

# systemctl edit sccache-dist-builder.service
[Service]
Environment="RUST_LOG=sccache=trace"

You can configure many hosts as builders, and compilation jobs will be distributed amongst them.

Client

The client is the part that submits compilation work. You need to configure your machine with the same environment variables as in the single host setup above.

Additionally you need to configure the file ~/.config/sccache/config. An example of this can be found in /etc/sccache/client.example.

[dist]
# The URL used to connect to the scheduler (should use https, given an ideal
# setup of a HTTPS server in front of the scheduler)
scheduler_url = "http://x.x.x.x:10600"
# Used for mapping local toolchains to remote cross-compile toolchains. Empty in
# this example where the client and build server are both Linux.
toolchains = []
# Size of the local toolchain cache, in bytes (5GB here, 10GB if unspecified).
# toolchain_cache_size = 5368709120

cache_dir = "/tmp/toolchains"

[dist.auth]
type = "token"
# This should match the `client_auth` section of the scheduler config.
token = ""

You can check the status with:

sccache --stop-server
sccache --dist-status

If you have issues, you can increase the logging with:

sccache --stop-server
SCCACHE_NO_DAEMON=1 RUST_LOG=sccache=trace sccache --dist-status

Then begin a compilation job and you will get the extra logging. To undo this, run:

sccache --stop-server
sccache --dist-status

In addition, even in distributed mode, sccache can still use cloud or remote storage for items, using its cache first and the distributed compilation second. Anything that can’t be remotely compiled will be run locally.

Verifying

If you compile something from your client, you should see messages like this appear in journald in the builder/scheduler machine:

INFO 2020-11-19T22:23:46Z: sccache_dist: Job 140 created and will be assigned to server ServerId(V4(x.x.x.x:10501))
INFO 2020-11-19T22:23:46Z: sccache_dist: Job 140 successfully assigned and saved with state Ready
INFO 2020-11-19T22:23:46Z: sccache_dist: Job 140 updated state to Started
INFO 2020-11-19T22:23:46Z: sccache_dist: Job 140 updated state to Complete

Using SUSE Leap Enterprise with Docker

It’s a little bit annoying to connect up all the parts for this. If you have a SLE15 system then credentials for SCC are automatically passed into containers via secrets.

But if you are on a non-SLE base, like myself with macOS or openSUSE, you’ll need to provide these to the container in another way. The documentation is a bit tricky to search through and connect up, but in summary:

  • Get /etc/SUSEConnect and /etc/zypp/credentials.d/SCCcredentials from an SLE install that has been registered. The SLE version does not matter.
  • Mount them into the image:
docker ... -v /scc/SUSEConnect:/etc/SUSEConnect \
    -v /scc/SCCcredentials:/etc/zypp/credentials.d/SCCcredentials \
    ...
    registry.suse.com/suse/sle15:15.2

Now you can use the images from the SUSE registry. For example docker pull registry.suse.com/suse/sle15:15.2 and have working zypper within them.

If you want to add extra modules to your container (you can list what’s available with container-suseconnect from an existing SLE container of the same version), you can do this by adding environment variables at startup. For example, to add dev tools like gdb:

docker ... -e ADDITIONAL_MODULES=sle-module-development-tools \
    -v /scc/SUSEConnect:/etc/SUSEConnect \
    -v /scc/SCCcredentials:/etc/zypp/credentials.d/SCCcredentials \
    ...
    registry.suse.com/suse/sle15:15.2

This also works during builds to add extra modules.

HINT: SUSEConnect and SCCcredentials are not version dependent, so they will work in any image version.