
Workflow Identity and Kubernetes with OpenUnison

November 20, 2024

by Marc Boorshtein

OpenUnison isn't just for user identities! You can also use OpenUnison to validate workload identities and use them to provide secure access to your Kubernetes clusters. In a past post, I wrote about how important it is not to use Kubernetes ServiceAccount tokens from external workflows, and walked through using Okta to authenticate to your cluster from a pipeline. I've also written about how you can use kube-oidc-proxy to provide secure access based on your workflow's identity and JWTs. Using OpenUnison, we can combine these concepts to provide secure access to our clusters from external workloads. We're going to focus on GitLab in this post, but the examples apply to any workload external to your clusters.

Security Token Services - How To Exchange Tokens

Almost all workflow systems today provide an identity that is unique to each workflow instance. For example, GitHub and GitLab both issue a per-workflow identity in the form of a JSON Web Token (JWT). Here's an example of a JWT issued by GitLab:

{
  "namespace_id": "63733341",
  "namespace_path": "mlbiam",
  "project_id": "64662446",
  "project_path": "mlbiam/wfid-to-kubernetes-blog",
  "user_id": "13710079",
  "user_login": "mlbiam",
  "user_email": "marc@tremolo.io",
  "user_access_level": "owner",
  "pipeline_id": "1551523418",
  "pipeline_source": "push",
  "job_id": "8416812862",
  "ref": "main",
  "ref_type": "branch",
  "ref_path": "refs/heads/main",
  "ref_protected": "true",
  "groups_direct": [
    "tremolo-sigstore-lab",
    "tremolosecurity"
  ],
  "runner_id": 12270837,
  "runner_environment": "gitlab-hosted",
  "sha": "94aaf63b029c46a78cbfe218be4a4786b472091b",
  "project_visibility": "public",
  "ci_config_ref_uri": "gitlab.com/mlbiam/wfid-to-kubernetes-blog//.gitlab-ci.yml@refs/heads/main",
  "ci_config_sha": "94aaf63b029c46a78cbfe218be4a4786b472091b",
  "jti": "56e7734d-e22b-4bdd-ae6b-e55e8ce1c325",
  "iat": 1732045603,
  "nbf": 1732045598,
  "exp": 1732049203,
  "iss": "https://gitlab.com",
  "sub": "project_path:mlbiam/wfid-to-kubernetes-blog:ref_type:branch:ref:main",
  "aud": "https://k8sou.domain/"
}

The token itself contains multiple attributes, or claims, that we can use to identify the workload. While Kubernetes 1.31 has beta support for multiple authentication sources, that feature is still subject to change, and it's only available for manual installations since none of the distribution vendors support it yet. Instead, we'll exchange this token for a kubectl configuration.
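
If you want to inspect these claims yourself, the payload is just base64url-encoded JSON in the token's second segment. Here's a small bash helper, assuming jq is installed and that the raw JWT is in a variable named GITLAB_JWT (the variable name is just an example):

# jwt_claims: print the claims (the second segment) of a JWT.
# base64url uses - and _ instead of + and /, and drops padding,
# so we translate the alphabet and re-pad before decoding.
jwt_claims() {
  local p
  p=$(cut -d. -f2 <<<"$1" | tr '_-' '/+')
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
  base64 -d <<<"$p"
}

jwt_claims "$GITLAB_JWT" | jq .

We'll reuse this helper below when we need to debug tokens.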

Once we have this token, we want to exchange it for a token our cluster can use. This type of service is often referred to as a "Security Token Service" (STS). It's not designed to be used by humans, but by systems and workloads that need to identify themselves to a remote service using an existing credential that the remote service doesn't know how to validate directly. Think of it like logging in to a website: you exchange your credentials for a cookie, and that cookie authenticates your access to other pages after login. An STS works the same way.
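
Conceptually, the exchange is a single HTTP call: present the credential the STS knows how to validate, and get back a credential the target system accepts. Here's a hypothetical sketch of that shape; the URL is a placeholder, and we'll build the real endpoint in OpenUnison below:

# hypothetical STS exchange: the bearer token is the workload's
# existing identity, the response is a credential for the target
curl -H "Authorization: Bearer $EXTERNAL_JWT" \
     https://sts.example.com/exchange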

We want more than just a token, though, since nearly every Kubernetes client SDK will automatically configure itself from a kubectl configuration file. The good news is, OpenUnison already knows how to generate one for us! When you log in to OpenUnison's portal and generate a kubectl configuration for your client, or when you use the oulogin utility, you're using an API that can be called on its own.

We're going to build our STS using OpenUnison's built-in capabilities, which we'll look at next.

Creating Custom Applications in OpenUnison

Now that we know we need to build an STS, the next step is to figure out how. OpenUnison has an extensive application configuration model. We could start from scratch, but it's much easier to begin with an existing example. For our STS, we're going to start with the token application in the openunison namespace:

$ k get application token -n openunison -o yaml
apiVersion: openunison.tremolo.io/v2
kind: Application
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "30"
    meta.helm.sh/release-name: orchestra-login-portal
    meta.helm.sh/release-namespace: openunison
  creationTimestamp: "2024-11-19T16:54:26Z"
  generation: 1
  labels:
    app.kubernetes.io/component: openunison-applications
    app.kubernetes.io/instance: openunison-orchestra-login-portal
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: openunison
    app.kubernetes.io/part-of: openunison
  name: token
  namespace: openunison
  resourceVersion: "14599"
  uid: ff56581a-dd37-4549-8bb8-e3d57eff636e
spec:
  azTimeoutMillis: 3000
  cookieConfig:
    cookiesEnabled: true
    domain: '#[OU_HOST]'
    httpOnly: true
    keyAlias: session-unison
    logoutURI: /logout
    scope: -1
    secure: true
    sessionCookieName: tremolosession
    timeout: 900
  isApp: true
  urls:
  - authChain: login-service
    azRules:
    - constraint: o=Tremolo
      scope: dn
    filterChain:
    - className: com.tremolosecurity.proxy.filters.AzFilter
      params:
        azFail: force-logout
        rules:
        - custom;require-session
    - className: com.tremolosecurity.proxy.filters.XForward
      params:
        createHeaders: "false"
    - className: com.tremolosecurity.proxy.filters.SetNoCacheHeaders
      params: {}
    - className: com.tremolosecurity.proxy.filters.MapUriRoot
      params:
        newRoot: /token
        paramName: tokenURI
    hosts:
    - '#[OU_HOST]'
    proxyTo: http://ouhtml-orchestra-login-portal.openunison.svc:8080${tokenURI}
    results:
      auFail: default-login-failure
      azFail: default-login-failure
    uri: /k8stoken
  - authChain: login-service
    azRules:
    - constraint: o=Tremolo
      scope: dn
    filterChain:
    - className: com.tremolosecurity.proxy.filters.AzFilter
      params:
        azFail: force-logout
        rules:
        - custom;require-session
    - className: com.tremolosecurity.scalejs.token.ws.ScaleToken
      params:
        displayNameAttribute: sub
        frontPage.text: Use this kubectl command to set your user in .kubectl/config.  Refresh
          this screen to generate a new set of tokens.  Logging out will clear all
          of your sessions.
        frontPage.title: Kubernetes kubectl command
        homeURL: /scale/
        k8sCaCertName: '#[K8S_API_SERVER_CERT:unison-ca]'
        kubectlTemplate: ' export TMP_CERT=\$(mktemp) && echo -e "$k8s_newline_cert$"
          > \$TMP_CERT && kubectl config set-cluster #[K8S_CLUSTER_NAME:kubernetes]
          --server=#[K8S_URL] --certificate-authority=\$TMP_CERT --embed-certs=true
          && kubectl config set-context #[K8S_CLUSTER_NAME:kubernetes] --cluster=#[K8S_CLUSTER_NAME:kubernetes]
          --user=$user_id$@#[K8S_CLUSTER_NAME:kubernetes]  && kubectl config set-credentials
          $user_id$@#[K8S_CLUSTER_NAME:kubernetes]  --auth-provider=oidc --auth-provider-arg=client-secret=
          --auth-provider-arg=idp-issuer-url=$token.claims.issuer$ --auth-provider-arg=client-id=$token.trustName$
          --auth-provider-arg=refresh-token=$token.refreshToken$  --auth-provider-arg=id-token=$token.encodedIdJSON$  --auth-provider-arg=idp-certificate-authority-data=#[IDP_CERT_DATA:$ou_b64_cert$]   &&
          kubectl config use-context #[K8S_CLUSTER_NAME:kubernetes] && rm \$TMP_CERT'
        kubectlUsage: Run the kubectl command to set your user-context and server
          connection
        kubectlWinUsage: |
          \$TMP_CERT=New-TemporaryFile ; "$k8s_newline_cert_win$" | out-file \$TMP_CERT -encoding oem ; kubectl config set-cluster #[K8S_CLUSTER_NAME:kubernetes] --server=#[K8S_URL]  --certificate-authority=\$TMP_CERT --embed-certs=true ; kubectl config set-context #[K8S_CLUSTER_NAME:kubernetes] --cluster=#[K8S_CLUSTER_NAME:kubernetes] --user=$user_id$@#[K8S_CLUSTER_NAME:kubernetes]  ; kubectl config set-credentials $user_id$@#[K8S_CLUSTER_NAME:kubernetes]  --auth-provider=oidc --auth-provider-arg=client-secret= --auth-provider-arg=idp-issuer-url=$token.claims.issuer$ --auth-provider-arg=client-id=$token.trustName$ --auth-provider-arg=refresh-token=$token.refreshToken$  --auth-provider-arg=id-token=$token.encodedIdJSON$  --auth-provider-arg=idp-certificate-authority-data=$ou_b64_cert$ ; kubectl config use-context #[K8S_CLUSTER_NAME:kubernetes] ; Remove-Item -recurse -force \$TMP_CERT
        logoutURL: /logout
        oulogin: kubectl oulogin --host=#[OU_HOST]
        tokenClassName: com.tremolosecurity.scalejs.KubectlTokenLoader
        uidAttributeName: uid
        unisonCaCertName: unison-ca
        warnMinutesLeft: "5"
    hosts:
    - '#[OU_HOST]'
    results:
      auFail: default-login-failure
      azFail: default-login-failure
    uri: /k8stoken/token

That's quite a bit of YAML. The first thing to point out is that there are two items in spec.urls: the first loads the HTML frontend, and the second enables our API. We don't need the HTML frontend because we're only going to call this service from workflows, so we'll remove that block.

Next, look at spec.cookieConfig. This section defines how sessions are managed. Sessions matter for an API being called directly from a web frontend, but our API doesn't need them, so we're going to change spec.cookieConfig.cookiesEnabled to false.

After we've updated our cookie configuration to disable sessions, we'll look at spec.urls[0].filterChain. There are two filters: the first executes an authorization check, and the second generates our kubectl configuration. We're going to remove the first one. In the portal, that check is important because it ensures the user hasn't been signed out by an administrator, but since our API authenticates every request, we don't need that protection.

Finally, we're going to change the name of our application and its spec.urls[0].uri to reflect that it's a new application. We've customized the application to the point that it can be deployed, but how will we authenticate to our service? Right now, spec.urls[0].authChain is set to login-service, the chain users go through to log in to OpenUnison. Next, we'll create an anonymous authentication chain we can use for testing.

Creating an Anonymous Authentication Chain

Allowing anonymous access to our cluster is not very secure! So why enable it? When creating applications in OpenUnison, there are often multiple moving parts. In our end state, we'll need to validate a token, generate a kubectl configuration, and make sure it works. If any of those components fails, we want to be able to easily isolate which one is the problem. An anonymous authentication chain helps us do exactly that.

OpenUnison defines authentication chains to string together multiple authentication mechanisms. This gives you a tremendous amount of flexibility in determining how to authenticate users. In our case, our chain is going to do four things:

  • Authenticate the user - In this case, we're going to use anonymous authentication
  • Map attributes - We want to normalize our "user" by mapping attributes from the token
  • Just-In-Time provision the user - OpenUnison stores all users, with their groups, as CRDs
  • Generate an OIDC Token - We'll generate an OIDC token for the user that will be used to generate the kubectl configuration file

Just as with our application, we're going to start with an existing authentication chain.

$ k get authchain enterprise-idp -n openunison -o yaml
apiVersion: openunison.tremolo.io/v1
kind: AuthenticationChain
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "30"
    meta.helm.sh/release-name: orchestra-login-portal
    meta.helm.sh/release-namespace: openunison
  creationTimestamp: "2024-11-19T16:54:26Z"
  generation: 1
  labels:
    app.kubernetes.io/component: openunison-authchains
    app.kubernetes.io/instance: openunison-orchestra-login-portal
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: openunison
    app.kubernetes.io/part-of: openunison
  name: enterprise-idp
  namespace: openunison
  resourceVersion: "14618"
  uid: 2f821a6b-dadd-4738-8662-256e8cbbd2e0
spec:
  authMechs:
  - name: oidc
    params:
      bearerTokenName: oidcBearerToken
      clientid: xxxx
      defaultObjectClass: inetOrgPerson
      forceAuthentication: "true"
      hd: ""
      issuer: https://xxxxx
      linkToDirectory: "false"
      lookupFilter: (uid=${sub})
      noMatchOU: oidc
      responseType: code
      scope: openid email profile groups
      uidAttr: sub
      userLookupClassName: com.tremolosecurity.unison.proxy.auth.openidconnect.loadUser.LoadAttributesFromWS
    required: required
    secretParams:
    - name: secretid
      secretKey: OIDC_CLIENT_SECRET
      secretName: orchestra-secrets-source#[openunison.static-secret.suffix]
  - name: map
    params:
      map:
      - uid|composite|${sub}
      - mail|composite|${email}
      - givenName|composite|${given_name}
      - sn|composite|${family_name}
      - displayName|composite|${name}
      - memberOf|user|groups
    required: required
  - name: jit
    params:
      nameAttr: uid
      workflowName: jitdb
    required: required
  - name: genoidctoken
    params:
      idpName: k8sidp
      trustName: kubernetes
    required: required
  level: 20
  root: o=Data

We're going to make a few updates: rename the object to anon-token, change spec.level to 1, remove the oidc authentication mechanism, and set up a static mapping. Now we can create our new chain:

apiVersion: openunison.tremolo.io/v1
kind: AuthenticationChain
metadata:
  name: anon-token
  namespace: openunison
spec:
  authMechs:
  - name: map
    params:
      map:
      - uid|static|anonymous
      - mail|static|none@none.io
      - givenName|static|Anon
      - sn|static|User
      - displayName|static|Anon User
      - memberOf|user|groups
    required: required
  - name: jit
    params:
      nameAttr: uid
      workflowName: jitdb
    required: required
  - name: genoidctoken
    params:
      idpName: k8sidp
      trustName: kubernetes
    required: required
  level: 1

Now, we'll create our application object:

apiVersion: openunison.tremolo.io/v2
kind: Application
metadata:
  name: wftoken
  namespace: openunison
spec:
  azTimeoutMillis: 3000
  cookieConfig:
    cookiesEnabled: false
    domain: '#[OU_HOST]'
    httpOnly: true
    keyAlias: session-unison
    logoutURI: /logout
    scope: -1
    secure: true
    sessionCookieName: tremolosession
    timeout: 900
  isApp: true
  urls:
  - authChain: anon-token
    azRules:
    - constraint: o=Tremolo
      scope: dn
    filterChain:
    - className: com.tremolosecurity.scalejs.token.ws.ScaleToken
      params:
        displayNameAttribute: sub
        frontPage.text: Use this kubectl command to set your user in .kubectl/config.  Refresh
          this screen to generate a new set of tokens.  Logging out will clear all
          of your sessions.
        frontPage.title: Kubernetes kubectl command
        homeURL: /scale/
        k8sCaCertName: '#[K8S_API_SERVER_CERT:unison-ca]'
        kubectlTemplate: ' export TMP_CERT=\$(mktemp) && echo -e "$k8s_newline_cert$"
          > \$TMP_CERT && kubectl config set-cluster #[K8S_CLUSTER_NAME:kubernetes]
          --server=#[K8S_URL] --certificate-authority=\$TMP_CERT --embed-certs=true
          && kubectl config set-context #[K8S_CLUSTER_NAME:kubernetes] --cluster=#[K8S_CLUSTER_NAME:kubernetes]
          --user=$user_id$@#[K8S_CLUSTER_NAME:kubernetes]  && kubectl config set-credentials
          $user_id$@#[K8S_CLUSTER_NAME:kubernetes]  --auth-provider=oidc --auth-provider-arg=client-secret=
          --auth-provider-arg=idp-issuer-url=$token.claims.issuer$ --auth-provider-arg=client-id=$token.trustName$
          --auth-provider-arg=refresh-token=$token.refreshToken$  --auth-provider-arg=id-token=$token.encodedIdJSON$  --auth-provider-arg=idp-certificate-authority-data=#[IDP_CERT_DATA:$ou_b64_cert$]   &&
          kubectl config use-context #[K8S_CLUSTER_NAME:kubernetes] && rm \$TMP_CERT'
        kubectlUsage: Run the kubectl command to set your user-context and server
          connection
        kubectlWinUsage: |
          \$TMP_CERT=New-TemporaryFile ; "$k8s_newline_cert_win$" | out-file \$TMP_CERT -encoding oem ; kubectl config set-cluster #[K8S_CLUSTER_NAME:kubernetes] --server=#[K8S_URL]  --certificate-authority=\$TMP_CERT --embed-certs=true ; kubectl config set-context #[K8S_CLUSTER_NAME:kubernetes] --cluster=#[K8S_CLUSTER_NAME:kubernetes] --user=$user_id$@#[K8S_CLUSTER_NAME:kubernetes]  ; kubectl config set-credentials $user_id$@#[K8S_CLUSTER_NAME:kubernetes]  --auth-provider=oidc --auth-provider-arg=client-secret= --auth-provider-arg=idp-issuer-url=$token.claims.issuer$ --auth-provider-arg=client-id=$token.trustName$ --auth-provider-arg=refresh-token=$token.refreshToken$  --auth-provider-arg=id-token=$token.encodedIdJSON$  --auth-provider-arg=idp-certificate-authority-data=$ou_b64_cert$ ; kubectl config use-context #[K8S_CLUSTER_NAME:kubernetes] ; Remove-Item -recurse -force \$TMP_CERT
        logoutURL: /logout
        oulogin: kubectl oulogin --host=#[OU_HOST]
        tokenClassName: com.tremolosecurity.scalejs.KubectlTokenLoader
        uidAttributeName: uid
        unisonCaCertName: unison-ca
        warnMinutesLeft: "5"
    hosts:
    - '#[OU_HOST]'
    results:
      auFail: default-login-failure
      azFail: default-login-failure
    uri: /wftoken/token
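
Assuming you saved the two manifests above as anon-token.yaml and wftoken.yaml (the file names are just examples), deploying them is a standard apply:

# create the authentication chain and the application
kubectl apply -f anon-token.yaml
kubectl apply -f wftoken.yaml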

There's no need to restart OpenUnison. It's an operator, so it's watching for any updates to its configuration objects. With these objects deployed, we can test our service:

$ curl https://k8sou.wfid.tremolo.dev/wftoken/token/user
{"displayName":"anonymous","token":{"kubectl Windows Command"...

We can now generate a kubectl configuration and run kubectl!

➜  ~ export KUBECONFIG=$(mktemp)
➜  ~ curl https://k8sou.some.domain/wftoken/token/user 2>/dev/null | jq -r '.token["kubectl Command"]' | sh
Cluster "openunison-cp" set.
Context "openunison-cp" created.
User "anonymous@openunison-cp" set.
Switched to context "openunison-cp".
➜  ~ k get nodes
Error from server (Forbidden): nodes is forbidden: User "anonymous" cannot list resource "nodes" in API group "" at the cluster scope

Wow! We can now generate a token for an anonymous user and start working with pretty much any Kubernetes client SDK! If we take a look at these tokens, we'll see that, just like our user tokens, they're only good for a minute or two depending on clock skew. Don't worry if you have a long-running workflow, though: just as with a user's kubectl configuration generated by OpenUnison, the client SDK will refresh the token automatically.
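
If you want to verify the lifetime yourself, pull the id-token out of the generated kubeconfig and decode it with the jwt_claims helper from earlier. The jsonpath below assumes the oidc auth-provider layout produced by the kubectlTemplate:

# --raw is required, otherwise kubectl redacts credentials
id_token=$(kubectl config view --raw \
  -o jsonpath='{.users[0].user.auth-provider.config.id-token}')
jwt_claims "$id_token" | jq '{iat, exp, lifetime_seconds: (.exp - .iat)}'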

It's great that we can generate a token for an anonymous user, but that's not really all that secure. Next, let's update our configuration to authenticate a GitLab token.

Authenticating GitLab Workflows

Now that we have a service that we know will generate a working kubectl configuration file, the next step is to configure an authentication chain that authenticates against GitLab's OIDC issuer. When GitLab launches a job, you can create identities that are designed for external services (more on this in a bit). These identities contain metadata about the job being run and its owner. Each token also names an issuer that tells us where to find the keys we can use to validate the JWT's authenticity. Let's create a new authentication chain for GitLab tokens:

apiVersion: openunison.tremolo.io/v1
kind: AuthenticationChain
metadata:
  name: gitlab-token
  namespace: openunison
spec:
  authMechs:
  - name: oauth2jwt
    required: required
    params:
      issuer: "https://gitlab.com"
      fromWellKnown: "true"
      linkToDirectory: "false"
      noMatchOU: oauth2
      uidAttr: sub
      lookupFilter: "(sub=${sub})"
      userLookupClassName: inetOrgPerson
      defaultObjectClass: inetOrgPerson
      realm: kubernetes
      scope: auth
      audience: https://k8sou.domain/
    secretParams: []
  - name: map
    params:
      map:
      - uid|composite|${sub}
      - mail|composite|${user_email}
      - givenName|composite|${namespace_path}
      - sn|composite|${ref_path}
      - displayName|composite|${project_path}
      - memberOf|user|groups_direct
    required: required
  - name: jit
    params:
      nameAttr: uid
      workflowName: jitdb
    required: required
  - name: genoidctoken
    params:
      idpName: k8sidp
      trustName: kubernetes
    required: required
  level: 1
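
One item worth calling out: fromWellKnown: "true" tells OpenUnison to discover GitLab's token-signing keys from the standard OIDC discovery document, so there's no key material to copy around. You can inspect that document yourself:

# GitLab publishes its issuer metadata at the standard well-known
# location; jwks_uri points to the token verification keys
curl -s https://gitlab.com/.well-known/openid-configuration | jq '{issuer, jwks_uri}'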

There are two main changes in this chain. First, we swapped in the oauth2jwt mechanism, with the issuer configured for GitLab. Next, we updated our mapping to include attributes from our GitLab identity. After creating this chain, we need to update our application to change spec.urls[0].authChain to gitlab-token. We're now required to supply a valid JWT, so if we re-run our curl command, it will fail:

curl -v https://k8sou.domain/wftoken/token/user
.
.
.
* Request completely sent off
< HTTP/2 401

Next, let's create a namespace with a RoleBinding that will only allow a token from our repository to interact with our cluster:

$ k create ns gitlab-wf
$ k create rolebinding admin-binding --user='project_path:mlbiam/wfid-to-kubernetes-blog:ref_type:branch:ref:main' --clusterrole='admin' -n gitlab-wf
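
As a quick sanity check, you can exercise the binding with impersonation before ever running a pipeline:

# impersonate the workflow's mapped user; this should print "yes"
kubectl auth can-i list configmaps -n gitlab-wf \
  --as='project_path:mlbiam/wfid-to-kubernetes-blog:ref_type:branch:ref:main'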

Our "user" lines up with the sub claim in our token. Finally, lets create create our workflow:

stages:
- build

build-job:
  stage: build
  id_tokens:
    OPENUNISON_TOKEN:
      aud: https://k8sou.domain/
  image:
    name: ghcr.io/tremolosecurity/vcluster-onboard:1.0.0
    entrypoint:
    - ""
  script: |-
    export KUBECONFIG=$(mktemp)
    curl -H "Authorization: Bearer $OPENUNISON_TOKEN" https://k8sou.domain/wftoken/token/user 2>/dev/null | jq -r '.token["kubectl Command"]' | sh
    kubectl get cm -n gitlab-wf

The aud in the id_tokens section MUST be exactly the same as the audience in our auth chain configuration. Otherwise, the token won't authenticate.
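
If you do get a 401, a quick way to confirm which audience the job actually received is to decode the token inside the job using the jwt_claims helper from earlier (OPENUNISON_TOKEN comes from the id_tokens block above):

# print the audience GitLab put in the job's token
jwt_claims "$OPENUNISON_TOKEN" | jq -r .aud

With the audiences lined up, let's see our workflow output: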

Executing "step_script" stage of the job script
00:02
Using docker image sha256:de64b73543f00db4fc61e2ba31881c39cff103f46ff382315619c0560eee4309 for ghcr.io/tremolosecurity/vcluster-onboard:1.0.0 with digest ghcr.io/tremolosecurity/vcluster-onboard@sha256:e962c458dfab28e589df86b2169169aff1a43b05a6cee02abba9f87e51bf4424 ...
$ export KUBECONFIG=$(mktemp) # collapsed multi-line command
Cluster "openunison-cp" set.
Context "openunison-cp" created.
User "projectx-95-xpathx-58-xmlbiamx-47-xwfid-to-kubernetes-blogx-58-xrefx-95-xtypex-58-xbranchx-58-xrefx-58-xmain@openunison-cp" set.
Switched to context "openunison-cp".
NAME               DATA   AGE
kube-root-ca.crt   1      3m44s
Cleaning up project directory and file based variables
00:00
Job succeeded

Success! We're now able to interact with resources in our namespace. This raises the question: what issues could we be creating? One is that we're now allowing any token generated by GitLab to be exchanged for access to our cluster. We should probably not do that. We'll wrap up by adding authorization to our STS.

Authorizing GitLab Tokens

Right now, we're allowing anyone with a valid token from GitLab to authenticate to our cluster. We do have RBAC set up in our cluster to limit access, but THE CALL IS COMING FROM INSIDE THE HOUSE! Relying on authorization only at that layer leaves you open to potential escalation attacks; there are several escalation-related CVEs for Kubernetes, and that doesn't include potential bugs in admission controllers, authorizers, etc. We'll want to limit which tokens we're willing to exchange for cluster tokens.

GitLab provides a wealth of information as claims in the token, and we can write an LDAP filter against those claims. Let's create a dev branch in our repository. This triggers a new build, with a new sub in our token: project_path:mlbiam/wfid-to-kubernetes-blog:ref_type:branch:ref:dev. As expected, RBAC stops access to our namespace:

$ export KUBECONFIG=$(mktemp) # collapsed multi-line command
Cluster "openunison-cp" set.
Context "openunison-cp" created.
User "projectx-95-xpathx-58-xmlbiamx-47-xwfid-to-kubernetes-blogx-58-xrefx-95-xtypex-58-xbranchx-58-xrefx-58-xdev@openunison-cp" set.
Switched to context "openunison-cp".
Error from server (Forbidden): configmaps is forbidden: User "project_path:mlbiam/wfid-to-kubernetes-blog:ref_type:branch:ref:dev" cannot list resource "configmaps" in API group "" in the namespace "gitlab-wf"

We only want to allow runs from main against our cluster, though, so we're going to update our authentication chain to block any token that isn't from our GitLab namespace or isn't from the main branch:

apiVersion: openunison.tremolo.io/v1
kind: AuthenticationChain
metadata:
  name: gitlab-token
  namespace: openunison
spec:
  authMechs:
  - name: oauth2jwt
    required: required
    params:
      issuer: "https://gitlab.com"
      fromWellKnown: "true"
      linkToDirectory: "false"
      noMatchOU: oauth2
      uidAttr: sub
      lookupFilter: "(sub=${sub})"
      userLookupClassName: inetOrgPerson
      defaultObjectClass: inetOrgPerson
      realm: kubernetes
      scope: auth
      audience: https://k8sou.domain/
    secretParams: []
  - name: map
    params:
      map:
      - uid|composite|${sub}
      - mail|composite|${user_email}
      - givenName|composite|${namespace_path}
      - sn|composite|${ref_path}
      - displayName|composite|${project_path}
      - memberOf|user|groups_direct
      - namespacepath|composite|${namespace_path}
    required: required
  - name: az
    required: required
    params:
      rules:
      - filter;(&(ref=main)(namespacepath=mlbiam))
  - name: jit
    params:
      nameAttr: uid
      workflowName: jitdb
    required: required
  - name: genoidctoken
    params:
      idpName: k8sidp
      trustName: kubernetes
    required: required
  level: 1

We made two changes:

  • Added namespacepath to the mapping - We needed to rename the namespace_path claim because the LDAP filters OpenUnison uses don't allow underscores in attribute names.
  • Added the az authentication mechanism - This mechanism lets us specify authorization rules, in this case requiring that the run comes from our GitLab namespace and from the main branch.

Now that we have our updated authentication chain, let's try running our dev branch again:

$ export KUBECONFIG=$(mktemp) # collapsed multi-line command
E1120 16:10:10.074160      19 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused

If we look in our OpenUnison logs, we'll find an authentication failure:

[2024-11-20 16:10:10,012][XNIO-1 task-1] INFO  AccessLog - [AuFail] - wftoken - https://k8sou.domain/wftoken/token/user - cn=none - gitlab-token [10.244.2.7] - [fe98261dc318c8438df6d1cacaaa096f9b4025491]

As a spot check, let's make sure our main branch still works:

$ export KUBECONFIG=$(mktemp) # collapsed multi-line command
Cluster "openunison-cp" set.
Context "openunison-cp" created.
User "projectx-95-xpathx-58-xmlbiamx-47-xwfid-to-kubernetes-blogx-58-xrefx-95-xtypex-58-xbranchx-58-xrefx-58-xmain@openunison-cp" set.
Switched to context "openunison-cp".
NAME               DATA   AGE
kube-root-ca.crt   1      20h

Success! We're now limiting our cluster to jobs from our GitLab namespace and only on the main branch! What if we're not using GitLab?

Going beyond GitLab

What if you need to do this same exercise with GitHub, CircleCI, Jenkins, or a system that uses SPIFFE identities? We're going to continue this series with other CD systems so you can see what each of their opinionated platforms requires for secure integration with your OpenUnison-managed cluster.

If you want to learn more about deploying OpenUnison, check out our documentation site. If you're interested in a commercial support contract for your deployment, we'd love to hear from you!
