
Private k8s access using the Tailscale operator

I have an on-prem private Kubernetes cluster running on Hetzner. Until yesterday I was accessing it using Tailscale plus manually copying and pasting the kubeconfig to my local workstation. This is fine for local access, but not feasible for more granular RBAC-based access.

An obvious solution is to set up an OIDC provider and use kubelogin to authenticate to the cluster. Having done this in the past, I really don't want to do it again as I have a life.

One alternative solution that comes to mind is the Tailscale operator. The main feature of the operator is that it allows you to expose internal cluster workloads to the tailnet (VPN network). Another equally nice feature is that it allows you to access the control plane (k8s API server) via a Tailscale proxy, with fine-grained RBAC controls so that:

  • I can access the API server privately
  • The level of access can be controlled via RBAC
  • I don't need to hand-craft kubeconfig files
  • If there are any extra CI/IaC-based access requirements, I can grant access without the need to hand-craft CSRs.

The installation docs can be found at https://tailscale.com/kb/1236/kubernetes-operator and https://tailscale.com/kb/1437/kubernetes-operator-api-server-proxy. They are reasonably easy to read but come with a few hidden gotchas.

I will document the steps I've taken to install the operator in this article.

Steps

Step 1: Add the tag ownership in the ACL doc at https://login.tailscale.com/admin/acls/file with the following:

"tagOwners": {
   "tag:k8s-operator": [],
   "tag:k8s": ["tag:k8s-operator"], # it's HuJSON so you can comment and trail with comma
}

Step 2: Generate an OAuth client ID and secret for the operator at https://login.tailscale.com/admin/settings/oauth with the following config:

Scopes: auth_keys, devices:core
Tags: tag:k8s-operator

For my use case I dump the client ID and secret into GCP Secret Manager, and sync them into the tailscale namespace using the External Secrets Operator.
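Here is a minimal sketch of that sync, assuming a ClusterSecretStore named gcp-secret-manager already points at the GCP project; the GCP secret names are hypothetical, and the target secret name tailscale-secret plus the client_id / client_secret keys are what the Helm release below mounts as a volume:

kubectl apply -f - <<'EOF'
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: tailscale-oauth
  namespace: tailscale
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: gcp-secret-manager            # hypothetical store backed by GCP Secret Manager
  target:
    name: tailscale-secret              # referenced by oauthSecretVolume in the Helm values
  data:
    - secretKey: client_id              # key the operator reads from the mounted volume
      remoteRef:
        key: tailscale-oauth-client-id          # hypothetical GCP secret name
    - secretKey: client_secret
      remoteRef:
        key: tailscale-oauth-client-secret      # hypothetical GCP secret name
EOF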

Step 3: Install the operator using the Helm chart. I use Terraform for this:

resource "helm_release" "tailscale_operator" {
  name             = "tailscale-operator"
  chart            = "tailscale-operator"
  version          = "1.82.0"
  repository       = "https://pkgs.tailscale.com/helmcharts"
  namespace        = var.tailscale_namespace
  create_namespace = false

  values = [
    yamlencode({
      oauthSecretVolume = {
        secret = {
          secretName = "tailscale-secret"
        }
      }
      apiServerProxyConfig = {
        mode = "true"
      }
      operatorConfig = {
        hostname = var.operator_hostname
      }
    })
  ]
}

A few things to note:

  • Unlike the official docs, which inject the client ID and secret as plain text, I use a k8s secret and mount it as a volume. This is more secure as the secret never ends up in the Terraform state.
  • The operator is installed in the tailscale namespace.
  • apiServerProxyConfig.mode is set to "true" to enable the private access feature.
  • I made the hostname, which defaults to tailscale-operator, configurable, just to be future-proof as I might have a few more clusters to manage.
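Once the release is applied, a quick sanity check looks something like this (the namespace and hostname come from the Terraform variables above):

kubectl -n tailscale get pods                  # the operator pod should be Running
tailscale status | grep <operator-hostname>    # the operator should appear as a device on the tailnet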

Step 4: Enable MagicDNS and HTTPS certificates.

This can be done in the DNS console at https://login.tailscale.com/admin/dns.

Step 5: Add ACLs to give yourself access to the k8s cluster

This is my config on https://login.tailscale.com/admin/acls/file.

{
    // add myself to the admin group
    "groups": {
        "group:prod-admin": ["jingkaihe@github"],
    },

    "acls": [
        // allow prod-admin group to access the api-server proxy
        {
            "action": "accept",
            "src":    ["group:prod-admin"],
            "dst":    ["tag:k8s-operator:443"],
        },
        // ...
    ],

    // allow prod-admin tailscale acl group to impersonate the system:masters k8s group,
    // which is bound to the cluster-admin cluster role
    "grants": [
        {
            "src": ["group:prod-admin"],
            "dst": ["tag:k8s-operator"],
            "app": {
                "tailscale.com/cap/kubernetes": [{
                    "impersonate": {
                        "groups": ["system:masters"],
                    },
                }],
            },
        },
    ],
}
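For the CI/IaC use case mentioned earlier, the same mechanism works with less privilege: instead of system:masters, have the grant impersonate a custom group (e.g. a hypothetical tag:ci as src impersonating a ci-deployers group) and bind that group to a role yourself. A sketch of such a binding, with hypothetical names:

kubectl create clusterrolebinding ci-deployers-view \
  --clusterrole=view \
  --group=ci-deployers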

Step 6: Access the k8s cluster using Tailscale

I removed the existing cluster config from my workstation via kubectl config delete-context <context-name>.

Then I generated the kubeconfig using the tailscale command:

tailscale configure kubeconfig <hostname> # the hostname is the same as `var.operator_hostname` in the terraform

You can run tailscale status to double-check that the hostname is correct.

After this, when I ran kubectl version I got a brief request timeout, but the request eventually succeeded. Afterwards I could access the k8s cluster privately as usual.
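To confirm what identity the proxy impersonates for you, something like the following should work on recent kubectl and cluster versions:

kubectl auth whoami          # should list system:masters among the groups from the grant
kubectl auth can-i '*' '*'   # should print "yes" given the cluster-admin binding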

Conclusion

I put the config together while I was watching Netflix on a Friday night. I have to say this is an order of magnitude simpler than the OIDC-based approach and a joy to use, without compromising security.