Self-Hosted Headscale as a Tailscale Control Server
Headscale is an open-source, self-hosted implementation of the Tailscale control server. By running your own control plane, you gain full ownership of your network coordination, user management, and access policies without relying on Tailscale's hosted infrastructure.
This post walks through deploying Headscale on a bare-metal Kubernetes cluster with SQLite persistence, embedded DERP relay, Google OIDC authentication, and proper secrets management.
Architecture Overview
flowchart TB
subgraph external[External Traffic]
client[Tailscale Clients]
end
subgraph ingress[Ingress Layer]
lb[Load Balancer<br/>TCP 443]
end
subgraph k8s[Kubernetes Cluster]
subgraph sts[StatefulSet: headscale]
subgraph pod[Pod]
init[Init Container<br/>litestream restore]
headscale[Headscale<br/>Container]
litestream[Litestream<br/>Sidecar]
data[(emptyDir<br/>/data)]
stun[Host Port<br/>UDP 3478]
end
end
svc[Service: ClusterIP<br/>8080, 9090, 50443, 3478/UDP]
end
subgraph gcp[Google Cloud]
gcs[(GCS Bucket<br/>SQLite Replica)]
end
client -->|HTTPS| lb
client -->|DERP/STUN| stun
lb --> svc
svc --> pod
init --> data
headscale --> data
litestream --> data
init -.->|restore on startup| gcs
litestream -->|continuous replication| gcs
The deployment consists of:
- StatefulSet running a single replica with headscale and litestream containers
- Load Balancer exposing TCP 443 for HTTPS traffic
- Host Port exposing UDP 3478 for DERP relay and STUN NAT traversal
- Litestream sidecar for continuous SQLite replication to GCS
- Init container to restore the database from GCS on startup
Deploying the Workload as a StatefulSet
Headscale uses SQLite as its backend database, which means only one replica can safely write to the database at a time. A StatefulSet with a single replica ensures ordered pod management and stable network identity, making it the right choice for this workload.
spec:
  serviceName: headscale
  replicas: 1
  podManagementPolicy: OrderedReady
SQLite Persistence with Litestream
Since we are running on bare metal without cloud-managed persistent volumes, we use Litestream to continuously replicate the SQLite database to a GCS bucket. This provides durability without requiring a persistent volume.
The init container restores the database on pod startup if a replica exists:
initContainers:
  - name: litestream-restore
    image: litestream/litestream:0.3.13
    args:
      - restore
      - -if-db-not-exists
      - -if-replica-exists
      - /data/headscale.db
The sidecar continuously replicates changes:
containers:
  - name: litestream
    image: litestream/litestream:0.3.13
    args: ["replicate"]
Litestream configuration:
dbs:
- path: /data/headscale.db
replicas:
- type: gcs
bucket: your-litestream-bucket
path: headscale.db
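Once the pod is up, you can confirm replication is healthy directly from the sidecar; the namespace and container names below match the manifests later in this post:
# Show replication generations and lag for each replica
kubectl exec -n headscale sts/headscale -c litestream -- litestream generations /data/headscale.db
# List the point-in-time snapshots available for restore
kubectl exec -n headscale sts/headscale -c litestream -- litestream snapshots /data/headscale.db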
Workload Identity for GCS Access
On bare-metal clusters without native cloud integration, we use Workload Identity Federation to provide GCS access without static credentials. The Kubernetes service account is mapped to a GCP principal that has roles/storage.objectAdmin on the Litestream bucket.
The credential configuration is mounted as a ConfigMap:
{
"universe_domain": "googleapis.com",
"type": "external_account",
"audience": "//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID",
"subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
"token_url": "https://sts.googleapis.com/v1/token",
"credential_source": {
"file": "/var/run/workload-identity-federation/token",
"format": { "type": "text" }
}
}
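For reference, the GCP side of the federation can be set up roughly as follows. The pool, provider, and issuer URI are placeholders; note that your cluster's OIDC issuer must be reachable by Google STS (or supplied via --jwk-json-path):
# Create the pool and an OIDC provider for the cluster's issuer
gcloud iam workload-identity-pools create POOL_ID \
  --project=your-project-id --location=global
gcloud iam workload-identity-pools providers create-oidc PROVIDER_ID \
  --project=your-project-id --location=global \
  --workload-identity-pool=POOL_ID \
  --issuer-uri=https://your-cluster-issuer.example.com \
  --attribute-mapping=google.subject=assertion.sub
# Grant the federated principal access to the Litestream bucket
gcloud storage buckets add-iam-policy-binding gs://your-litestream-bucket \
  --member='principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/system:serviceaccount:headscale:headscale' \
  --role=roles/storage.objectAdmin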
TLS Certificate Management
Certificates are managed by cert-manager using Let's Encrypt as the issuer. The Certificate resource references a ClusterIssuer:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: headscale-cert
namespace: headscale
spec:
secretName: headscale-tls
issuerRef:
kind: ClusterIssuer
name: letsencrypt-prod
commonName: headscale.yourdomain.com
dnsNames:
- headscale.yourdomain.com
The Traefik IngressRoute then references this TLS secret.
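The ClusterIssuer itself is not shown above; a minimal sketch using the ACME HTTP-01 solver through Traefik (the email address and ingress class are assumptions) could look like:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@yourdomain.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: traefik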
DNS Management with external-dns
DNS records in GCP Cloud DNS are managed automatically by external-dns. The IngressRoute annotation tells external-dns which IP addresses to create A records for:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: headscale
annotations:
external-dns.alpha.kubernetes.io/target: "1.2.3.4,5.6.7.8"
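Once external-dns reconciles, you can confirm the records landed in Cloud DNS (the zone name here is an assumption):
gcloud dns record-sets list \
  --zone=your-dns-zone \
  --name=headscale.yourdomain.com. \
  --project=your-project-id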
Secrets Management
Sensitive configuration is stored in GCP Secret Manager and synced to Kubernetes using External Secrets Operator:
| Secret | Purpose |
|---|---|
| OIDC Client ID/Secret | Google OAuth credentials |
| Noise Private Key | TS2021 protocol encryption |
| DERP Private Key | Embedded DERP server encryption |
The noise and DERP keys are generated by Headscale itself on first start. Here is how to bootstrap them with a throwaway container:
# Create temporary directory and minimal config
mkdir -p /tmp/headscale-keys
cat > /tmp/headscale-keys/config.yaml << 'EOF'
server_url: http://localhost:8080
listen_addr: 0.0.0.0:8080
noise:
private_key_path: /var/lib/headscale/noise_private.key
derp:
server:
enabled: true
private_key_path: /var/lib/headscale/derp_server_private.key
stun_listen_addr: 0.0.0.0:3478
urls: []
database:
type: sqlite
sqlite:
path: /var/lib/headscale/db.sqlite
dns:
magic_dns: false
override_local_dns: false
base_domain: example.com
prefixes:
v4: 100.64.0.0/10
v6: fd7a:115c:a1e0::/48
EOF
# Run headscale to generate keys
docker run --rm -d --name headscale-keygen \
-v /tmp/headscale-keys:/var/lib/headscale \
-v /tmp/headscale-keys/config.yaml:/etc/headscale/config.yaml:ro \
headscale/headscale:v0.27.1 serve
sleep 3
docker stop headscale-keygen
# View the generated keys
sudo cat /tmp/headscale-keys/noise_private.key
sudo cat /tmp/headscale-keys/derp_server_private.key
Store the keys in GCP Secret Manager:
jq -n \
--arg noise "$(sudo cat /tmp/headscale-keys/noise_private.key)" \
--arg derp "$(sudo cat /tmp/headscale-keys/derp_server_private.key)" \
'{noise_private_key: $noise, derp_private_key: $derp}' | \
gcloud secrets create headscale-keys --data-file=- --project=your-project-id
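# Optional sanity check before deleting the local copies: confirm the
# stored secret round-trips (this should print the two key names)
gcloud secrets versions access latest --secret=headscale-keys \
  --project=your-project-id | jq 'keys'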
# Clean up local keys
sudo rm -rf /tmp/headscale-keys
Configure External Secrets to sync them to Kubernetes:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: headscale-keys
namespace: headscale
spec:
refreshInterval: 1h
secretStoreRef:
name: headscale-secret-store
kind: SecretStore
target:
name: headscale-keys
creationPolicy: Owner
data:
- secretKey: noise_private.key
remoteRef:
key: headscale-keys
property: noise_private_key
- secretKey: derp_server_private.key
remoteRef:
key: headscale-keys
property: derp_private_key
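The headscale-secret-store referenced here is an External Secrets SecretStore for GCP Secret Manager. A minimal sketch, assuming the operator's pod has ambient GCP credentials (for example via the same Workload Identity Federation setup described earlier; with no explicit auth block the provider falls back to the pod's default credentials):
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: headscale-secret-store
  namespace: headscale
spec:
  provider:
    gcpsm:
      projectID: your-project-id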
Embedded DERP Server
Instead of relying on Tailscale's public DERP servers, we run a dedicated DERP server embedded in Headscale. This keeps all relay traffic self-hosted.
derp:
server:
enabled: true
region_id: 999
region_code: myderp
region_name: My DERP Server
stun_listen_addr: 0.0.0.0:3478
private_key_path: /data/derp_server_private.key
verify_clients: true
urls: [] # Empty to disable public DERP servers
Setting urls: [] overrides the default https://controlplane.tailscale.com/derpmap/default entry, ensuring clients only ever use your self-hosted DERP server.
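From any client joined to the network, you can verify that only your region is advertised:
tailscale netcheck
# The DERP latency list should contain only region 999 (myderp)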
STUN Port Exposure
The STUN port (3478/UDP) is required for NAT traversal discovery. We expose it using hostPort rather than Traefik IngressRouteUDP because K3s does not have UDP entrypoints configured by default, and patching Traefik HelmChartConfig adds unnecessary complexity for this use case.
ports:
  - name: stun
    containerPort: 3478
    hostPort: 3478
    protocol: UDP
Firewall Requirements
Ensure these ports are open:
- TCP 443: HTTPS and DERP relay traffic
- UDP 3478: STUN for NAT traversal
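As a concrete example, on a node managed with ufw (an assumption; adapt to whatever firewall your hosts use):
sudo ufw allow 443/tcp comment 'headscale HTTPS + DERP'
sudo ufw allow 3478/udp comment 'headscale STUN'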
OIDC Authentication with Google
Configuring OIDC with Google provides a Tailscale-like SSO experience with better control over user access. Users authenticate through your Google Workspace domain.
oidc:
  issuer: https://accounts.google.com
  allowed_domains:
    - yourdomain.com
  expiry: 180d
  pkce:
    enabled: true
The client ID and secret are injected via environment variables:
env:
- name: HEADSCALE_OIDC_CLIENT_ID
valueFrom:
secretKeyRef:
name: headscale-oidc
key: client_id
- name: HEADSCALE_OIDC_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: headscale-oidc
key: client_secret
To set up Google OAuth credentials:
- Go to Google Cloud Console, navigate to APIs & Services, then Credentials
- Create an OAuth client ID for a Web Application
- Set the authorized redirect URI to https://headscale.yourdomain.com/oidc/callback
- Store the credentials in GCP Secret Manager:
echo -n '{"client_id":"<CLIENT_ID>","client_secret":"<CLIENT_SECRET>"}' | \
gcloud secrets create headscale-oidc --data-file=- --project=your-project-id
File-Based ACL Policy
Access control is managed via a file-based ACL policy. The only downside is that Headscale requires a server reload to apply new policies, but for small-scale deployments this is acceptable.
{
"groups": {
"group:admin": ["user@yourdomain.com"],
"group:developer": ["user@yourdomain.com"]
},
"tagOwners": {
"tag:workstation": ["group:admin"]
},
"acls": [
{ "action": "accept", "src": ["*"], "dst": ["*:*"] }
],
"ssh": [
{
"action": "accept",
"src": ["group:developer"],
"dst": ["tag:workstation"],
"users": ["autogroup:nonroot"]
}
]
}
The ACL is mounted as a ConfigMap, and the StatefulSet includes a checksum annotation to trigger rolling updates when the policy changes:
annotations:
checksum/acl: "${sha256(acl_config_map_data)}"
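With the checksum in place, applying a policy change is just an apply plus a rollout; if your tooling does not template the checksum automatically, a manual restart forces the reload (the manifest filename here is an assumption):
kubectl apply -n headscale -f headscale-acl-configmap.yaml
kubectl rollout restart -n headscale statefulset/headscale
kubectl rollout status -n headscale statefulset/headscale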
Managing Headscale via kubectl
Rather than exposing the gRPC port (50443) to the public internet, we access the headscale CLI through kubectl. This alias makes it convenient:
alias headscale='kubectl exec -n headscale -it sts/headscale -c headscale -- headscale'
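With the alias in place, day-to-day administration feels local. For example, create a user (required before issuing the preauth key below) and inspect registered nodes:
headscale users create alice@yourdomain.com
headscale users list
headscale nodes list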
Creating Preauth Keys
To create a reusable preauth key for joining nodes:
headscale preauthkeys create --expiration 7d --reusable --user 1 --tags tag:workstation
Joining Nodes
To join a node using a preauth key:
tailscale up --authkey $PREAUTH_KEY --login-server https://headscale.yourdomain.com --ssh
To join using OIDC authentication:
tailscale up --login-server https://headscale.yourdomain.com
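Either way, verify the registration from both sides:
tailscale status        # on the client: peers and the relay in use
headscale nodes list    # on the server, via the kubectl alias above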
Switching Between Accounts
If you use both Tailscale and Headscale, you can switch between them:
tailscale switch --list # View available accounts
tailscale switch <account-id> # Switch to the desired account
Putting It All Together
Here are all the Kubernetes resources that make up the complete Headscale deployment.
ConfigMaps
apiVersion: v1
kind: ConfigMap
metadata:
name: headscale-config
namespace: headscale
data:
config.yaml: |
server_url: https://headscale.yourdomain.com
listen_addr: 0.0.0.0:8080
metrics_listen_addr: 0.0.0.0:9090
grpc_listen_addr: 0.0.0.0:50443
database:
type: sqlite
sqlite:
path: /data/headscale.db
noise:
private_key_path: /data/noise_private.key
prefixes:
v4: 100.64.0.0/10
v6: fd7a:115c:a1e0::/48
derp:
server:
enabled: true
region_id: 999
region_code: myderp
region_name: My DERP Server
stun_listen_addr: 0.0.0.0:3478
private_key_path: /data/derp_server_private.key
ipv4: 203.0.113.10 # Your server's public IP
verify_clients: true
urls: []
paths: []
oidc:
issuer: https://accounts.google.com
allowed_domains:
- yourdomain.com
expiry: 180d
pkce:
enabled: true
policy:
mode: file
path: /etc/headscale/acl.json
dns:
magic_dns: true
base_domain: internal.net
nameservers:
global:
- 1.1.1.1
- 1.0.0.1
- 2606:4700:4700::1111
- 2606:4700:4700::1001
log:
level: info
format: json
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litestream-config
namespace: headscale
data:
litestream.yml: |
dbs:
- path: /data/headscale.db
replicas:
- type: gcs
bucket: your-litestream-bucket
path: headscale.db
StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: headscale
namespace: headscale
spec:
serviceName: headscale
replicas: 1
podManagementPolicy: OrderedReady
selector:
matchLabels:
app: headscale
updateStrategy:
type: RollingUpdate
rollingUpdate:
partition: 0
template:
metadata:
labels:
app: headscale
annotations:
checksum/acl: "${sha256(acl_configmap)}"
checksum/config: "${sha256(config_configmap)}"
spec:
serviceAccountName: headscale
initContainers:
- name: litestream-restore
image: litestream/litestream:0.3.13
args: [restore, -if-db-not-exists, -if-replica-exists, /data/headscale.db]
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /etc/gcp/credential.json
volumeMounts:
- name: data
mountPath: /data
- name: litestream-config
mountPath: /etc/litestream.yml
subPath: litestream.yml
- name: gcp-credentials
mountPath: /etc/gcp
readOnly: true
- name: workload-identity-token
mountPath: /var/run/workload-identity-federation
readOnly: true
containers:
- name: headscale
image: headscale/headscale:v0.27.1
args: [serve]
ports:
- name: http
containerPort: 8080
- name: metrics
containerPort: 9090
- name: grpc
containerPort: 50443
- name: stun
containerPort: 3478
hostPort: 3478
protocol: UDP
env:
- name: HEADSCALE_OIDC_CLIENT_ID
valueFrom:
secretKeyRef:
name: headscale-oidc
key: client_id
- name: HEADSCALE_OIDC_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: headscale-oidc
key: client_secret
volumeMounts:
- name: data
mountPath: /data
- name: headscale-config
mountPath: /etc/headscale/config.yaml
subPath: config.yaml
- name: headscale-acl
mountPath: /etc/headscale/acl.json
subPath: acl.json
- name: headscale-keys
mountPath: /data/noise_private.key
subPath: noise_private.key
readOnly: true
- name: headscale-keys
mountPath: /data/derp_server_private.key
subPath: derp_server_private.key
readOnly: true
readinessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 10
periodSeconds: 30
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 256Mi
- name: litestream
image: litestream/litestream:0.3.13
args: [replicate]
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /etc/gcp/credential.json
volumeMounts:
- name: data
mountPath: /data
- name: litestream-config
mountPath: /etc/litestream.yml
subPath: litestream.yml
- name: gcp-credentials
mountPath: /etc/gcp
readOnly: true
- name: workload-identity-token
mountPath: /var/run/workload-identity-federation
readOnly: true
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 200m
memory: 128Mi
volumes:
- name: data
emptyDir: {}
- name: headscale-config
configMap:
name: headscale-config
- name: headscale-acl
configMap:
name: headscale-acl
- name: litestream-config
configMap:
name: litestream-config
- name: gcp-credentials
configMap:
name: gcp-credential-configuration
- name: workload-identity-token
projected:
sources:
- serviceAccountToken:
audience: https://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID
expirationSeconds: 3600
path: token
- name: headscale-keys
secret:
secretName: headscale-keys
Service
apiVersion: v1
kind: Service
metadata:
name: headscale
namespace: headscale
spec:
type: ClusterIP
selector:
app: headscale
ports:
- name: http
port: 8080
targetPort: 8080
- name: metrics
port: 9090
targetPort: 9090
- name: grpc
port: 50443
targetPort: 50443
- name: stun
port: 3478
targetPort: 3478
protocol: UDP
External Secrets
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: headscale-oidc
namespace: headscale
spec:
refreshInterval: 1h
secretStoreRef:
kind: SecretStore
name: headscale-secret-store
target:
name: headscale-oidc
creationPolicy: Owner
data:
- secretKey: client_id
remoteRef:
key: headscale-oidc
property: client_id
- secretKey: client_secret
remoteRef:
key: headscale-oidc
property: client_secret
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: headscale-keys
namespace: headscale
spec:
refreshInterval: 1h
secretStoreRef:
kind: SecretStore
name: headscale-secret-store
target:
name: headscale-keys
creationPolicy: Owner
data:
- secretKey: noise_private.key
remoteRef:
key: headscale-keys
property: noise_private_key
- secretKey: derp_server_private.key
remoteRef:
key: headscale-keys
property: derp_private_key
Certificate
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: headscale-cert
namespace: headscale
spec:
secretName: headscale-tls
commonName: headscale.yourdomain.com
dnsNames:
- headscale.yourdomain.com
issuerRef:
kind: ClusterIssuer
name: letsencrypt-prod
IngressRoute
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: headscale
namespace: headscale
annotations:
external-dns.alpha.kubernetes.io/target: 203.0.113.10
spec:
entryPoints:
- websecure
routes:
- kind: Rule
match: Host(`headscale.yourdomain.com`)
services:
- name: headscale
port: 8080
tls:
secretName: headscale-tls
Conclusion
Self-hosting Headscale gives you complete control over your Tailscale-compatible network. The trade-off is operational complexity compared to Tailscale's hosted control plane, but for anyone who needs data sovereignty or wants to avoid vendor lock-in, Headscale is a solid choice.