Self-Hosting
This guide covers deploying Lucity on a production Kubernetes cluster. By the end, you'll have the full platform running with TLS, Gateway API routing, and DNS, ready to ship real apps.
If you're looking to run Lucity locally for development, check the Quick Start instead.
Prerequisites
You'll need:
- Kubernetes cluster (1.28+) with at least 3 nodes
- Helm 3: package manager for Kubernetes
- kubectl: configured with your cluster context
- cert-manager: for automated TLS certificates
- external-dns: for automatic DNS record management
- Gateway API controller: Cilium Gateway API or Envoy Gateway
- Two DNS zones: one for the platform, one for workloads (e.g. lucity.cloud and lucity.app)
- A GitHub App: with OAuth configured
The two domains must be different. The platform domain hosts the UI, API, and docs, while the workload domain serves your deployed applications.
Domain Layout
Lucity uses path-based routing on the platform domain:
| Path | Backend | Purpose |
|---|---|---|
| /graphql | Gateway | GraphQL API + WebSocket |
| /auth | Gateway | OAuth flows |
| /playground | Gateway | GraphQL playground |
| /app | Dashboard | Vue SPA |
| /webhooks | Webhook | GitHub webhook reception |
| / | Docs | Documentation (catch-all) |
User workloads get subdomains on the workload domain: {service}-{env}.lucity.app.
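For illustration, the route Lucity generates for a workload might look like the following sketch. The service name (api-prod), namespace, and backend port are hypothetical; the lucity-gateway parent matches the Gateway defined later in this guide:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-prod              # hypothetical service "api" in env "prod"
  namespace: lucity-system
spec:
  parentRefs:
    - name: lucity-gateway    # the Gateway from infra-values.yaml below
  hostnames:
    - api-prod.lucity.app     # {service}-{env}.<workload-domain>
  rules:
    - backendRefs:
        - name: api-prod      # the workload's Service
          port: 80
```

Because the hostname falls under the workload wildcard certificate and wildcard DNS record, the route is served with valid TLS the moment it is accepted.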
1. TLS Setup
Lucity uses cert-manager with DNS01 validation for TLS certificates. Unlike HTTP01, DNS01 can validate both exact domains and wildcards, which the workload domain's wildcard certificate requires.
Create a ClusterIssuer for your DNS provider. Here's an example for Azure DNS:
# Create the DNS provider secret
kubectl create secret generic azuredns-config \
-n cert-manager \
--from-literal=client-secret='<your-azure-sp-client-secret>'
# Create the ClusterIssuer
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    email: your-email@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns01
    solvers:
      - dns01:
          azureDNS:
            clientID: <your-sp-client-id>
            clientSecretSecretRef:
              name: azuredns-config
              key: client-secret
            subscriptionID: <your-subscription-id>
            tenantID: <your-tenant-id>
            resourceGroupName: <your-resource-group>
            hostedZoneName: <your-platform-domain>
            environment: AzurePublicCloud
EOF
Repeat the solver block for each DNS zone if your platform and workload domains are in separate zones.
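A sketch of what the two-solver layout could look like when the zones are separate, using cert-manager's selector.dnsZones to route each certificate to the matching solver (placeholder names are assumptions; the credentials block is the same as above):

```yaml
solvers:
  - selector:
      dnsZones:
        - <your-platform-domain>
    dns01:
      azureDNS:
        hostedZoneName: <your-platform-domain>
        # ...same credentials block as above
  - selector:
      dnsZones:
        - <your-workload-domain>
    dns01:
      azureDNS:
        hostedZoneName: <your-workload-domain>
        # ...same credentials block as above
```

Without a selector, cert-manager uses the first matching solver, so an unselected solver for a second zone would never be picked.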
2. Deploy Infrastructure
Generate an SSH key for Soft-serve admin access:
ssh-keygen -t ed25519 -f ~/.ssh/lucity-admin -N ""
Create your infrastructure values file (see deployments/lucity-prod/infra-values.yaml for a complete example):
# infra-values.yaml
zot:
  persistence: true
  pvc:
    create: true
    storage: 20Gi
    storageClassName: <your-storage-class>

softServe:
  persistence:
    size: 10Gi
    storageClass: <your-storage-class>

argo-cd:
  enabled: true
  configs:
    params:
      server.insecure: true # HTTP internally

gateway:
  enabled: true
  className: cilium # or "eg" for Envoy Gateway
  name: lucity-gateway
  listeners:
    - name: platform-https
      protocol: HTTPS
      port: 443
      hostname: "<your-platform-domain>"
      tls:
        mode: Terminate
        certificateRefs:
          - name: platform-tls
    - name: workload-https
      protocol: HTTPS
      port: 443
      hostname: "*.<your-workload-domain>"
      tls:
        mode: Terminate
        certificateRefs:
          - name: workload-wildcard-tls

certificates:
  - name: platform-tls
    secretName: platform-tls
    issuerRef:
      name: letsencrypt-dns01
    dnsNames:
      - "<your-platform-domain>"
  - name: workload-wildcard-tls
    secretName: workload-wildcard-tls
    issuerRef:
      name: letsencrypt-dns01
    dnsNames:
      - "*.<your-workload-domain>"
Deploy the infrastructure chart:
helm dependency update charts/lucity-infra
helm upgrade --install lucity-infra charts/lucity-infra \
-n lucity-system --create-namespace \
--kube-context <your-context> \
-f infra-values.yaml \
--set softServe.initialAdminKey="$(cat ~/.ssh/lucity-admin.pub)"
Wait for all pods to become ready:
kubectl get pods -n lucity-system --watch
3. Generate Tokens
You'll need API tokens for ArgoCD and Soft-serve.
ArgoCD Token
# Port-forward ArgoCD
kubectl port-forward svc/lucity-infra-argocd-server 8443:80 -n lucity-system &
# Get initial admin password
ADMIN_PASS=$(kubectl get secret argocd-initial-admin-secret \
-n lucity-system -o jsonpath='{.data.password}' | base64 -d)
# Create a session and generate an API token
SESSION=$(curl -sk http://localhost:8443/api/v1/session \
-d "{\"username\":\"admin\",\"password\":\"$ADMIN_PASS\"}" | jq -r '.token')
ARGOCD_TOKEN=$(curl -sk -H "Authorization: Bearer $SESSION" \
-X POST http://localhost:8443/api/v1/account/lucity/token | jq -r '.token')
echo "ARGOCD_TOKEN=$ARGOCD_TOKEN"
Soft-serve Token
# Port-forward Soft-serve SSH
kubectl port-forward svc/lucity-infra-soft-serve 23231:23231 -n lucity-system &
SOFTSERVE_TOKEN=$(ssh -i ~/.ssh/lucity-admin \
-o IdentitiesOnly=yes \
-o StrictHostKeyChecking=accept-new \
-p 23231 localhost token create 'packager')
echo "SOFTSERVE_TOKEN=$SOFTSERVE_TOKEN"
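Both commands print the raw token on stdout, so a failure (for example an expired ArgoCD session returning null) can slip through silently. A quick sanity check before moving on, using a small hypothetical helper:

```shell
# Returns success only when a token is non-empty and not the literal "null".
token_ok() {
  [ -n "$1" ] && [ "$1" != "null" ]
}

# Check the tokens generated above before using them in the next step:
if token_ok "${ARGOCD_TOKEN:-}" && token_ok "${SOFTSERVE_TOKEN:-}"; then
  echo "tokens look OK"
else
  echo "a token came back empty or null; re-run the steps above" >&2
fi
```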
4. Deploy the Platform
Create your platform values file (see deployments/lucity-prod/values.yaml for a complete example) and install:
helm upgrade --install lucity charts/lucity \
-n lucity-system \
--kube-context <your-context> \
-f values.yaml \
--set secrets.JWT_SECRET="$(openssl rand -hex 32)" \
--set secrets.GITHUB_CLIENT_SECRET="<from-github-app>" \
--set secrets.GITHUB_WEBHOOK_SECRET="<from-github-app>" \
--set secrets.ARGOCD_TOKEN="$ARGOCD_TOKEN" \
--set secrets.SOFTSERVE_TOKEN="$SOFTSERVE_TOKEN" \
--set-file githubPrivateKey=<path-to-github-app.pem> \
--set-file softserveKey=~/.ssh/lucity-admin \
--set gateway.env.GITHUB_APP_ID="<app-id>" \
--set gateway.env.GITHUB_CLIENT_ID="<client-id>" \
--set webhook.env.GITHUB_APP_ID="<app-id>"
5. Configure DNS
Wildcard DNS Record
Create a wildcard A record for your workload domain pointing to the Gateway's external IP. This ensures every new service subdomain is instantly reachable without waiting for DNS propagation:
# Find the Gateway's external IP
GATEWAY_IP=$(kubectl get gateway lucity-gateway -n lucity-system \
-o jsonpath='{.status.addresses[0].value}')
echo "Gateway IP: $GATEWAY_IP"
Create the DNS record with your provider:
# Azure DNS
az network dns record-set a add-record \
--resource-group <your-resource-group> \
--zone-name <your-workload-domain> \
--record-set-name '*' \
--ipv4-address $GATEWAY_IP

# Cloudflare
curl -X POST "https://api.cloudflare.com/client/v4/zones/<zone-id>/dns_records" \
-H "Authorization: Bearer <api-token>" \
-H "Content-Type: application/json" \
--data "{\"type\":\"A\",\"name\":\"*\",\"content\":\"$GATEWAY_IP\",\"proxied\":false}"

# AWS Route 53
aws route53 change-resource-record-sets \
--hosted-zone-id <zone-id> \
--change-batch '{
"Changes": [{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "*.<your-workload-domain>",
"Type": "A",
"TTL": 300,
"ResourceRecords": [{"Value": "'$GATEWAY_IP'"}]
}
}]
}'
Without this wildcard record, each service domain would only become reachable after external-dns detects the new HTTPRoute and creates an individual DNS record, which can take a few minutes due to DNS propagation. With the wildcard, any *.lucity.app subdomain resolves immediately.
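To confirm the wildcard is live, resolve a random subdomain: a name that has never been queried can only resolve via the wildcard, so no cached per-host record can mask a problem. The check_wildcard helper and probe name below are illustrative:

```shell
# Succeeds only when the resolved address matches the expected Gateway IP.
check_wildcard() {
  [ -n "$1" ] && [ "$1" = "$2" ]
}

# Probe a random subdomain (requires GATEWAY_IP from the step above).
if [ -n "${GATEWAY_IP:-}" ]; then
  RESOLVED=$(dig +short "probe-$RANDOM.<your-workload-domain>" | head -n1)
  if check_wildcard "$RESOLVED" "$GATEWAY_IP"; then
    echo "wildcard OK: $RESOLVED"
  else
    echo "wildcard not resolving to $GATEWAY_IP (got: $RESOLVED)" >&2
  fi
fi
```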
external-dns (optional)
external-dns is optional when you have a wildcard DNS record for the workload domain. It's still useful if you want to support custom domains on user workloads, as those require individual DNS records.
If you're using Gateway API (not Ingress), external-dns needs to watch HTTPRoute resources:
# Add gateway-httproute source
kubectl -n external-dns patch deploy external-dns --type=json \
-p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--source=gateway-httproute"}]'
# Add RBAC for Gateway API resources
kubectl patch clusterrole external-dns --type=json -p '[
{"op":"add","path":"/rules/-","value":{
"apiGroups":["gateway.networking.k8s.io"],
"resources":["gateways","httproutes"],
"verbs":["get","list","watch"]
}}
]'
Use the upsert-only policy so external-dns creates and updates records but never deletes them. This protects your wildcard record from being removed.
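The policy can be applied with the same JSON-patch pattern as above. This sketch carries over the same assumption that external-dns runs as deployment external-dns in the external-dns namespace:

```shell
# --policy=upsert-only: external-dns creates and updates records but never deletes them.
UPSERT_PATCH='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--policy=upsert-only"}]'
kubectl -n external-dns patch deploy external-dns --type=json -p "$UPSERT_PATCH" \
|| echo "patch failed; check the external-dns deployment name and namespace" >&2
```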
6. Configure GitHub App
Once the platform is accessible, update your GitHub App settings:
- Callback URL: https://<your-platform-domain>/auth/github/callback
- Webhook URL: https://<your-platform-domain>/webhooks
Verify
Check that everything is running:
# All pods should be Running
kubectl get pods -n lucity-system
# Gateway should be Programmed
kubectl get gateway -n lucity-system
# HTTPRoute should be Accepted
kubectl get httproute -n lucity-system
# Certificates should be Ready
kubectl get certificate -n lucity-system
# DNS should resolve
dig <your-platform-domain>
# Platform should respond
curl -I https://<your-platform-domain>/
curl -I https://<your-platform-domain>/app/
curl -I https://<your-platform-domain>/playground
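As an optional last probe, the GraphQL endpoint can be exercised with a real query rather than a HEAD request; any spec-compliant GraphQL server answers a bare __typename query with JSON. The PLATFORM_DOMAIN variable is just a stand-in for your platform domain:

```shell
# Set PLATFORM_DOMAIN to your platform domain before running this probe.
QUERY='{"query":"{ __typename }"}'
if [ -n "${PLATFORM_DOMAIN:-}" ]; then
  curl -s "https://$PLATFORM_DOMAIN/graphql" \
    -H 'Content-Type: application/json' \
    -d "$QUERY"
fi
```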
Reference Deployment
The deployments/lucity-prod/ directory contains a complete production deployment profile for a 3-node Hetzner cluster with Cilium Gateway API, Azure DNS, and Let's Encrypt TLS. Use it as a starting point for your own deployment.