# Quick Start
This guide walks you through setting up Lucity locally and deploying your first app. You'll need a Kubernetes cluster (we use minikube), some CLI tools, and a GitHub App for OAuth.
By the end, you'll have a fully working PaaS running on your machine. Not bad for a Saturday afternoon.
## Prerequisites
Before you start, make sure you have the following installed:
- Go 1.26+: for building the backend services
- Node.js 20+: for the dashboard
- Docker: for building container images
- Minikube: local Kubernetes cluster
- Helm: package manager for Kubernetes
- kubectl: Kubernetes CLI
- crane: OCI registry tool (for image inspection)
- air: Go hot reload (install with `go install github.com/air-verse/air@latest`)
- A GitHub App: configured for OAuth (you'll need the App ID, Client ID, and Client Secret)
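A quick way to confirm everything is on your `PATH` before continuing (this helper loop is a convenience, not part of the repo):

```shell
# Check that each required CLI is installed (helper loop, not part of the repo).
for tool in go node docker minikube helm kubectl crane air; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok:      $tool"
  else
    echo "MISSING: $tool"
  fi
done
```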
## Create the cluster

    make minikube
This starts minikube with insecure registry support so your local Zot registry can push and pull images without TLS. If you already have a minikube cluster running, this is a no-op.
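The exact flags live in the Makefile, which is authoritative, but a rough hand-rolled equivalent looks like this (the registry address is the Zot ClusterIP used later in this guide; the snippet is guarded so it's safe to paste anywhere):

```shell
# Approximation of `make minikube` -- check the Makefile for the real flags.
# --insecure-registry lets nodes pull from the plain-HTTP Zot registry.
if command -v minikube >/dev/null 2>&1; then
  minikube start --insecure-registry="10.96.100.50:5000"
else
  echo "minikube not found; see Prerequisites"
fi
```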
## Deploy infrastructure

    make infra
Installs the foundational services Lucity depends on:
- Gateway API CRDs + Envoy Gateway: for HTTP routing
- Zot: OCI-compliant container registry for your app images
- Soft-serve: Git server for GitOps repositories
- ArgoCD: GitOps delivery engine
- CloudNativePG: PostgreSQL operator for managed databases
Grab a coffee while Helm does its thing.
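Once Helm finishes, you can sanity-check that the pods came up. Namespace and pod names vary by chart, so the grep below is a loose filter, not an exact list:

```shell
# Loose check that infrastructure pods exist; exits 0 even when the
# cluster is unreachable, so it is safe to run anywhere.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -A 2>/dev/null | grep -Ei 'argo|zot|soft|envoy|cnpg|postgres' || \
    echo "no matching pods yet -- give Helm a minute"
else
  echo "kubectl not found; see Prerequisites"
fi
```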
## Port-forward infrastructure

    make infra-forward
Exposes the infrastructure services on localhost:
| Service | Local Port | Purpose |
|---|---|---|
| Zot | 5000 | OCI registry |
| Soft-serve SSH | 23231 | Git over SSH |
| Soft-serve HTTP | 23232 | Git over HTTP |
| ArgoCD | 8443 | GitOps UI |
| Gateway (Envoy) | 8880 | HTTP traffic ingress |
Keep this terminal running; you'll need these ports accessible.
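With the port-forwards active, the Zot registry should answer on localhost:5000. A quick probe using the standard OCI distribution API (the `/v2/_catalog` endpoint is part of the OCI spec, not Lucity-specific):

```shell
# Probe Zot through the forwarded port; /v2/_catalog is the standard
# OCI distribution "list repositories" endpoint.
if curl -fsS http://localhost:5000/v2/_catalog 2>/dev/null; then
  echo "registry reachable"
else
  echo "registry not reachable -- is 'make infra-forward' running?"
fi
```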
## Generate API tokens

    make infra-tokens
This prints access tokens for ArgoCD and Soft-serve. You'll need them in the next step:
- `ARGOCD_TOKEN`: add to `services/deployer/.env`
- `SOFTSERVE_TOKEN`: add to both `services/deployer/.env` and `services/packager/.env`
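Concretely, the relevant lines in the two `.env` files end up looking like this (the values shown are placeholders, not real tokens):

```shell
# services/deployer/.env
ARGOCD_TOKEN=<token printed by make infra-tokens>
SOFTSERVE_TOKEN=<token printed by make infra-tokens>

# services/packager/.env
SOFTSERVE_TOKEN=<token printed by make infra-tokens>
```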
## Configure services
Copy the example environment files for each service and fill in the values:
    cp services/gateway/.env.example services/gateway/.env
    cp services/builder/.env.example services/builder/.env
    cp services/packager/.env.example services/packager/.env
    cp services/deployer/.env.example services/deployer/.env
Key configuration:
| Service | Variable | Value |
|---|---|---|
| Gateway | GITHUB_APP_ID | Your GitHub App ID |
| Gateway | GITHUB_CLIENT_ID | Your GitHub App Client ID |
| Gateway | GITHUB_CLIENT_SECRET | Your GitHub App Client Secret |
| Gateway | REGISTRY_IMAGE_PREFIX | 10.96.100.50:5000 (Zot ClusterIP) |
| Builder | REGISTRY_INSECURE | true |
| Packager | SOFTSERVE_SSH_KEY_PATH | Path to your Soft-serve SSH key |
| Packager | SOFTSERVE_TOKEN | From the previous step |
| Deployer | ARGOCD_TOKEN | From the previous step |
| Deployer | SOFTSERVE_TOKEN | From the previous step |
`REGISTRY_IMAGE_PREFIX` uses the Zot ClusterIP (`10.96.100.50:5000`) so that Kubernetes nodes can pull images directly from the in-cluster registry.
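For example, a filled-in `services/gateway/.env` might contain the following (the IDs below are made-up placeholders; substitute your GitHub App's real values):

```shell
# services/gateway/.env (placeholder values -- use your GitHub App's)
GITHUB_APP_ID=123456
GITHUB_CLIENT_ID=Iv1.0123456789abcdef
GITHUB_CLIENT_SECRET=<your-github-app-client-secret>
REGISTRY_IMAGE_PREFIX=10.96.100.50:5000
```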
## Start all services

    make dev
This starts all services with hot reload via air:
- Dashboard: http://localhost:5173
- GraphQL Playground: http://localhost:8080/playground
Log in through the dashboard using your GitHub account (via the GitHub App OAuth flow), connect a repository, and deploy it.
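If something doesn't load, a quick probe of both endpoints tells you which service isn't up yet:

```shell
# Check the two user-facing endpoints started by `make dev`.
for url in http://localhost:5173 http://localhost:8080/playground; do
  if curl -fsS -o /dev/null "$url" 2>/dev/null; then
    echo "up:   $url"
  else
    echo "down: $url"
  fi
done
```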
## What just happened?
Here's the flow from "connect repo" to "app running":
1. GitHub repo connected: Lucity reads your source code (read-only, it never writes to your repo)
2. Services detected: railpack analyzes your code and figures out the language, framework, and build plan
3. Container image built: your code is built into an OCI image and pushed to the Zot registry
4. Helm values generated: the packager creates values.yaml files and commits them to the GitOps repo
5. ArgoCD syncs: ArgoCD picks up the new commit and deploys your workload to Kubernetes
No Dockerfile written. No YAML hand-edited. No "works on my machine" moments. (Okay, maybe one or two during local dev. We're honest like that.)
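You can see the result of a build yourself with crane (from the prerequisites). `crane catalog` lists the repositories in a registry, and `--insecure` matches the TLS-less local Zot:

```shell
# List the repositories the builder pushed to the local Zot registry.
if command -v crane >/dev/null 2>&1; then
  crane catalog localhost:5000 --insecure || echo "registry not reachable"
else
  echo "crane not found; see Prerequisites"
fi
```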
## Make command reference
| Command | Description |
|---|---|
| `make minikube` | Create minikube cluster with registry support |
| `make infra` | Deploy Zot, Soft-serve, ArgoCD, Envoy Gateway, CNPG |
| `make infra-down` | Uninstall infrastructure from the cluster |
| `make infra-forward` | Port-forward infrastructure services |
| `make infra-forward-stop` | Stop all infrastructure port-forwards |
| `make infra-tokens` | Generate ArgoCD and Soft-serve tokens |
| `make dns` | Set up `*.lucity.local` DNS (macOS, requires Homebrew) |
| `make dev` | Start all services with hot reload |
| `make dev-gateway` | Start only the gateway |
| `make dev-builder` | Start only the builder |
| `make dev-packager` | Start only the packager |
| `make dev-deployer` | Start only the deployer |
| `make dev-webhook` | Start only the webhook service |
| `make dev-stop` | Stop all dev services |
| `make build` | Build all services |
| `make proto` | Regenerate protobuf code |
| `make generate-graphql` | Regenerate GraphQL resolvers |
| `make test-integration` | Run full integration test suite |
| `make test-integration-short` | Run quick integration tests |
| `make test-watch` | Auto-rerun tests on file changes |
## Next steps
Now that you have Lucity running, head to Basic Concepts to understand how projects, services, environments, and deployments fit together.