
How It Works

A technical deep-dive into Lucity's architecture

Lucity is a set of seven services that orchestrate builds and deployments on Kubernetes. No magic, no black boxes: just well-defined responsibilities and standard protocols. This page explains how the pieces fit together, for the curious and the skeptical alike.

The Services

| Service | Port | Protocol | What it does |
|---|---|---|---|
| Gateway | 8080 | GraphQL | API entry point. Receives all requests from the dashboard and delegates to backend services. |
| Builder | 9001 | gRPC | Builds container images from source code using railpack. Pushes to the OCI registry. |
| Packager | 9002 | gRPC | Manages GitOps repositories. Generates and updates Helm values. Handles ejection. |
| Deployer | 9003 | gRPC | Creates and syncs ArgoCD Applications. Reports rollout health and sync status. |
| Webhook | 9004 | HTTP | Receives GitHub webhooks. Routes push events to trigger builds and deployments. |
| Cashier | 9005 | gRPC | Billing and metering. Tracks resource usage, manages Stripe subscriptions and invoicing. |
| Dashboard | 5173 | HTTP | Vue 3 SPA. The user-facing interface for managing projects, services, and deployments. |

Each service is a standalone Go binary (except Dashboard, which is a Vite app). Each owns its domain completely. No shared databases, no shared state, no tight coupling.

How They Communicate

The communication model is deliberately simple:

  • Dashboard to Gateway: GraphQL over HTTP. The dashboard is a standard SPA that talks to one API endpoint.
  • Gateway to backend services: gRPC for short-lived commands. Gateway is the only service that talks to Builder, Packager, and Deployer. It's the fan-out point.
  • Long-running operations: fire-and-poll. Trigger the operation, get an ID back, poll for status. Builds can take minutes. The dashboard polls buildStatus for progress and streams logs in real time.

No persistent WebSocket connections for state sync. No message queues. No event buses. No Kafka. The system is simple enough that you can trace any request from the dashboard to the backend service and back by reading the code.
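The fire-and-poll interaction above can be sketched in a few lines of Go. The BuildStatus type, phase names, and checkStatus callback here are illustrative assumptions, not Lucity's actual API:

```go
package main

import (
	"fmt"
	"time"
)

// BuildStatus is a hypothetical status payload; the real buildStatus
// response shape may differ.
type BuildStatus struct {
	ID    string
	Phase string // "pending", "running", "succeeded", "failed"
}

// pollBuild calls a status function until the build reaches a terminal
// phase or the attempt budget runs out. This is the whole pattern:
// trigger, get an ID, poll.
func pollBuild(id string, check func(string) BuildStatus, interval time.Duration, maxAttempts int) (BuildStatus, error) {
	var st BuildStatus
	for i := 0; i < maxAttempts; i++ {
		st = check(id)
		if st.Phase == "succeeded" || st.Phase == "failed" {
			return st, nil
		}
		time.Sleep(interval)
	}
	return st, fmt.Errorf("build %s still %q after %d polls", id, st.Phase, maxAttempts)
}

func main() {
	// Fake backend: reports "running" twice, then "succeeded".
	calls := 0
	check := func(id string) BuildStatus {
		calls++
		if calls < 3 {
			return BuildStatus{ID: id, Phase: "running"}
		}
		return BuildStatus{ID: id, Phase: "succeeded"}
	}
	st, err := pollBuild("build-123", check, time.Millisecond, 10)
	fmt.Println(st.Phase, err)
}
```

A fixed interval is shown for brevity; a real client would likely add backoff and a context deadline.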

The Deployment Flow

Here's what happens when you deploy a service: the full chain, from click to running container.

Dashboard sends a mutation

The user clicks "Deploy" in the dashboard. The dashboard sends a deploy GraphQL mutation to the Gateway with the project, service, and environment details.

Gateway calls Builder

Gateway calls Builder.Build via gRPC. Builder clones the source repository, detects the framework and language using railpack, builds a container image, and pushes it to the OCI registry (Zot).

Packager updates the GitOps repo

Gateway calls Packager.UpdateImageTag. The packager writes the new image tag to the environment's values.yaml in the GitOps repository and commits with a semantic message like deploy(development): api a1b2c3d.
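The commit message format described above is mechanical enough to sketch directly; this helper is hypothetical, not Lucity's code:

```go
package main

import "fmt"

// commitMessage renders the semantic GitOps commit message,
// e.g. deploy(development): api a1b2c3d.
func commitMessage(environment, service, shortSHA string) string {
	return fmt.Sprintf("deploy(%s): %s %s", environment, service, shortSHA)
}

func main() {
	fmt.Println(commitMessage("development", "api", "a1b2c3d"))
	// prints: deploy(development): api a1b2c3d
}
```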

Deployer syncs ArgoCD

Gateway calls Deployer.Sync. The deployer ensures the ArgoCD Application exists for this project and environment, then triggers a sync. ArgoCD detects the new commit in the GitOps repo and applies the updated manifests to Kubernetes.
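The "ensure, then sync" step is an idempotent pattern worth making explicit. The in-memory store below stands in for ArgoCD's API, which the real deployer would call instead of a map; everything here is a sketch under that assumption:

```go
package main

import "fmt"

// appStore stands in for the ArgoCD API.
type appStore struct {
	apps map[string]bool // application name -> exists
}

// ensureAndSync creates the Application if it is missing, then triggers
// a sync. Both steps are idempotent: re-running it is safe.
func (s *appStore) ensureAndSync(project, env string) (created bool, err error) {
	name := fmt.Sprintf("%s-%s", project, env)
	if !s.apps[name] {
		s.apps[name] = true
		created = true
	}
	// Sync trigger elided: ArgoCD compares the GitOps repo against the
	// cluster and applies any drift it finds.
	return created, nil
}

func main() {
	s := &appStore{apps: map[string]bool{}}
	first, _ := s.ensureAndSync("my-project", "development")
	second, _ := s.ensureAndSync("my-project", "development")
	fmt.Println(first, second) // created on the first call, found on the second
}
```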

Kubernetes rolls out the update

ArgoCD applies the new Deployment manifest. Kubernetes performs a rolling update, spinning up new pods with the new image, waiting for health checks, then terminating old pods.

Dashboard shows progress

The dashboard polls deployStatus for real-time updates. It shows build logs streaming in, then sync status, then rollout health. When the new pods are healthy, the deployment is complete.

External Dependencies

Lucity depends on a small set of open-source tools:

| Dependency | Role | Replaceable? |
|---|---|---|
| Kubernetes | Runtime platform for all workloads | It's Kubernetes. That's the runtime. |
| ArgoCD | GitOps reconciliation: watches Git, applies to K8s | Any GitOps tool (Flux, etc.) |
| Soft-serve | Git hosting for GitOps repositories | Any Git server |
| Zot | OCI registry for container images | Any OCI-compliant registry |
| GitHub App | Source code access and OAuth authentication | Any OAuth-compatible Git host |

All open source. All replaceable. The build engine and registry are interfaces in the codebase. Swapping implementations is a design goal, not an afterthought.

Why No Message Queue?

Some architectures would reach for Kafka or NATS for inter-service communication. Lucity doesn't, for three reasons:

  1. Operations are short-lived or polled. A build takes minutes, but the interaction is "start build, poll for status," not a stream of events that needs reliable delivery.
  2. State lives in Git and Kubernetes, not in flight. There's no event stream that is the source of truth. Git is the source of truth. Kubernetes is the source of truth. If a message is lost, the state is still there.
  3. Simpler to operate. One less system to monitor, configure, tune, and debug at 2 AM. For a platform that values small-team operability, that matters.

The architecture is boring on purpose. Boring is debuggable. Boring is operable. Boring lets you sleep at night.