Philosophy

The principles that guide every design decision in Lucity

Lucity is built on a small set of principles. They're not aspirational. They're enforced in every feature, every API, every line of code. If a feature violates one of these, it doesn't ship.

Ejectability First

Every feature must survive ejection. Before anything ships, the question is: "If a user runs lucity eject right now, does this feature survive as standard Kubernetes, Helm, and ArgoCD configs?" If it can't, it doesn't ship.

This isn't a nice-to-have. It's the architectural constraint that shapes everything else. It's why we use Gateway API instead of a custom proxy. It's why we use CloudNativePG instead of a proprietary database layer. It's why the GitOps repo is plain Helm values, not a proprietary format.

Ejectability means you're never stuck. You can leave on a Tuesday afternoon and be running independently by Wednesday morning. Try that with Railway.

No Database

Lucity has no central database. Zero. All state is derived from external systems:

  • Git for configuration and deployment history
  • Kubernetes for runtime state
  • The OCI registry for container images
  • Your identity provider for authentication
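To make the "derived, not stored" idea concrete, here is a minimal sketch of the pattern. The helper functions are stand-ins (real code would query git, the Kubernetes API, and the registry); the names, keys, and values are illustrative, not Lucity's actual code.

```python
# Sketch of database-free state derivation: every view is assembled on
# demand from external sources of truth, so there is nothing to migrate
# and nothing that lives only inside the platform.

def config_from_git(project):
    # Stand-in for reading Helm values out of the GitOps repo.
    return {"replicas": 2, "image_tag": "v1.4.0"}

def runtime_from_kubernetes(project):
    # Stand-in for listing deployments/pods via the Kubernetes API.
    return {"ready_replicas": 2, "phase": "Running"}

def images_from_registry(project):
    # Stand-in for listing tags in the OCI registry.
    return ["v1.3.9", "v1.4.0"]

def project_state(project):
    """Assemble the full picture at request time -- no platform DB involved."""
    return {
        "config": config_from_git(project),
        "runtime": runtime_from_kubernetes(project),
        "images": images_from_registry(project),
    }

state = project_state("myapp")
```

The design consequence is the one the text describes: if the aggregator disappears, the underlying sources still hold everything.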

If you're tempted to add a database to a PaaS, you're probably building lock-in. A database is a leash. It's where vendor-specific state accumulates, where migration nightmares begin, where "just one more table" turns into "we can never leave." We chose not to have one.

Every piece of state in Lucity is derivable from systems you already run. If Lucity disappears tomorrow, nothing is lost, because nothing lived exclusively inside it.

Your Repo is Sacred

The platform never writes to your source repository. Not a commit, not a file, not a webhook configuration, not a .lucity.yaml, not a Procfile with platform-specific magic. Your code is yours.

All platform-managed configuration lives in the GitOps repository, which you can inspect, modify, or take with you. The separation is clean and absolute.
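Because the GitOps repo is plain Helm values, its contents are readable without any Lucity tooling. A hypothetical file might look like this (the layout and keys below are illustrative, not a documented schema):

```yaml
# gitops-repo/apps/myapp/values.yaml  (hypothetical layout)
image:
  repository: registry.example.com/myapp
  tag: v1.4.0
replicas: 2
service:
  port: 8080
```

Anything Helm can render, you can render; there is nothing here that stops working after ejection.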

Discovery Over Definition

Lucity doesn't maintain custom CRDs or mapping tables. There's no LucityProject resource, no LucityService object, no proprietary schema you need to learn.

  • A "project" is a set of namespaces carrying lucity.dev/project labels
  • A "service" is discovered from Helm values and the Kubernetes API
  • A "database" is a CNPG Cluster with platform labels

Standard kubectl works for everything. kubectl get ns -l lucity.dev/project=myapp shows you all namespaces for a project. No special tooling, no proprietary query language, no "please install our CLI to see what's happening."
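The discovery rule itself is just label filtering. As an illustrative sketch (the label key comes from the docs above; the data and function are invented for the example, not Lucity's code):

```python
# Discovery over definition: a "project" is nothing more than the set of
# namespaces sharing a lucity.dev/project label -- no CRD required.

namespaces = [
    {"name": "myapp-prod",  "labels": {"lucity.dev/project": "myapp"}},
    {"name": "myapp-stage", "labels": {"lucity.dev/project": "myapp"}},
    {"name": "kube-system", "labels": {}},
]

def project_namespaces(namespaces, project):
    """Mirror of `kubectl get ns -l lucity.dev/project=<project>`."""
    return [ns["name"] for ns in namespaces
            if ns["labels"].get("lucity.dev/project") == project]

print(project_namespaces(namespaces, "myapp"))  # ['myapp-prod', 'myapp-stage']
```

In a real cluster the same selector runs through any Kubernetes client, which is the point: the platform's mental model is expressible in stock tooling.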

Standard Tools

Helm. ArgoCD. Gateway API. CloudNativePG. Kubernetes. Open source, battle-tested, widely understood. Lucity is a thin orchestration layer on top of tools the community already trusts.

When you eject, you're not learning a new stack. You're using the same stack, minus one dashboard. Your team's existing Kubernetes knowledge applies directly. Your existing monitoring works. Your existing runbooks are relevant.

Non-Intrusive

If Lucity goes down, your workloads keep running. The platform is an orchestrator, not a runtime dependency. Your apps don't call Lucity APIs. They don't need a sidecar. They don't phone home. They run on plain Kubernetes.

Lucity's job is to get your applications deployed and configured correctly. Once they're running, they don't need us. That's by design. A PaaS that becomes a runtime dependency has confused convenience with control.


Most PaaS platforms say "trust us with your infrastructure." We say "here's exactly how it works, and here are the keys if you want to drive."