
Registry Authentication

How container image registry credentials are scoped and managed

Lucity uses Zot as its OCI registry for user workload images. The registry requires authentication for every operation; there is no anonymous access.

Users and Roles

The registry has two htpasswd users with different access levels:

User      Access                        Used by
builder   read, create, update, delete  Build Job pods (push images and cache)
reader    read                          Workload pods (pull images via imagePullSecrets)

Zot's access control is configured with defaultPolicy: ["read"] for any authenticated user and an explicit policy granting builder full write access. Unauthenticated requests are rejected.
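
A minimal sketch of what this looks like in Zot's configuration, following the accessControl schema shown later in this page (the htpasswd path and the "**" catch-all pattern are illustrative, not taken from the actual deployment):

```json
{
  "http": {
    "auth": { "htpasswd": { "path": "/etc/zot/htpasswd" } },
    "accessControl": {
      "repositories": {
        "**": {
          "policies": [
            { "users": ["builder"], "actions": ["read", "create", "update", "delete"] }
          ],
          "defaultPolicy": ["read"]
        }
      }
    }
  }
}
```

With no anonymous policy present, requests without valid htpasswd credentials are rejected.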

Build Push Credentials

The builder credential follows a specific path designed to keep it off the BuildKit daemon's filesystem (where untrusted RUN instructions execute):

  1. Helm creates a Secret containing a Docker config.json in the lucity-builds namespace
  2. The Build Job pod mounts this Secret and sets DOCKER_CONFIG to point to it
  3. The Job pod's build runner creates a BuildKit session auth provider from the Docker config
  4. When BuildKit needs to authenticate (push image, push/pull cache), it calls back to the Job pod's session over gRPC
  5. The Job pod responds with credentials from its mounted Secret

BuildKit never stores these credentials on disk. A malicious RUN instruction on the buildkitd pod cannot read them. See Build Isolation for the full model.
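
Steps 1 and 2 above can be sketched as Kubernetes manifests (the Secret name and mount path are illustrative assumptions; only the lucity-builds namespace and the DOCKER_CONFIG mechanism come from the text):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: builder-docker-config    # illustrative name
  namespace: lucity-builds
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: "<base64-encoded config.json with the builder credential>"
---
# Fragment of the Build Job pod spec: mount the Secret and point
# DOCKER_CONFIG at the directory containing config.json.
spec:
  containers:
    - name: build-runner
      env:
        - name: DOCKER_CONFIG
          value: /run/secrets/docker
      volumeMounts:
        - name: docker-config
          mountPath: /run/secrets/docker
          readOnly: true
  volumes:
    - name: docker-config
      secret:
        secretName: builder-docker-config
        items:
          - key: .dockerconfigjson
            path: config.json
```

The Secret is mounted only into the Job pod, never into the buildkitd pod, which is what keeps it out of reach of RUN instructions.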

Workload Pull Credentials

Workload pods pull images from the registry using Kubernetes imagePullSecrets. The reader credential reaches workload namespaces without flowing through GitOps repos:

  1. The platform Helm chart creates a source Secret (lucity-registry-pull, type kubernetes.io/dockerconfigjson) in lucity-system
  2. When the deployer creates a new environment namespace, it clones this Secret into the workload namespace as lucity-registry
  3. Deployment and CronJob templates in the lucity-app chart unconditionally reference lucity-registry via imagePullSecrets
  4. kubelet uses the credentials when pulling images for the pods

The clone is idempotent: if the target Secret already exists it is updated; if the source Secret is missing (e.g., dev environments without a private registry), the deployer logs a warning and skips the clone. No credentials are committed to GitOps repos.
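
The deployer's clone behavior amounts to an idempotent upsert. A minimal sketch in Python, modeling namespaced Secret storage as a dictionary (the helper function is hypothetical; the Secret names and the warn-and-skip behavior are from the description above):

```python
import logging

logger = logging.getLogger("deployer")

SOURCE_NS, SOURCE_NAME = "lucity-system", "lucity-registry-pull"
TARGET_NAME = "lucity-registry"

def clone_pull_secret(secrets: dict, target_ns: str) -> bool:
    """Copy the registry pull Secret into a workload namespace.

    `secrets` maps (namespace, name) -> secret data. Returns True if the
    target Secret was created or updated, False if the source is missing.
    """
    source = secrets.get((SOURCE_NS, SOURCE_NAME))
    if source is None:
        # Dev environments may run without a private registry: warn and skip.
        logger.warning("source Secret %s/%s not found; skipping clone",
                       SOURCE_NS, SOURCE_NAME)
        return False
    # Create-or-update: safe to run on every deploy of the environment.
    secrets[(target_ns, TARGET_NAME)] = dict(source)
    return True
```

Running it again for the same namespace simply overwrites the clone with the current source data, which is why the deployer can invoke it on every reconcile.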

imagePullSecrets are consumed by kubelet at the node level during image pulls. They are not mounted into the container filesystem and not accessible to application code.
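
In a workload pod spec, the reference from step 3 looks like this (Deployment fragment; the workload name and image reference are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                        # illustrative workload name
spec:
  template:
    spec:
      imagePullSecrets:
        - name: lucity-registry    # the cloned Secret; read by kubelet, not the container
      containers:
        - name: api
          image: registry.example/workspace/api:a1b2c3d
```

Because only the Secret's name is referenced, the template can include it unconditionally; the deployer's clone step guarantees the Secret exists in any namespace it manages.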

Why Not Anonymous Reads?

An earlier version of the platform allowed anonymous read access to the registry. This was removed because:

  • Workload pods are blocked from reaching the registry by NetworkPolicy (egress to cluster CIDRs is denied), but build pods can reach it
  • A malicious RUN instruction during a build could list and pull any image from the registry, including images from other workspaces
  • imagePullSecrets are consumed by kubelet, which runs on the node (not inside the pod's network namespace), so NetworkPolicies on the pod don't affect image pulls

Workspace-Scoped Credentials (Planned)

The current model uses shared credentials: one builder for all pushes, one reader for all pulls. In a multi-workspace setup, a compromised build in workspace A could theoretically overwrite images belonging to workspace B (if it obtained the builder credential, which is only on the Job pod).

Zot supports path-based access control with glob patterns:

{
  "accessControl": {
    "repositories": {
      "workspace-a/**": {
        "policies": [
          { "users": ["ws-a-builder"], "actions": ["read", "create", "update"] }
        ]
      }
    }
  }
}

Each workspace would get its own htpasswd user, restricted to pushing images under its own path. Workspace A's credentials couldn't read or overwrite workspace B's images.

The limitation is that Zot uses static htpasswd files. Adding a user requires updating the file and reloading. For dynamic credential issuance, Zot supports Docker v2 bearer token authentication with repository-scoped access claims in JWTs.
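
Under that scheme, a token minted for workspace A's builder would carry repository-scoped access claims along these lines (JWT payload fragment in the shape used by Docker Registry v2 token authentication; issuer, audience, and repository names are illustrative):

```json
{
  "iss": "lucity-auth",
  "sub": "ws-a-builder",
  "aud": "lucity-registry",
  "access": [
    {
      "type": "repository",
      "name": "workspace-a/api",
      "actions": ["pull", "push"]
    }
  ]
}
```

The registry would then authorize each request against the `access` claims rather than a static htpasswd entry, allowing credentials to be issued and revoked per build without file reloads.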

Image Integrity

Every built image is tagged with the git commit SHA and labeled with OCI-standard metadata:

org.opencontainers.image.source: "https://github.com/acme/myapp"
org.opencontainers.image.revision: "a1b2c3d"
lucity.dev/built-by: "lucity-builder"
lucity.dev/service: "api"

These labels provide traceability from a running container back to the exact source commit. Combined with the planned workspace-scoped credentials, they would ensure that the image running in production was built from the code you expect, by the platform you trust, and pushed by credentials that belong to your workspace.