# Build Isolation
Builds are the riskiest operation a PaaS performs. You're taking someone's source code, running detection heuristics over it, and executing arbitrary build commands to produce a container image. Lucity separates every part of this pipeline from the platform itself.
## Architecture
The build pipeline has three components, all running in the dedicated `lucity-builds` namespace:
| Component | Role |
|---|---|
| Build Job pod | Trusted platform code. Clones the repo, runs railpack to detect the framework, sends the build plan to BuildKit via gRPC. Holds registry credentials. |
| BuildKit daemon | Long-lived process. Receives LLB build graphs from Job pods, executes RUN instructions, pushes built images to the registry. No credentials on its filesystem. |
| BuildKit RUN steps | Untrusted user code. Executes as subprocesses of the buildkitd process. |
### The buildkitd process vs. RUN instructions
This distinction is critical to the security model. BuildKit creates a real OCI container (via runc) for every RUN step, but with `--oci-worker-no-process-sandbox`, the PID namespace and network namespace are shared with buildkitd. Each RUN step does get its own mount namespace (`pivot_root` into the image layers), so the buildkitd container's filesystem is not directly visible from within a RUN step. However, PID sharing means RUN steps can see and signal buildkitd's processes, and network sharing means they inherit buildkitd's full network access.
The mount namespace isolation is a useful defense layer, but not one we rely on exclusively. Credentials are kept off the BuildKit daemon regardless, and network policies restrict what buildkitd (and therefore RUN steps) can reach.
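The shared-vs-private namespace split described above can be observed from inside a process by comparing `/proc/<pid>/ns/*` links. A minimal Linux-only sketch (helper names are illustrative, not Lucity code):

```python
import os

def ns_id(pid, ns: str) -> str:
    """Return a process's namespace identifier, e.g. 'pid:[4026531836]'."""
    return os.readlink(f"/proc/{pid}/ns/{ns}")

def shares_namespace(pid_a, pid_b, ns: str) -> bool:
    """True if two processes are in the same `ns` namespace."""
    return ns_id(pid_a, ns) == ns_id(pid_b, ns)

# Under --oci-worker-no-process-sandbox, comparing a RUN step's process
# against buildkitd's PID would show:
#   shares_namespace(run_pid, buildkitd_pid, "pid") -> True   (shared)
#   shares_namespace(run_pid, buildkitd_pid, "net") -> True   (shared)
#   shares_namespace(run_pid, buildkitd_pid, "mnt") -> False  (private per RUN step)
```

Because the namespace identifiers are just inode references, this check needs no privileges beyond reading `/proc`.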
## Credential Flow
Registry credentials follow a path that keeps them off the BuildKit daemon's filesystem:
- The Helm chart creates a Secret containing a Docker `config.json` in `lucity-builds`
- The Build Job pod mounts this Secret at `/etc/registry-auth/` and sets `DOCKER_CONFIG=/etc/registry-auth`
- The Job pod's build runner loads the Docker config and creates a BuildKit session auth provider
- When BuildKit needs to authenticate to the registry (push image, push/pull cache), it calls back to the Job pod's session over gRPC
- The Job pod's auth provider responds with credentials from its mounted Secret
BuildKit never stores or caches these credentials on disk. They exist only in memory during the gRPC session. Since RUN steps execute on the buildkitd pod (not the Job pod), they cannot access the mounted Secret.
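The Job-pod side of that gRPC callback amounts to resolving credentials from the mounted `config.json` on demand. A minimal sketch of the lookup (function name and structure are illustrative, not Lucity's actual code):

```python
import base64
import json

def load_registry_auth(config_text: str, registry: str):
    """Resolve (username, password) for `registry` from a Docker config.json.

    Docker config stores credentials as base64("user:password") under
    auths.<registry>.auth; this decodes that on each request, in memory.
    """
    cfg = json.loads(config_text)
    entry = cfg.get("auths", {}).get(registry)
    if entry is None:
        return None
    user, _, password = base64.b64decode(entry["auth"]).decode().partition(":")
    return user, password
```

Because the lookup runs in the Job pod's memory per gRPC request, nothing credential-shaped ever needs to be copied to the BuildKit daemon's filesystem.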
## Why a Separate Namespace
Build Jobs and BuildKit run in `lucity-builds`, not `lucity-system` (where platform services live). This matters because:
- DNS isolation: Build pods can't resolve `lucity-gateway`, `lucity-deployer`, or other platform service names. Their DNS search domain is `lucity-builds.svc.cluster.local`.
- Blast radius: If railpack or the git clone process has a vulnerability, the attacker lands in a namespace with no platform components.
- Network policy boundary: `lucity-builds` has its own deny-all NetworkPolicy with selective egress rules. BuildKit can only reach Zot, DNS, and the public internet. Build Job pods can only reach BuildKit, the K8s API (to annotate their Job), DNS, and the public internet.
- RBAC isolation: Two ServiceAccounts with minimal permissions. The `lucity-build-job` SA (used by Job pods) can only get, list, and patch Jobs in `lucity-builds` (to write build results as annotations). The `lucity-builder` SA (used by the builder service in `lucity-system`) can get, list, watch, and create Jobs plus read pod logs in `lucity-builds`. Neither has access to Secrets, ConfigMaps, Deployments, or anything else.
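The two permission sets are small enough to write out exhaustively. A sketch encoding them as data (purely illustrative — the real rules live in RBAC Role/RoleBinding manifests, not code):

```python
# Each ServiceAccount's allowed (verb, resource) pairs, as described above.
ALLOWED = {
    "lucity-build-job": {
        ("get", "jobs"), ("list", "jobs"), ("patch", "jobs"),
    },
    "lucity-builder": {
        ("get", "jobs"), ("list", "jobs"), ("watch", "jobs"),
        ("create", "jobs"), ("get", "pods/log"),
    },
}

def is_allowed(service_account: str, verb: str, resource: str) -> bool:
    """Check a (verb, resource) pair against the minimal permission sets."""
    return (verb, resource) in ALLOWED.get(service_account, set())
```

Note the asymmetry: only the Job pod's SA can patch Jobs (to write results), and only the builder's SA can create them.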
## Build Job Lifecycle
- Gateway calls `StartBuild()` on the builder service
- Builder creates a Kubernetes Job in `lucity-builds`
- The Job pod clones the repo, runs railpack detection, sends LLB to BuildKit
- BuildKit executes the build (including user `RUN` commands) and pushes the image
- The Job pod annotates itself with the image reference and digest
- Builder polls the Job status every 3 seconds until completion
- Jobs auto-delete after 24 hours via TTL
Build results (image reference, digest, or error message) are stored as annotations on the Job object. No shared filesystem, no database, no message queue.
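The builder's polling step reduces to a plain loop over the Job's annotations. A minimal sketch, assuming a `get_job` callable that fetches the Job object from the K8s API and hypothetical `lucity.dev/*` annotation keys (the real keys are whatever the Job pod writes):

```python
import time

def wait_for_build(get_job, interval: float = 3.0, timeout: float = 3600.0):
    """Poll a Job until its result annotations appear, then return them.

    Raises RuntimeError with the build error message on failure, or
    TimeoutError if no result shows up before the deadline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        annotations = get_job().get("metadata", {}).get("annotations", {})
        if "lucity.dev/error" in annotations:
            raise RuntimeError(annotations["lucity.dev/error"])
        if "lucity.dev/image" in annotations:
            return annotations["lucity.dev/image"], annotations.get("lucity.dev/digest")
        time.sleep(interval)
    raise TimeoutError("build did not complete before timeout")
```

Using annotations as the result channel keeps the pipeline's only shared state on the Job object itself, which the TTL controller cleans up along with everything else.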
## BuildKit Security Context
BuildKit requires `seccompProfile: Unconfined` to function. The OCI worker needs syscalls like `unshare` and `mount` to set up build environments, even with `--oci-worker-no-process-sandbox`. This is a BuildKit requirement, not a Lucity design choice.
Because RUN steps are subprocesses of buildkitd, they inherit the Unconfined seccomp profile. This is the primary reason the other isolation layers (network policies, credential separation, namespace isolation) are critical. See Pod Security for more on seccomp trade-offs.
## What User Code Can and Cannot Do
A malicious RUN instruction can:
- Make network requests (constrained by buildkitd's NetworkPolicy to DNS, Zot, and the public internet)
- Consume CPU and memory (constrained by buildkitd's resource limits)
- See and signal buildkitd's processes (shared PID namespace)
A malicious RUN instruction cannot:
- Read registry credentials (they're on the Job pod, delivered via gRPC session)
- Reach platform services in `lucity-system` (blocked by NetworkPolicy)
- Reach workloads in other namespaces (blocked by NetworkPolicy)
- Escalate privileges (BuildKit runs as non-root UID 1000, seccomp is Unconfined but capabilities are limited)
- Persist beyond the build (buildkitd restarts clean; Job pods are ephemeral)
## Non-Root Images
Railpack doesn't currently support building images with a non-root user (railwayapp/railpack#286). All files in the built image are owned by root. The build pipeline works around this by appending LLB steps after the railpack plan:
- Create a `lucity` group and user with UID/GID 1000
- `chown` the image's `WORKDIR` to 1000:1000 (the `WORKDIR` is read from railpack's image config, not hardcoded)
- Set `USER 1000:1000` in the OCI image config
This ensures the built image declares a non-root user and has correct file ownership. At runtime, the `lucity-app` chart enforces `runAsUser: 1000` and `runAsNonRoot: true` in the pod security context as a second layer. See Pod Security for details.
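The last of the three steps — setting `USER 1000:1000` — is just an edit to the OCI image config JSON that is pushed alongside the layers. A minimal sketch (illustrative helper, not railpack or Lucity code):

```python
import json

def set_nonroot_user(oci_config_text: str, uid: int = 1000, gid: int = 1000) -> str:
    """Set `User` in an OCI image config blob.

    The chown of WORKDIR happens earlier, as appended LLB RUN steps that
    produce real filesystem layers; this only rewrites the config JSON
    (equivalent to a Dockerfile's `USER 1000:1000`).
    """
    cfg = json.loads(oci_config_text)
    cfg.setdefault("config", {})["User"] = f"{uid}:{gid}"
    return json.dumps(cfg)
```

Keeping this as a config edit rather than a RUN step means it adds no layer to the image.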