
Network Policies

How Lucity restricts network traffic between workloads and platform components

Every namespace managed by Lucity starts with a deny-all NetworkPolicy. Nothing talks to anything unless explicitly allowed. This is the opposite of Kubernetes' default behavior, where all pods can reach all other pods across all namespaces.
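In standard Kubernetes terms, that baseline looks like the following (an illustrative sketch, not Lucity's exact resource). An empty `podSelector` matches every pod in the namespace, and listing both policy types with no rules denies all traffic:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all      # illustrative name
  namespace: lucity-builds
spec:
  podSelector: {}             # empty selector = all pods in the namespace
  policyTypes:
    - Ingress                 # no ingress rules listed -> all ingress denied
    - Egress                  # no egress rules listed -> all egress denied
```

Targeted allow policies are then layered on top; NetworkPolicy rules are additive, so each service only needs a policy opening its own required paths.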

Build Namespace (lucity-builds)

The builds namespace starts with a default deny-all policy on both ingress and egress (no pod can send or receive any traffic). BuildKit and Build Job pods each get their own targeted policies that open only the minimum required traffic on top of this baseline.

BuildKit can reach:

| Destination | Port | Why |
| --- | --- | --- |
| CoreDNS (kube-system) | 53 UDP/TCP | DNS resolution |
| Zot registry (lucity-system) | 5000 TCP | Push built images and cache layers |
| Public internet | any | Pull base images from Docker Hub, GHCR, etc. |

BuildKit accepts ingress only from Build Job pods (matched by the `lucity.dev/component: build` label in the same namespace) and the builder service (cross-namespace from `lucity-system`, matched by namespace label plus pod label). Since user RUN steps share BuildKit's network namespace, these egress rules are the primary constraint on what malicious build commands can reach.
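The ingress side of that policy can be sketched as a standard NetworkPolicy. The policy name, the BuildKit pod label, and the builder pod label below are assumptions for illustration; only the `lucity.dev/component: build` selector comes from the text above:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: buildkit-ingress             # illustrative name
  namespace: lucity-builds
spec:
  podSelector:
    matchLabels:
      lucity.dev/component: buildkit # assumed label on the BuildKit pods
  policyTypes: [Ingress]
  ingress:
    # Build Job pods in the same namespace
    - from:
        - podSelector:
            matchLabels:
              lucity.dev/component: build
    # The builder service, cross-namespace from lucity-system
    # (namespaceSelector + podSelector in one entry are ANDed together)
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: lucity-system
          podSelector:
            matchLabels:
              lucity.dev/component: builder   # assumed label
```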

Build Job pods can reach:

| Destination | Port | Why |
| --- | --- | --- |
| CoreDNS (kube-system) | 53 UDP/TCP | DNS resolution |
| BuildKit (same namespace) | 1234 TCP | Send build plans, receive session auth callbacks |
| K8s API server (10.96.0.1) | 443 TCP | Annotate Job with build results |
| Public internet | any | Clone repos from GitHub, fetch dependencies |

Build Job pods accept no ingress. The cluster-internal CIDRs (pod network and service network, configurable via `networkPolicy.podCIDR` and `networkPolicy.serviceCIDR`) are explicitly excluded from internet egress, so build pods can't probe other services or workloads.
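The egress side can be sketched like this. The CIDR values are placeholders for whatever `networkPolicy.podCIDR` and `networkPolicy.serviceCIDR` are set to in your cluster:

```yaml
# Fragment of an illustrative Build Job egress policy (CIDRs are examples).
egress:
  # Public internet, minus cluster-internal ranges
  - to:
      - ipBlock:
          cidr: 0.0.0.0/0
          except:
            - 10.244.0.0/16   # networkPolicy.podCIDR (example value)
            - 10.96.0.0/12    # networkPolicy.serviceCIDR (example value)
  # The K8s API server stays reachable: NetworkPolicy rules are additive,
  # so this narrower allow adds access despite the exclusion above
  - to:
      - ipBlock:
          cidr: 10.96.0.1/32
    ports:
      - port: 443
        protocol: TCP
```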

Platform Namespace (lucity-system)

Platform services have component-specific policies:

All platform pods start with a deny-all ingress policy. Each service gets its own ingress policy allowing traffic only from other pods in the same Helm release (identified by the `app.kubernetes.io/instance` label). Internet-facing services (gateway, webhook, dashboard, docs) allow ingress from any source on their specific port.
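A same-release ingress policy of this shape might look like the sketch below. The policy name, target service label, and release name are assumptions; the `app.kubernetes.io/instance` selector is the piece described above:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-release          # illustrative name
  namespace: lucity-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: builder   # assumed target service
  policyTypes: [Ingress]
  ingress:
    - from:
        # Only pods installed by the same Helm release
        - podSelector:
            matchLabels:
              app.kubernetes.io/instance: lucity   # assumed release name
```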

User Workload Namespaces

Each project environment gets its own namespace (e.g., acme-myapp-production). Workload network policies are defined in the lucity-app Helm chart:

- Services within the same namespace can communicate freely on both ingress and egress (service to service, service to PostgreSQL, service to Redis)
- Ingress from the platform namespace (`lucity-system`) is allowed for internal access
- Egress to the public internet is allowed, but cluster-internal CIDRs (pod and service networks) are explicitly excluded, so pods can't reach the registry, K8s API, or other namespaces
- Public services are exposed via Gateway API HTTPRoutes. Since Cilium uses a reserved "ingress" identity for Gateway API traffic that standard K8s NetworkPolicy can't match, a CiliumNetworkPolicy allows ingress from the ingress entity
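The last point can be sketched as a CiliumNetworkPolicy. The policy name and endpoint selector below are assumptions for illustration; the essential part is `fromEntities: [ingress]`, which matches Cilium's reserved identity for Gateway API traffic:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-from-gateway            # illustrative name
  namespace: acme-myapp-production
spec:
  endpointSelector:
    matchLabels:
      lucity.dev/component: service   # assumed workload label
  ingress:
    # "ingress" is Cilium's reserved entity for traffic arriving
    # through its Gateway API / Envoy ingress path
    - fromEntities:
        - ingress
```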

The egress block on cluster CIDRs is important for registry security: even though workload pods can't authenticate to the registry without credentials, the NetworkPolicy ensures they can't even reach it over the network.

What the Policies Don't Cover

NetworkPolicies operate at L3/L4 (IP addresses and ports). They don't inspect HTTP headers, authenticate requests, or enforce application-level authorization. If two pods are allowed to communicate by NetworkPolicy, they can send any traffic on the allowed port.

For application-level security (authentication, authorization, rate limiting), rely on your application code and the platform's OIDC integration.

Verifying Policies

You can verify NetworkPolicies are enforced by exec'ing into the buildkitd pod (where user RUN steps execute):

```shell
# This should time out (K8s API blocked from BuildKit)
kubectl exec -n lucity-builds deploy/lucity-buildkit -- \
  wget -q -O- -T 5 https://kubernetes.default.svc/version

# This should also time out (cross-namespace traffic blocked)
kubectl exec -n lucity-builds deploy/lucity-buildkit -- \
  wget -q -O- -T 5 http://lucity-gateway.lucity-system:8080/health

# This should return 401 (network path is allowed, but the registry requires auth)
kubectl exec -n lucity-builds deploy/lucity-buildkit -- \
  wget -q -O- -T 5 http://lucity-infra-zot.lucity-system:5000/v2/
```

If the first two commands succeed instead of timing out, your CNI plugin may not support NetworkPolicies. Cilium, Calico, and most modern CNIs enforce them; Flannel does not.