Kubernetes Network Policies for Zero Trust: Complete Security Guide


Kubernetes network policies are the foundation of zero trust networking in containerized environments. By default, every pod can communicate with every other pod in a Kubernetes cluster — a flat network with no access control. Network policies let you define granular rules about which pods can talk to which services, on which ports, and in which direction.

This guide covers everything from basic deny-all policies to advanced L7 filtering with Cilium. If you are running any production workload on Kubernetes, network policies are not optional — they are essential security hygiene.

Default Deny: The Starting Point

The first step in zero trust networking is denying all traffic by default and explicitly allowing only what is needed:

# default-deny-all.yaml
# Apply to every namespace that runs application workloads
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}    # Applies to ALL pods in namespace
  policyTypes:
    - Ingress        # Block all incoming traffic
    - Egress         # Block all outgoing traffic
(Diagram) Default deny: start by blocking all traffic, then allow specific paths

After applying default-deny, your pods cannot communicate with anything — not even DNS. Now you selectively open paths.

Essential Allow Rules

# allow-dns.yaml — Required for service discovery
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              # set automatically on every namespace since Kubernetes 1.21
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

---
# allow-api-to-database.yaml — Specific service communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgresql
      ports:
        - protocol: TCP
          port: 5432

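Keep in mind that default-deny blocks ingress on the database side too, so the egress rule above is only half of the path: the postgresql pods also need a matching ingress allow. A sketch of that companion policy, assuming the same labels used above:

```yaml
# allow-db-from-api.yaml — ingress half of the api-to-db path
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-from-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgresql
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-server
      ports:
        - protocol: TCP
          port: 5432
```

In general, every allowed path needs an egress rule on the client side and an ingress rule on the server side once default-deny covers both directions.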
---
# allow-ingress-to-api.yaml — External traffic to API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080

Advanced Policies with Cilium

Standard Kubernetes network policies work at L3/L4 (IP and port). Cilium extends this to L7, letting you filter by HTTP method, path, and headers:

# cilium-l7-policy.yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-l7-policy
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              # Only allow GET and POST to specific paths
              - method: "GET"
                path: "/api/v1/products.*"
              - method: "POST"
                path: "/api/v1/orders"
                headers:
                  - 'Content-Type: application/json'
              # Block access to admin endpoints
              # (deny by default, only specified paths allowed)
(Diagram) Cilium provides L7 visibility and filtering for HTTP, gRPC, and Kafka traffic
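The same toPorts/rules structure covers Cilium's other supported L7 protocols. As an illustrative sketch (the kafka-broker and order-service labels and the topic name are assumptions, not taken from the examples above), a policy that lets a producer write only to a single Kafka topic might look like:

```yaml
# cilium-kafka-policy.yaml — illustrative: restrict a producer to one topic
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: kafka-produce-orders
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: kafka-broker       # assumed broker label
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: order-service  # assumed producer label
      toPorts:
        - ports:
            - port: "9092"
              protocol: TCP
          rules:
            kafka:
              # only produce requests for this topic are allowed;
              # all other Kafka API calls are denied
              - role: "produce"
                topic: "orders"
```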

DNS-Based Egress Control

# cilium-dns-policy.yaml — Control external API access
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-external-apis
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: payment-service
  egress:
    # Allow only specific external domains
    - toFQDNs:
        - matchName: "api.stripe.com"
        - matchName: "api.sendgrid.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
    # Allow DNS for FQDN resolution — the dns rules below are required:
    # Cilium inspects DNS replies to learn which IPs the FQDNs resolve to
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"

Production Implementation Pattern

Implementation Order:

1. Audit current traffic (Cilium Hubble / Calico flow logs)
2. Document required communication paths
3. Apply default-deny in staging FIRST
4. Add allow rules based on audit
5. Test thoroughly — broken policies = outage
6. Apply to production namespace by namespace
7. Monitor with network policy dashboards
8. Alert on denied traffic (may indicate misconfiguration OR attack)
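If your CNI is Cilium, steps 3 through 5 can be de-risked with policy audit mode, which logs what would be dropped without actually enforcing. A sketch of the relevant cilium-config setting (the exact key is worth verifying against your Cilium version):

```yaml
# kube-system/cilium-config ConfigMap fragment — log verdicts, don't enforce
data:
  policy-audit-mode: "true"
```

With Hubble enabled, `hubble observe --namespace production --verdict DROPPED` then surfaces the denials referenced in steps 1 and 8.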

When NOT to Use Network Policies

Network policies add operational complexity, so they may not be worth it everywhere: development and sandbox clusters, single-tenant applications with no compliance requirements, or very small clusters where all pods are trusted. Also confirm that your CNI plugin actually enforces network policies; not all do (plain Flannel does not, while Calico and Cilium do).

(Diagram) Monitoring denied traffic to identify misconfigurations and potential attacks

Key Takeaways

Kubernetes network policies are essential for zero trust security. Start with default-deny, allow DNS, then open specific paths between services. Even if an attacker compromises one pod, lateral movement is then restricted to the explicitly allowed communication paths.
