Kubernetes Gateway API: The Modern Replacement for Ingress
Kubernetes Ingress has been the standard for routing external traffic to services for years, but it has fundamental limitations — no support for TCP/UDP routing, no standard way to split traffic, and vendor-specific annotations for anything beyond basic HTTP routing. The Kubernetes Gateway API replaces Ingress with an expressive, role-oriented, and portable API for traffic management. This guide covers the Gateway API, native sidecar containers (GA in Kubernetes 1.33), and how they integrate with service mesh architectures.
Why Ingress Needed Replacing
Ingress resources in Kubernetes are deceptively simple. A basic Ingress routes HTTP traffic by hostname and path. The moment you need anything more — header-based routing, request mirroring, canary deployments, rate limiting, or mutual TLS — you reach for vendor-specific annotations. An Ingress for NGINX looks completely different from one for Traefik or AWS ALB. Moreover, there is no standard way to configure TCP/UDP routing, gRPC load balancing, or traffic splitting.
The result: infrastructure teams create Ingress templates with 20+ annotations, each specific to their controller. Migrating from one controller to another means rewriting every Ingress resource. The Gateway API solves this by making advanced routing features part of the standard API instead of relegating them to annotations.
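To make the annotation problem concrete, here is a sketch of a 10% canary split on ingress-nginx. The annotation names are ingress-nginx's own; the host and service names are hypothetical, and none of this carries over to any other controller:

```yaml
# A canary on ingress-nginx requires a second Ingress plus
# controller-specific annotations; nothing here is portable.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-service-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"       # ingress-nginx only
    nginx.ingress.kubernetes.io/canary-weight: "10"  # ingress-nginx only
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /api/orders
        pathType: Prefix
        backend:
          service:
            name: order-service-v3   # hypothetical canary service
            port:
              number: 8080
```

Traefik, HAProxy, and AWS ALB each express the same intent with entirely different annotations or CRDs, which is exactly the portability gap the Gateway API closes.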
Gateway API Architecture: Roles and Resources
The Gateway API introduces three resource types that separate concerns by role:
# 1. GatewayClass — Infrastructure provider (managed by cluster operator)
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: production-gateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  description: "Production traffic gateway using Envoy"
---
# 2. Gateway — Infrastructure instance (managed by cluster operator)
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production
  namespace: gateway-system
spec:
  gatewayClassName: production-gateway
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: wildcard-cert
        kind: Secret
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            gateway-access: "true"
  - name: http
    protocol: HTTP
    port: 80
---
# 3. HTTPRoute — Traffic rules (managed by application teams)
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: order-service
  namespace: orders
spec:
  parentRefs:
  - name: production
    namespace: gateway-system
  hostnames:
  - "api.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api/orders
      headers:
      - name: x-api-version
        value: "v2"
    backendRefs:
    - name: order-service-v2
      port: 8080
      weight: 90
    - name: order-service-v3
      port: 8080
      weight: 10  # 10% canary traffic to v3
  - matches:
    - path:
        type: PathPrefix
        value: /api/orders
    backendRefs:
    - name: order-service-v2
      port: 8080

This separation is powerful. The infrastructure team manages GatewayClass and Gateway resources — they control which ports are open, which TLS certificates are used, and which namespaces can attach routes. Application teams manage HTTPRoute resources in their own namespaces — they control routing rules, traffic splitting, and backend selection. Neither team needs to understand the other's domain. Additionally, no vendor-specific annotations are needed; traffic splitting, header-based routing, and path matching are all part of the standard API.
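The Gateway controls which namespaces may attach routes; the complementary control for cross-namespace backends is ReferenceGrant. A sketch, assuming a route in the orders namespace needs to reach a Service in a hypothetical payments namespace:

```yaml
# Allow HTTPRoutes in "orders" to reference Services in "payments".
# Without this grant, cross-namespace backendRefs are rejected.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-orders-routes
  namespace: payments        # lives in the *target* namespace
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: orders
  to:
  - group: ""                # core API group (Services)
    kind: Service
```

The grant lives in the target namespace, so the team that owns the backend explicitly opts in to being referenced — the same role separation as Gateway and HTTPRoute.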
Advanced Routing: What Gateway API Enables
Gateway API supports routing patterns that required custom annotations or CRDs with Ingress:
# Traffic mirroring: send a copy of traffic to a test service
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: order-service-mirror
spec:
  parentRefs:
  - name: production
    namespace: gateway-system
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api/orders
    backendRefs:
    - name: order-service
      port: 8080
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          name: order-service-canary
          port: 8080
---
# gRPC routing (GRPCRoute resource)
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: grpc-services
spec:
  parentRefs:
  - name: production
    namespace: gateway-system
  rules:
  - matches:
    - method:
        service: orders.OrderService
        method: CreateOrder
    backendRefs:
    - name: order-grpc-service
      port: 9090
  - matches:
    - method:
        service: payments.PaymentService
    backendRefs:
    - name: payment-grpc-service
      port: 9090
---
# TCP routing (TCPRoute for non-HTTP protocols)
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: postgres-route
spec:
  parentRefs:
  - name: internal-gateway
    sectionName: postgres
  rules:
  - backendRefs:
    - name: postgres-primary
      port: 5432

Request mirroring, gRPC routing, TCP/UDP routing, and weighted traffic splitting are all standard features of the Gateway API. Because every conformant controller implements the same API, you can switch from Envoy Gateway to Istio Gateway to Contour without rewriting your routing rules.
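UDP follows the same shape as TCP: UDPRoute is also in the v1alpha2 experimental channel. A sketch, assuming the internal gateway has a UDP listener named dns and a coredns backend (both names are illustrative):

```yaml
# UDP routing (UDPRoute, experimental channel) — e.g. DNS traffic
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  name: dns-route
spec:
  parentRefs:
  - name: internal-gateway
    sectionName: dns         # assumes a UDP listener named "dns"
  rules:
  - backendRefs:
    - name: coredns
      port: 53
```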
Sidecar Containers GA in Kubernetes 1.33
Kubernetes 1.33 promotes native sidecar containers to GA. Previously, sidecar containers (like Envoy proxies, log collectors, or secret injectors) were regular containers that happened to run alongside your application. The problem: Kubernetes treated all containers equally, so sidecars sometimes started after the application or shut down before it, causing startup failures and connection errors during shutdown.
# Native sidecar container (Kubernetes 1.33+)
apiVersion: v1
kind: Pod
metadata:
  name: order-service
spec:
  initContainers:
  # restartPolicy: Always makes this a sidecar
  - name: envoy-proxy
    image: envoyproxy/envoy:v1.31
    restartPolicy: Always  # This is the sidecar magic
    ports:
    - containerPort: 15001
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi
  - name: log-collector
    image: fluent/fluent-bit:3.0
    restartPolicy: Always
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  containers:
  - name: order-service
    image: myapp/order-service:v3.2
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}  # shared log volume mounted by app and log-collector

With native sidecars, Kubernetes guarantees: sidecars start before application containers and stop after them, sidecar crashes trigger a restart without killing the main container, and sidecar resource usage is properly accounted for in scheduling. This matters for service mesh proxies (Envoy, Linkerd) which must be ready before the application sends traffic.
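The startup ordering can be made stricter: if a sidecar-style init container defines a startupProbe, the kubelet waits for the probe to pass before starting the next container, so the application never races a half-started proxy. A sketch of the relevant fragment, with an assumed readiness path and port (Envoy's admin endpoint is commonly used for this; adjust to your proxy's configuration):

```yaml
# Fragment of pod.spec: gate app startup on sidecar readiness
initContainers:
- name: envoy-proxy
  image: envoyproxy/envoy:v1.31
  restartPolicy: Always
  startupProbe:
    httpGet:
      path: /ready            # illustrative; depends on proxy config
      port: 9901
    periodSeconds: 1
    failureThreshold: 30      # up to ~30s for the proxy to come up
```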
Service Mesh Integration
The Gateway API and native sidecars converge in service mesh architectures. Istio, Linkerd, and Cilium all support Gateway API as their ingress layer. With native sidecars for proxy injection and Gateway API for routing, you get a complete traffic management stack built on Kubernetes standards rather than custom CRDs.
Istio’s ambient mode goes further by removing sidecar proxies entirely, replacing them with a per-node ztunnel proxy for L4 (mTLS, authorization) and optional waypoint proxies for L7 (HTTP routing, retries). This reduces resource overhead from ~128MB per pod to shared node-level proxies. However, ambient mode is newer and does not support all Istio features yet.
Migrating from Ingress to Gateway API
You do not need to migrate all at once. Gateway API and Ingress can coexist in the same cluster: your existing Ingress controller keeps serving Ingress resources while a Gateway API controller (such as NGINX Gateway Fabric or Envoy Gateway) serves Gateway and HTTPRoute resources alongside it. Start by creating a Gateway resource, migrate one service's routing to an HTTPRoute, verify it works, then migrate the next.
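As a concrete before-and-after (hostnames and service names assumed from the earlier examples), here is a minimal Ingress and its Gateway API equivalent:

```yaml
# Before: a minimal Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-service
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /api/orders
        pathType: Prefix
        backend:
          service:
            name: order-service
            port:
              number: 8080
---
# After: the equivalent HTTPRoute, attached to an existing Gateway
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: order-service
spec:
  parentRefs:
  - name: production          # assumes the Gateway from earlier
    namespace: gateway-system
  hostnames:
  - "api.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api/orders
    backendRefs:
    - name: order-service
      port: 8080
```

The kubernetes-sigs ingress2gateway tool can generate a first draft of such conversions from your existing Ingress resources, though the output still deserves review.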
In conclusion, the Kubernetes Gateway API brings standardized, role-oriented traffic management that replaces Ingress's annotation-driven approach. Combined with native sidecar containers, GA in Kubernetes 1.33, the platform now has proper lifecycle management for proxy containers. Migrate incrementally — start with one HTTPRoute, verify it works, and expand from there.