Event-Driven Architecture with Kafka: Beyond Simple Pub/Sub
Most teams use Kafka as little more than a message queue. Its real power emerges from event sourcing, stream processing, and the patterns that appear when events become your source of truth.
Event Sourcing: Events as the Database
Instead of storing current state, store every event that led to the current state. An order is not a row — it is a sequence: OrderCreated → PaymentReceived → ItemsShipped → OrderDelivered.
import java.time.Instant;
import java.util.List;
import com.fasterxml.jackson.databind.JsonNode;

public record OrderEvent(
    String orderId,
    String eventType,
    Instant timestamp,
    JsonNode payload
) {}

// Rebuild current state by replaying events in offset order
Order rebuild(List<OrderEvent> events) {
    Order order = new Order();
    for (OrderEvent event : events) {
        order = order.apply(event);
    }
    return order;
}
CQRS: Separate Read and Write Models
Write events to Kafka. Consume them into read-optimized views — Elasticsearch for search, Redis for caching, PostgreSQL for reports. Each consumer builds exactly the view it needs.
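A sketch of one such read-side projection. In production the events would arrive via a Kafka consumer and the view would live in Redis or Elasticsearch; here an in-memory map keeps the example self-contained, and the class and status names are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Read-optimized view: orderId -> current status. A sketch — in production
// this would be fed by a Kafka consumer loop and backed by an external store.
class OrderStatusView {
    private final Map<String, String> statusByOrder = new HashMap<>();

    // Called once per event, in offset order, by the consumer loop.
    void project(String orderId, String eventType) {
        String status = switch (eventType) {
            case "OrderCreated"    -> "CREATED";
            case "PaymentReceived" -> "PAID";
            case "ItemsShipped"    -> "SHIPPED";
            case "OrderDelivered"  -> "DELIVERED";
            default -> statusByOrder.get(orderId); // unknown events leave the view unchanged
        };
        if (status != null) statusByOrder.put(orderId, status);
    }

    String statusOf(String orderId) {
        return statusByOrder.get(orderId);
    }
}
```

Because the projection is rebuilt purely from events, it can be dropped and replayed from the beginning of the topic whenever the view's shape needs to change.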
Saga Pattern for Distributed Transactions
Orchestration sagas use a central coordinator. Choreography sagas let each service react to events independently. Choose orchestration when you need visibility into the workflow state; choose choreography when services should remain decoupled. Either way, each local transaction needs a compensating action that can undo it if a later step fails.
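An orchestration saga can be sketched as a coordinator that runs each step in order and, on failure, executes compensations in reverse. The step names and the boolean success convention below are assumptions for illustration, not a fixed API:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.Supplier;

// Orchestration-saga sketch: each step pairs an action with a compensation.
// On failure, the coordinator unwinds completed steps in reverse (LIFO) order.
class SagaOrchestrator {
    record Step(String name, Supplier<Boolean> action, Runnable compensation) {}

    boolean run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            if (step.action().get()) {
                completed.push(step); // remember for possible rollback
            } else {
                // A later step failed: compensate everything done so far.
                while (!completed.isEmpty()) completed.pop().compensation().run();
                return false;
            }
        }
        return true;
    }
}
```

In a choreography saga there is no such coordinator: each service would subscribe to the preceding service's events and publish its own success or failure event, with compensations triggered by the failure events.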