Micrometer Observability in Spring Boot: Unified Metrics and Tracing Guide


Micrometer's Observation API, introduced in Micrometer 1.10 and fully embraced by Spring Boot 3.x, has become the standard approach for instrumenting Java applications. It lets developers capture metrics, traces, and logs through a single unified API, eliminating the fragmented instrumentation patterns that plagued earlier monitoring setups.

In this guide, you will learn how to set up Micrometer observability from scratch, configure exporters for Prometheus and Grafana Tempo, create custom observations, and build production-ready dashboards. By the end, you will have a complete observability pipeline that correlates metrics with distributed traces automatically.

Why Unified Observability Matters

Traditional monitoring required separate libraries for metrics (Micrometer), tracing (Brave/OpenTelemetry), and logging (SLF4J). Each had its own configuration, context propagation, and export pipeline. Consequently, correlating a latency spike in your metrics dashboard with the specific trace that caused it was a manual and error-prone process.

The Observation API solves this by providing a single entry point. When you create an observation, it automatically generates both a timer metric and a trace span. Furthermore, it propagates context so that log statements within the observation include the trace ID. This means one line of instrumentation code produces three signals.
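To make that correlation visible in your logs, Spring Boot 3.1+ supports a correlation pattern that pulls the trace and span IDs from the MDC. A minimal sketch for application.yml, modeled on Boot's default correlation format:

```yaml
# application.yml — prefix every log line with the trace and span IDs
logging:
  pattern:
    correlation: "[${spring.application.name:},%X{traceId:-},%X{spanId:-}] "
```

With this in place, a log statement emitted inside an observation carries the same trace ID as the span it belongs to, so you can pivot from a log line to the full trace.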

[Figure: unified observability dashboard correlating metrics with distributed traces]

Setting Up the Observability Stack

Start by adding the necessary dependencies to your Spring Boot 3.x project. The key dependency is micrometer-tracing-bridge-otel, which bridges Micrometer's Observation API to OpenTelemetry for trace export. If you use the Spring Boot parent POM or BOM, the versions of these artifacts are managed for you.

<dependencies>
    <!-- Core observability -->
    <dependency>
        <groupId>io.micrometer</groupId>
        <artifactId>micrometer-observation</artifactId>
    </dependency>

    <!-- Tracing bridge to OpenTelemetry -->
    <dependency>
        <groupId>io.micrometer</groupId>
        <artifactId>micrometer-tracing-bridge-otel</artifactId>
    </dependency>

    <!-- Export traces to Zipkin/Tempo -->
    <dependency>
        <groupId>io.opentelemetry</groupId>
        <artifactId>opentelemetry-exporter-zipkin</artifactId>
    </dependency>

    <!-- Export metrics to Prometheus -->
    <dependency>
        <groupId>io.micrometer</groupId>
        <artifactId>micrometer-registry-prometheus</artifactId>
    </dependency>

    <!-- Spring Boot Actuator -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
</dependencies>

Next, configure the application properties to enable observation and trace export:

# application.yml
management:
  observations:
    key-values:
      application: order-service
  tracing:
    sampling:
      probability: 1.0  # 100% in dev, lower in prod
  endpoints:
    web:
      exposure:
        include: health,prometheus,metrics
  metrics:
    distribution:
      percentiles-histogram:
        http.server.requests: true
    tags:
      application: order-service

spring:
  application:
    name: order-service
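On the collection side, Prometheus needs a scrape job pointing at the Actuator endpoint exposed above. A minimal sketch, assuming the service is reachable at localhost:8080:

```yaml
# prometheus.yml — scrape the Spring Boot Actuator endpoint
scrape_configs:
  - job_name: "order-service"
    metrics_path: "/actuator/prometheus"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8080"]
```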

Creating Custom Observations

The Observation API lets you instrument business logic with a clean, fluent interface. Each observation automatically creates a timer metric and a trace span. Additionally, you can attach key-value pairs that appear as both metric tags and span attributes.

@Service
public class PaymentService {

    private final ObservationRegistry registry;
    private final PaymentGateway gateway;

    public PaymentService(ObservationRegistry registry, PaymentGateway gateway) {
        this.registry = registry;
        this.gateway = gateway;
    }

    public PaymentResult processPayment(PaymentRequest request) {
        Observation observation = Observation.createNotStarted("payment.process", registry)
            .contextualName("process-payment")
            .lowCardinalityKeyValue("payment.method", request.getMethod().name())
            .lowCardinalityKeyValue("currency", request.getCurrency())
            .highCardinalityKeyValue("order.id", request.getOrderId());

        return observation.observe(() -> {
            // Validate payment details
            gateway.validate(request);

            // Charge the payment method
            PaymentResult result = gateway.charge(request);

            // Record the outcome as an observation event (attached, not discarded)
            if (result.isSuccessful()) {
                observation.event(Observation.Event.of("payment.success"));
            }
            return result;
        });
    }
}

The lowCardinalityKeyValue method creates values that appear as both metric tags and span attributes (bounded values like payment method), while highCardinalityKeyValue creates span-only attributes (unbounded values like order IDs). This distinction is critical because high-cardinality tags would explode your metrics storage.
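A quick back-of-the-envelope calculation shows the stakes; the numbers below are illustrative, not taken from the example service:

```java
public class CardinalityDemo {
    public static void main(String[] args) {
        // Low-cardinality tags: the series count is the product of the tag value sets
        int paymentMethods = 5;   // e.g. CARD, PAYPAL, BANK, WALLET, CRYPTO
        int currencies = 3;       // e.g. USD, EUR, GBP
        int lowCardinalitySeries = paymentMethods * currencies;
        System.out.println("Series with bounded tags: " + lowCardinalitySeries);

        // If order.id were a metric tag, every new order would mint a new series
        long ordersPerDay = 1_000_000L;
        long explodedSeries = ordersPerDay * lowCardinalitySeries;
        System.out.println("Series if order.id were a tag: " + explodedSeries);
    }
}
```

Fifteen time series is nothing; fifteen million per day will take down most Prometheus installations, which is why unbounded identifiers belong on spans.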

Observation Conventions for Consistency

For larger teams, define conventions to ensure consistent naming across services, which makes standardized dashboards and alerts possible. In Spring Boot, exposing the convention as a bean is typically all that is needed for the auto-configured ObservationRegistry to apply it.

public class PaymentObservationConvention
        implements GlobalObservationConvention<PaymentObservationContext> {

    @Override
    public String getName() {
        return "payment.process";
    }

    @Override
    public KeyValues getLowCardinalityKeyValues(PaymentObservationContext ctx) {
        return KeyValues.of(
            KeyValue.of("payment.method", ctx.getMethod()),
            KeyValue.of("payment.status", ctx.getStatus()),
            KeyValue.of("region", ctx.getRegion())
        );
    }

    @Override
    public boolean supportsContext(Observation.Context context) {
        return context instanceof PaymentObservationContext;
    }
}

[Figure: distributed tracing flow across Spring Boot microservices]

Automatic HTTP Instrumentation

Spring Boot 3.x automatically instruments all HTTP server requests, as well as outgoing RestClient and WebClient calls, using the Observation API. Therefore, you get request duration metrics, error rates, and distributed trace propagation without writing any instrumentation code.

@RestController
@RequestMapping("/api/orders")
public class OrderController {

    private final OrderService orderService;
    private final RestClient restClient;

    @GetMapping("/{id}")
    public ResponseEntity<Order> getOrder(@PathVariable Long id) {
        // Automatically observed: http.server.requests metric + trace span
        Order order = orderService.findById(id);

        // RestClient calls are also auto-instrumented
        // Trace context is propagated via W3C headers
        CustomerDetails customer = restClient.get()
            .uri("http://customer-service/api/customers/{id}", order.getCustomerId())
            .retrieve()
            .body(CustomerDetails.class);

        return ResponseEntity.ok(order.withCustomer(customer));
    }
}
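One caveat: auto-instrumentation applies when the RestClient is built from Spring's auto-configured RestClient.Builder, which carries the ObservationRegistry; a client created directly with RestClient.create() bypasses it. A wiring sketch (the bean name and base URL are illustrative):

```java
@Configuration
public class HttpClientConfig {

    // The injected builder is pre-configured by Spring Boot with the
    // ObservationRegistry, so clients built from it are observed.
    @Bean
    RestClient customerRestClient(RestClient.Builder builder) {
        return builder
            .baseUrl("http://customer-service")
            .build();
    }
}
```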

Building Production Dashboards

With metrics flowing to Prometheus and traces to Grafana Tempo, you can build dashboards that correlate the two. The key is using exemplars — metric samples that include a trace ID, allowing you to jump from a spike in a chart directly to the trace that caused it.

# docker-compose.yml — observability stack
services:
  prometheus:
    image: prom/prometheus:v2.51.0
    # exemplar-storage must be enabled for Prometheus to keep trace-ID exemplars
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--enable-feature=exemplar-storage"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  tempo:
    image: grafana/tempo:2.4.0
    command: ["-config.file=/etc/tempo.yml"]
    volumes:
      - ./tempo.yml:/etc/tempo.yml
    ports:
      - "3200:3200"   # Tempo API
      - "9411:9411"   # Zipkin receiver

  grafana:
    image: grafana/grafana:10.4.0
    environment:
      - GF_FEATURE_TOGGLES_ENABLE=traceqlEditor
    ports:
      - "3000:3000"
    volumes:
      - ./grafana-datasources.yml:/etc/grafana/provisioning/datasources/ds.yml
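The compose file mounts a grafana-datasources.yml; a provisioning sketch that links Prometheus exemplars to Tempo (the UIDs and URLs are assumptions matching the compose service names) could look like:

```yaml
# grafana-datasources.yml — provision Prometheus and Tempo
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090
    jsonData:
      # Clicking an exemplar in a chart jumps to the matching trace in Tempo
      exemplarTraceIdDestinations:
        - name: trace_id
          datasourceUid: tempo
  - name: Tempo
    uid: tempo
    type: tempo
    url: http://tempo:3200
```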

Advanced: Custom ObservationHandler

For specialized requirements, you can create custom observation handlers that react to observation lifecycle events. This is useful for audit logging, custom SLI calculations, or sending events to external systems.

@Component
public class AuditObservationHandler
        implements ObservationHandler<PaymentObservationContext> {

    private final AuditLog auditLog;

    public AuditObservationHandler(AuditLog auditLog) {
        this.auditLog = auditLog;
    }

    @Override
    public void onStart(PaymentObservationContext context) {
        // Stash the start time so onStop can compute the duration
        context.put("audit.start", System.nanoTime());
    }

    @Override
    public void onStop(PaymentObservationContext context) {
        long startNanos = context.getOrDefault("audit.start", System.nanoTime());
        // The tracing handler stores its state in the context; guard against it being absent
        var tracing = context.get(TracingObservationHandler.TracingContext.class);
        auditLog.record(AuditEntry.builder()
            .action("payment.processed")
            .status(context.getStatus())
            .traceId(tracing != null ? tracing.getSpan().context().traceId() : "none")
            .duration(Duration.ofNanos(System.nanoTime() - startNanos))
            .build());
    }

    @Override
    public boolean supportsContext(Observation.Context context) {
        return context instanceof PaymentObservationContext;
    }
}

When NOT to Use Micrometer Observations

While the Observation API is powerful, there are scenarios where it adds unnecessary overhead. Avoid wrapping extremely hot loops or CPU-bound computations — the observation overhead, though small, adds up at millions of iterations per second. Similarly, avoid using observations for simple in-memory operations where a basic counter or gauge would suffice.

Furthermore, if you are using a non-Spring framework that already has OpenTelemetry auto-instrumentation (like Quarkus or Micronaut), adding Micrometer observations on top can create duplicate telemetry data. In those cases, stick with the framework’s native instrumentation.

[Figure: production monitoring dashboard with correlated metrics and traces]

Key Takeaways

  • Micrometer observability Spring Boot integration unifies metrics, traces, and logs through a single Observation API
  • Use lowCardinalityKeyValue for metrics tags and highCardinalityKeyValue for trace attributes to avoid cardinality explosions
  • Spring Boot 3.x auto-instruments HTTP requests and RestClient calls — custom observations are only needed for business logic
  • Observation conventions ensure consistent naming across microservices, which is essential for standardized dashboards
  • Exemplars bridge the gap between metrics dashboards and individual traces for fast root-cause analysis
