Strangler Fig Pattern: Safely Migrating from Monolith to Microservices


Strangler fig migration is the safest approach to decomposing a monolith into microservices. Named after the strangler fig tree that gradually envelops and replaces its host tree, this pattern incrementally routes functionality from the monolith to new services. At no point does the system require a big-bang cutover — the monolith continues to handle existing features while new services take over piece by piece.

This guide provides a practical implementation plan covering request routing, feature extraction, data migration, and rollback strategies. Whether your monolith is 5 years or 15 years old, the strangler fig pattern provides a risk-managed path to a microservices architecture.

Why Strangler Fig Over Big-Bang Rewrite

Big-bang rewrites have a poor track record: industry surveys consistently report high failure rates for large-scale rewrites. The strangler fig pattern succeeds because it delivers value incrementally, maintains a working system at every step, and allows rollback if problems occur. Moreover, the team learns microservices patterns on low-risk extractions before tackling complex domain boundaries.

(Diagram: the strangler fig pattern gradually routes traffic from the monolith to new microservices)
Strangler Fig Migration Timeline

Phase 1 (Month 1-2):
[Monolith: 100%] ←──── All traffic
[Facade/Proxy]

Phase 2 (Month 3-6):
[Monolith: 80%]  ←──── Most traffic
[Service A: 10%] ←──── Extracted feature
[Service B: 10%] ←──── Extracted feature
[Facade/Proxy]   ←──── Routes by path

Phase 3 (Month 7-12):
[Monolith: 30%]  ←──── Legacy only
[Service A: 15%]
[Service B: 15%]
[Service C: 20%]
[Service D: 20%]
[Facade/Proxy]

Phase 4 (Month 12-18):
[Monolith: 0%]   ←──── Decommissioned
[Microservices: 100%]

Setting Up the Strangler Facade

The facade is a reverse proxy that routes requests either to the monolith or to new services. Additionally, it enables gradual traffic shifting and A/B testing during migration:

# nginx.conf — Strangler facade configuration
upstream monolith {
    server monolith.internal:8080;
}
upstream user_service {
    server user-service.internal:8080;
}
upstream order_service {
    server order-service.internal:8080;
}

# Feature flags for gradual migration: the cookie lets testers opt in
map $cookie_migration_flags $user_backend {
    "~*user_v2"  user_service;
    default      monolith;
}

# Canary: route 10% of clients to the new order service.
# Note: split_clients is valid only at http level, not inside a location.
split_clients "${remote_addr}" $order_backend {
    10%    order_service;
    *      monolith;
}

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    # In progress: User service (cookie-based opt-in via feature flag)
    location /api/users {
        proxy_pass http://$user_backend;
    }

    # In progress: Order service (10% canary)
    location /api/orders {
        proxy_pass http://$order_backend;
    }

    # Not yet migrated: everything else
    location / {
        proxy_pass http://monolith;
    }
}
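When the proxy cannot do the split, the same percentage-based canary can live in application code. A minimal sketch: bucket each client by a hash of its IP so the same client always lands on the same backend (nginx's split_clients uses MurmurHash2 internally; MD5 here is an illustrative stand-in, and the backend names are the ones from the config above):

```python
import hashlib

def choose_backend(client_ip: str, canary_pct: int = 10) -> str:
    """Deterministically bucket a client into the canary or stable backend."""
    digest = hashlib.md5(client_ip.encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100  # stable 0-99 bucket
    return "order_service" if bucket < canary_pct else "monolith"
```

Because the bucket is derived from the client IP rather than chosen at random per request, a given user sees a consistent backend for the whole session.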

Programmatic Routing with Feature Flags

@RestController
@RequestMapping("/api")
public class StranglerFacadeController {

    private static final Logger log =
        LoggerFactory.getLogger(StranglerFacadeController.class);

    private final MonolithClient monolith;
    private final UserServiceClient userService;
    private final FeatureFlagService flags;
    private final MetricsClient metrics;

    public StranglerFacadeController(MonolithClient monolith,
            UserServiceClient userService,
            FeatureFlagService flags,
            MetricsClient metrics) {
        this.monolith = monolith;
        this.userService = userService;
        this.flags = flags;
        this.metrics = metrics;
    }

    @GetMapping("/users/{id}")
    public ResponseEntity<?> getUser(
            @PathVariable String id,
            HttpServletRequest request) {

        // extractContext builds the flag-evaluation context
        // (user id, cookies, headers) from the incoming request
        if (flags.isEnabled("user-service-v2",
                extractContext(request))) {
            try {
                var user = userService.getUser(id);
                return ResponseEntity.ok(user);
            } catch (Exception e) {
                log.error("New service failed, falling back", e);
                metrics.increment("strangler.fallback.user");
                // Fallback to monolith on failure
                return monolith.forwardRequest(request);
            }
        }

        return monolith.forwardRequest(request);
    }
}

Data Migration Strategies

Data extraction is the hardest part of the strangler fig pattern. Choose a strategy based on your consistency requirements:


1. Shared Database (Simplest, temporary)
   Monolith + New Service → Same DB
   Pro: No data sync needed
   Con: Couples services via schema

2. Database View (Medium complexity)
   Monolith DB → Views → New Service reads
   Pro: Read isolation without data copy
   Con: Write still goes to monolith

3. Change Data Capture (Recommended)
   Monolith DB → Debezium → Kafka → New Service DB
   Pro: Real-time sync, decoupled
   Con: Eventual consistency

4. Dual Write (Risky, avoid if possible)
   Write to both DBs simultaneously
   Pro: Real-time consistency
   Con: Distributed transaction problems
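The dual-write hazard in option 4 is easy to demonstrate: if the second write fails after the first has committed, the two stores silently diverge and nothing reconciles them. A minimal sketch, with in-memory dicts standing in for the two databases:

```python
class SecondaryDown(Exception):
    """Simulates the second database being unavailable."""

def dual_write(primary: dict, secondary: dict, key, value, secondary_up=True):
    """Write to both stores; no transaction spans them."""
    primary[key] = value              # first write commits immediately
    if not secondary_up:
        raise SecondaryDown("secondary write failed")
    secondary[key] = value            # may never happen

primary, secondary = {}, {}
try:
    dual_write(primary, secondary, "user:1", {"name": "Ada"}, secondary_up=False)
except SecondaryDown:
    pass
# primary now contains user:1 while secondary does not — and no
# component is responsible for detecting or repairing the drift
```

CDC avoids this class of problem because there is only ever one write path; the second store is populated asynchronously from the database's own change log.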

# Debezium connector for CDC-based data migration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: monolith-user-cdc
  labels:
    # Must match the name of your KafkaConnect cluster
    strimzi.io/cluster: connect-cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  config:
    database.hostname: monolith-db.internal
    database.port: "5432"
    database.user: debezium
    database.password: "${file:/secrets/db-password}"
    database.dbname: monolith
    table.include.list: "public.users,public.user_profiles"
    topic.prefix: "monolith.cdc"
    plugin.name: pgoutput
    slot.name: user_migration
    publication.name: user_tables
    transforms: route
    transforms.route.type: org.apache.kafka.connect.transforms.RegexRouter
    # Single-quoted so YAML does not interpret the backslash escapes
    transforms.route.regex: 'monolith\.cdc\.public\.(.*)'
    transforms.route.replacement: "user-service.migration.$1"
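On the consuming side, the new service replays each change event into its own store. A minimal sketch of the apply logic, assuming Debezium's standard envelope (op codes c/u/d/r with before/after row images); the Kafka consumer plumbing is omitted, and the dict stands in for the new service's database:

```python
def apply_change_event(store: dict, event: dict) -> None:
    """Apply one Debezium change envelope to a store keyed by primary key."""
    op = event["op"]  # c=create, u=update, d=delete, r=snapshot read
    if op in ("c", "u", "r"):
        row = event["after"]
        store[row["id"]] = row                   # idempotent upsert
    elif op == "d":
        store.pop(event["before"]["id"], None)   # tolerate replayed deletes

# Replaying a create followed by a delete leaves the store empty
users = {}
apply_change_event(users, {"op": "c", "after": {"id": 1, "email": "a@x.io"}})
apply_change_event(users, {"op": "d", "before": {"id": 1}})
```

Because upserts and deletes are idempotent, the consumer can safely reprocess a topic from an earlier offset after a crash, which is exactly the at-least-once delivery behavior Kafka provides.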
(Diagram: Change Data Capture provides real-time data synchronization during migration)

Verification and Rollback

Every extraction must include verification. Before shifting more traffic, run the monolith and new service in parallel and compare results:

import aiohttp
from datetime import datetime


class ParallelVerifier:
    """Compare monolith and new service responses."""

    def __init__(self, monolith_url, service_url):
        self.monolith = monolith_url
        self.service = service_url
        self.mismatches = []

    async def call(self, url, method):
        """Issue the request and return the parsed JSON body."""
        async with aiohttp.ClientSession() as session:
            async with session.request(method, url) as resp:
                return await resp.json()

    async def verify_endpoint(self, path, method="GET"):
        """Call both services and compare responses."""
        mono_resp = await self.call(self.monolith + path, method)
        svc_resp = await self.call(self.service + path, method)

        if not self.responses_match(mono_resp, svc_resp):
            self.mismatches.append({
                "path": path,
                "monolith": mono_resp,
                "service": svc_resp,
                "timestamp": datetime.now().isoformat(),
            })
            return False
        return True

    def responses_match(self, a, b):
        """Compare responses, ignoring non-significant fields."""
        ignore_fields = {"timestamp", "requestId", "version"}
        a_clean = {k: v for k, v in a.items()
                   if k not in ignore_fields}
        b_clean = {k: v for k, v in b.items()
                   if k not in ignore_fields}
        return a_clean == b_clean
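The rollback half of the loop can be automated off the mismatch counts the verifier collects: once the mismatch rate crosses a threshold, flip the feature flag back to the monolith. A minimal sketch of the decision rule (the threshold and minimum sample size are illustrative defaults, not prescriptions):

```python
def should_roll_back(mismatches: int, total: int,
                     threshold: float = 0.01, min_samples: int = 100) -> bool:
    """Recommend rollback once the mismatch rate exceeds the threshold."""
    if total < min_samples:
        return False  # not enough samples to judge reliably
    return (mismatches / total) > threshold

# usage: if should_roll_back(len(verifier.mismatches), total_checked),
# disable the feature flag so the facade routes back to the monolith
```

Requiring a minimum sample size prevents a single early mismatch from triggering a rollback before the comparison has statistical weight.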

When NOT to Use Strangler Fig

The strangler fig pattern assumes a request-response architecture with clear API boundaries. If your monolith is a batch processing system with complex data pipelines and no HTTP endpoints, alternative decomposition strategies may be more appropriate. Additionally, if the monolith will be completely rewritten in a different technology stack (for example, moving from COBOL to Java), the overhead of maintaining two systems during migration may not be justified. Finally, for very small monoliths that a team can rewrite in 2-3 months, the incremental approach adds unnecessary complexity.

(Diagram: evaluate the monolith's architecture before choosing a decomposition strategy)

Key Takeaways

  • The strangler fig migration pattern eliminates the risk of big-bang rewrites by extracting services incrementally
  • A reverse proxy facade routes traffic between the monolith and new services based on path, percentage, or feature flags
  • Change Data Capture with Debezium provides the safest data migration strategy with real-time synchronization
  • Parallel verification ensures functional parity before decommissioning monolith features
  • Start with the simplest, most isolated bounded context to build team confidence before complex extractions
