This article and the attached source code should be used as reference only. Please thoroughly test your implementation before making changes to a production environment.


The world of microservices is vast. Because of its wide spectrum, the concepts vary a lot too: the same set of tasks can be done with countless combinations of tech stacks while building microservices. For the platform, we can go completely serverless or run our own servers. For the tech stack, we can choose Java, Scala, Kotlin, Python, or Ruby. While building the app, we can have as many of the 12 factors in place as we need. For the pipeline, we can use Jenkins, CircleCI, Travis, and others. To orchestrate the cloud, we can use Ansible, Chef, the cloud providers' own formation templates, and so on. The list is huge, but what all of this ultimately comes down to is how effectively we implement the concepts of microservices. At the end of the day, the tech stack doesn't matter to the business and has no value if it is not delivered in the right proportion at the right time.

What we cover here

This document intends to look into various tools and techniques that can be leveraged to migrate to the cloud. Although these design patterns are written with PCF in mind, they are also valid for any deployment platform of interest.

Points marked POC Required are the areas where I will try to do a POC and update this blog with new threads.

As per microservice guru Chris Richardson, the patterns fall into the categories below:


  • Decomposition Patterns
  • Communication Patterns
  • Containerization Patterns
  • Transactional Patterns
  • Migration Patterns
  • Security Patterns
  • Observable Patterns
  • Deployment patterns

Decomposition Patterns

In my view, decomposition can be approached in two major ways:

  • A complete lift and shift (with partially 12 factor enabled)
  • Strangling and Incremental way

Lift and Shift

Lift and shift can only be done if the app is already isolated and has a database that is not shared across services.

We need to ask these questions while evaluating the app:

  • Is this application a Spring Boot app?
  • Run cf push in the root directory of the legacy app. Can the errors be converted into story points?
  • Can this application be containerized?


Strategy 2 can be applied to large-scale applications that can be decomposed using DDD techniques and need to be strangled into microservices progressively.

  • Do event storming with the business operators and use SNAP analysis. Come up with the core events under each bounded context.
  • Extract the domain aggregate with the fewest transactions.
  • Aim for the "read" aspects of the separated domain first and turn them into a microservice. This simplifies things, as we don't have to deal with the state of the system immediately.

So MVP 0 will try to cover the following:

  • For the schema, use the strategy described in the db migration patterns below.
  • For OLTP data, sync the legacy schema to the de-normalized view of the microservice (CQRS; use techniques from event sourcing).
  • Expose a single endpoint at the microservice and query the de-normalized view using Java streams.
  • Write an adapter in the monolith's ACL that queries the microservice endpoint and reconciles with the existing read logic; reconciliation happens on the aggregate id and the monolith's reference entity object.
  • Use a DevOps strategy with a canary model and run the existing test cases.
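To make the read side above concrete, here is a minimal sketch of the single read endpoint querying a de-normalized view with plain Java streams. The class, record, and field names are illustrative assumptions (not from the original article), and an in-memory list stands in for the view store.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of the MVP 0 "read" microservice: it holds a de-normalized view
// (here an in-memory list standing in for the view store) and answers
// queries with plain Java streams, so no joins against the monolith are needed.
public class DeliveryReadService {

    // One row of the de-normalized view, keyed by the monolith aggregate id.
    public record DeliveryView(String aggregateId, String status, String region) {}

    private final List<DeliveryView> view;

    public DeliveryReadService(List<DeliveryView> view) {
        this.view = view;
    }

    // Single read endpoint: filter/group the view with streams.
    public Map<String, Long> countByStatus(String region) {
        return view.stream()
                .filter(v -> v.region().equals(region))
                .collect(Collectors.groupingBy(DeliveryView::status, Collectors.counting()));
    }

    // Reconciliation hook for the monolith's ACL adapter: look up by aggregate id.
    public DeliveryView byAggregateId(String id) {
        return view.stream()
                .filter(v -> v.aggregateId().equals(id))
                .findFirst()
                .orElse(null);
    }
}
```

The aggregate-id lookup is what the monolith-side adapter would call to reconcile the microservice's answer against its own reference entity.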


Then, for MVP 1:

  • Start migrating the writes to the microservices.
  • Store states in the microservices, not the data (see event sourcing).
  • Expose a write endpoint in the microservice to store the event streams.
  • Continue to use the de-normalized view for querying (see event sourcing). This time, stop listening to the events (from the bridge queue) that are exposed directly from the microservices.
  • Use a DevOps strategy with a canary model and run the existing test cases.
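"Store states, not data" can be sketched as an append-only event stream from which the current state is derived by replay. This is a minimal illustration with assumed event names, not the article's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the event-sourced write side: the write endpoint appends domain
// events to a stream, and the current state of an aggregate is recovered by
// folding over its events rather than updating a row in place.
public class DeliveryEventStore {

    public record Event(String aggregateId, String type) {}

    private final List<Event> stream = new ArrayList<>();

    // Write endpoint body: append-only, never update in place.
    public void append(Event e) {
        stream.add(e);
    }

    // Replay the stream to derive the current status of one aggregate.
    public String currentStatus(String aggregateId) {
        String status = "UNKNOWN";
        for (Event e : stream) {
            if (e.aggregateId().equals(aggregateId)) {
                status = switch (e.type()) {
                    case "DeliveryCreated" -> "PENDING";
                    case "DeliveryDispatched" -> "IN_TRANSIT";
                    case "DeliveryCompleted" -> "DELIVERED";
                    default -> status;
                };
            }
        }
        return status;
    }
}
```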

Containerization Patterns

  • PCF v2.5 and later allows the use of Docker images as an alternative to PAS Linux containers.
  • We should be able to containerize our application and deploy it to PCF. This also helps future-proof us for a PKS migration.
  • Configure Docker registry access for both public and private registries. POC Required
  • If we use our own containers, PCF will not monitor health checks or drain logs/metrics out to the Traffic Controller; we need to take care of this ourselves. POC Required

Communication patterns

For PCF, container-to-container networking should be enabled on the platform.

The Spring Cloud Services direct registration method allows containers to connect to each other using internal IPs directly. POC Required

Apart from this, we need to leverage the following communication patterns:

  1. Request-response over AMQP using the Spring Cloud AMQP starter POC Required
  2. Use Spring Cloud Stream for request-response (use channels) POC Required
          as a pub-sub model
          as a message broker with load balancing
          with a dead-letter queue
          with an error queue
  3. Use Spring Cloud Stream reactive with Spring Cloud Function POC Required [Source, Processor and Sink]
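The request-response-over-a-broker idea in point 1 can be sketched without a running broker. The sketch below substitutes in-memory `BlockingQueue`s for RabbitMQ queues; what carries over to the AMQP case is the correlation id and the private reply queue, which is how a synchronous call is simulated over asynchronous messaging.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Function;

// Sketch of request-response over a message broker: the requester publishes a
// message tagged with a correlation id and blocks on a private reply queue;
// the responder consumes the request and replies on that queue.
public class RpcOverQueue {

    public record Message(String correlationId, String payload) {}

    private final BlockingQueue<Message> requestQueue = new LinkedBlockingQueue<>();
    private final Map<String, BlockingQueue<Message>> replyQueues = new ConcurrentHashMap<>();

    // Responder side: consume one request and publish the reply.
    public void serveOne(Function<String, String> handler) {
        try {
            Message req = requestQueue.take();
            replyQueues.get(req.correlationId())
                    .put(new Message(req.correlationId(), handler.apply(req.payload())));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    // Requester side: register a reply queue, publish, then block for the answer.
    public String call(String payload) {
        String correlationId = java.util.UUID.randomUUID().toString();
        BlockingQueue<Message> reply = new LinkedBlockingQueue<>();
        replyQueues.put(correlationId, reply);
        try {
            requestQueue.put(new Message(correlationId, payload));
            return reply.take().payload();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }
}
```

With Spring AMQP the same shape is what `RabbitTemplate`'s send-and-receive support provides; this sketch only makes the mechanics visible.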

Transactional patterns

Saga orchestration and choreography are the two key patterns in distributed transactions. Below are the POCs we should look at, taking PCF as the platform.

  • Choreography pattern using a message broker (non-Spring app) with topic exchanges, fanout exchanges, domain events and error queues POC Required
  • Choreography pattern using a message channel (Spring Cloud Stream) with the source, processor and sink model POC Required
  • Orchestration hybrid pattern (Spring commander + backends in other languages) with the BPM tool Camunda POC Required
  • Spring pipeline using a reactive producer, function and subscriber (Java lambda functions) with Spring Cloud Stream and Spring Cloud Function POC Required
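To show what the choreography POCs are aiming at, here is a minimal sketch of a saga with no central orchestrator: each service subscribes to domain events and emits the next one, and a failure triggers a compensating event. The broker is reduced to an in-memory dispatcher, and the service and event names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of saga choreography: order -> payment -> delivery, with a
// compensating OrderCancelled event when payment fails. Each subscribe()
// call stands in for one service's event listener.
public class ChoreographySaga {

    public final List<String> log = new ArrayList<>();
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String eventType, Consumer<String> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    public void publish(String eventType, String payload) {
        log.add(eventType);
        for (Consumer<String> h : subscribers.getOrDefault(eventType, List.of())) h.accept(payload);
    }

    // Wire up the reactions; there is deliberately no component that "drives" the saga.
    public void wire() {
        subscribe("OrderCreated", order -> {
            boolean paymentOk = !order.contains("badCard");
            publish(paymentOk ? "PaymentAccepted" : "PaymentFailed", order);
        });
        subscribe("PaymentAccepted", order -> publish("DeliveryScheduled", order));
        subscribe("PaymentFailed", order -> publish("OrderCancelled", order)); // compensation
    }
}
```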

Migration patterns

Migrating from a monolith can be quite complex. As an example, in the scenario below I tried extracting a delivery service from a monolith into a microservice, and this is what I came up with.

MicroService patterns_mvp0_read.jpeg

I will try to explain the diagram above in the following sections, zooming into specific areas of concern.

My understanding is that db migration has the following phases:

1. Identify the schema for the normalized database and the event sourcing database for the microservices. We need to event-source from the monolith during the migration phase of events.

For this we use Flyway to start versioning the schema for the new dbs we create. POC Required This will help us keep our db versioned and also stop it from being corrupted by the monolith.

Also, the event store is better off being SQL, as we need to do a lot of aggregation on top of this data and update the de-normalized view with current data.

2. Create data flow pipelines to migrate historical data POC Required

MicroService patterns_historical_data.jpg

While migrating the db to the microservices we should be careful about the schemas we migrate. We then need to get the historical data using a data pipeline with a pull function.

As an alternative to Spring Cloud Data Flow, we can use Akka Streams and Scala Slick to periodically pull data and publish the streams to a queue.

MicroService patterns_historical_data_scala.jpg

One more polling alternative is Speedment, which exposes Java native stream APIs; use that as the source for the queue, and the rest of the flow remains the same.
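Whichever tool runs the pull (Spring Cloud Data Flow, Akka Streams + Slick, or Speedment), the core of the pull function is a watermark: fetch the rows past the last id you shipped, publish them, advance the watermark. A minimal sketch, with an in-memory list standing in for the monolith table:

```java
import java.util.List;
import java.util.Queue;
import java.util.function.Function;

// Sketch of the pull-based historical-data pipeline: each pass reads the next
// batch of rows beyond the watermark and publishes them to the queue feeding
// the microservice's de-normalized view. Row shape and names are illustrative.
public class HistoricalDataPuller {

    public record Row(long id, String data) {}

    private long watermark = 0;

    // Pull every row with id greater than the watermark, publish, advance.
    public int pullOnce(Function<Long, List<Row>> fetchAfter, Queue<Row> sink) {
        List<Row> batch = fetchAfter.apply(watermark);
        for (Row r : batch) {
            sink.add(r);
            watermark = Math.max(watermark, r.id());
        }
        return batch.size();
    }
}
```

A second pass with no new rows publishes nothing, which is what makes repeated polling of historical data safe.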

3. Create data flow pipelines for real-time replication of data from the monolith to the microservices (for the events which are yet to migrate to the microservices) POC Required

MicroService patterns_real_time_streams.jpg

We need to do some kind of event streaming. A good option is to use a db (like Mongo) that supports this feature out of the box. Polling in real time is resource intensive and should be avoided.

  • Create a custom table in the monolith's db with event id, event data, created date and a processed flag.
  • Update this table transactionally whenever there is an update/insert in the monolith's persistence layer.
  • Use Spring Cloud Data Flow and do Source -> Processor -> Sink (ETL) into the microservice's de-normalized view.

We could also have created an adapter so that, inside a transaction, the monolith sends the event directly to the queue. But going by Richardson's patterns, adding any new functionality to the existing system should be avoided during the transition phase. If the monolith already has an AMQP mechanism available we can reuse it; otherwise we shouldn't implement one, and should follow the pattern above.
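The custom event table works like a transactional outbox: the monolith writes the business row and the event row in the same transaction, and a separate poller ships unprocessed rows downstream and flips the processed flag. A minimal sketch, modeling the table as an in-memory list:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the custom event table (outbox-style): recordEvent() runs inside
// the monolith's transaction alongside the business insert/update, and
// pollAndMark() is what the pipeline's Source stage does on each pass.
public class OutboxPoller {

    public static class OutboxRow {
        final long eventId;
        final String eventData;
        boolean processed;
        OutboxRow(long eventId, String eventData) {
            this.eventId = eventId;
            this.eventData = eventData;
        }
    }

    private final List<OutboxRow> table = new ArrayList<>();

    // Called inside the same transaction as the monolith's persistence write.
    public void recordEvent(long eventId, String eventData) {
        table.add(new OutboxRow(eventId, eventData));
    }

    // Poller pass: publish unprocessed rows in order, then mark them processed.
    public List<String> pollAndMark() {
        List<String> published = new ArrayList<>();
        for (OutboxRow row : table) {
            if (!row.processed) {
                published.add(row.eventData); // stands in for a queue publish
                row.processed = true;
            }
        }
        return published;
    }
}
```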

Once we start moving the events that incur a state change, we need to modify the migration pipeline so that it filters out the events which are available on the new microservice for direct consumption.

MicroService patterns_mvp1_state_change.jpeg

4. Create cloud streams for the CQRS pattern inside the microservice POC Required The important thing to note here is that the event insert and the notification to the event queue should happen transactionally.

MicroService patterns_CQRS.jpg
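The transactional append-and-notify requirement can be sketched as follows. A `synchronized` block stands in for the database transaction (in a real system it would be the same DB transaction, or the outbox pattern shown earlier), and a projector drains the queue into the read-side view; all names are illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the CQRS write path: the event append and the queue notification
// succeed or fail together, and a projector refreshes the de-normalized view.
public class CqrsWritePath {

    public record Event(String aggregateId, String type) {}

    private final List<Event> eventStore = new ArrayList<>();
    private final List<Event> eventQueue = new ArrayList<>();
    public final Map<String, Integer> denormalizedView = new HashMap<>(); // aggregateId -> event count

    // Append + notify atomically; neither happens without the other.
    public synchronized void handleCommand(Event e) {
        eventStore.add(e);
        eventQueue.add(e);
    }

    // Projector: drain the queue and update the read-side view.
    public synchronized void project() {
        for (Event e : eventQueue) denormalizedView.merge(e.aggregateId(), 1, Integer::sum);
        eventQueue.clear();
    }
}
```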

Regarding the data flow pipelines

While creating our pipelines we can make use of reactive streams; Spring Boot 2.2 and later supports Java streams natively as Spring Cloud Functions. This will help stream data as fluxes and utilize threads without blocking. POC Required

Both the imperative and reactive styles use Spring Cloud Stream, which handles all the boilerplate coding.
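The Source -> Processor -> Sink shape these pipelines share can be expressed with plain `java.util.function` types, which is the programming model Spring Cloud Function binds to channels. This sketch approximates Reactor's `Flux` with `java.util.stream.Stream` for brevity, so it is the shape rather than the non-blocking runtime that is shown.

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;
import java.util.stream.Stream;

// Sketch of a Source -> Processor -> Sink pipeline using plain functional
// types: a Supplier produces records, a Function transforms them, and a
// Consumer writes them to the target.
public class PipelineSketch {

    public static List<String> run(Supplier<Stream<String>> source,
                                   Function<String, String> processor,
                                   List<String> sinkTarget) {
        Consumer<String> sink = sinkTarget::add;       // Sink: consume records
        source.get().map(processor).forEach(sink);     // Source -> Processor -> Sink
        return sinkTarget;
    }
}
```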

Querying the denormalized view db

For reads, use a tool like Slick or NoSQL db streams, and use an off-heap location to load data into the JVM for faster retrieval. Also, because the read side is totally isolated from the writes, we can scale it to multiple replicas for better availability.

Anti-patterns: We shouldn't lift and shift the data (data migration with the existing schema); it might lead to a lift-and-shift instead of a decomposition. Decomposing the db for CQRS is necessary, and this is the right time for it. Use the de-normalized view and the aggregate id as the reference to the monolith tables.

Security patterns

Tokens generated at the SCG should be relayed to the downstream services.

  • Use token relay to relay the token to the downstream services, so that individual microservices do not create tokens on their own. POC Required
  • Use JWT signature validation to validate incoming token requests natively POC Required (this will reduce remote token validation between the resource server and the authorization server, and hence reduce network traffic). Resolve the UserDetailsService from the Spring authentication token. Enable client registration so that clients can be validated.
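Native (local) validation means the resource server recomputes the token's signature itself instead of calling the authorization server. A real setup would use Spring Security's `JwtDecoder`, and usually RS256 with public keys; the stdlib-only sketch below shows just the HS256 signature check on a shared secret, with a helper to mint a demo token.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;

// Sketch of local JWT validation: recompute the HS256 signature over the
// header.payload signing input and compare it with the token's third segment.
public class JwtLocalCheck {

    private static byte[] hmac(String input, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            return mac.doFinal(input.getBytes(StandardCharsets.UTF_8));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Resource-server side: no network call to the authorization server needed.
    public static boolean signatureValid(String jwt, String secret) {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) return false;
        String expected = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(hmac(parts[0] + "." + parts[1], secret));
        return expected.equals(parts[2]);
    }

    // Demo helper: mint a signed token (header.payload.signature, base64url).
    public static String sign(String headerJson, String payloadJson, String secret) {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String signingInput = enc.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8))
                + "." + enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        return signingInput + "." + enc.encodeToString(hmac(signingInput, secret));
    }
}
```

Production code should also check the `exp` claim and the `alg` header, which this sketch deliberately skips.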

Logging and Monitoring

  • Use the Firehose and nozzles to get metric data streamed out POC Required
  • Generate custom gauges and use them as a scaling event from an AutoScaler trigger point POC Required
  • In case of Docker containers, we need to take care of the log drains POC Required