#microservices
22 posts

The cracking monolith: the forces that call for microservices


  • Overweight monoliths exhibit degrading system performance and stability, or slow development cycles, or both.
  • Single points of failure are typical of large monolithic apps, and when they come under pressure your team spends too much time solving technical issues instead of developing. For example: outages in non-critical data processing that bring down the whole system; all time-intensive tasks being grouped into background jobs and becoming so unstable that they need a dedicated team; or changes to one part of the system affecting others that should logically be unrelated.
  • If shipping a hotfix takes weeks or months, you have a slow-development problem. To know when it is time to break up the monolith, watch out for:
  • CI builds that take longer than 10 minutes (though a good CI tool can mitigate this by splitting or auto-sequencing tasks).
  • Slow deployment, due to many dependencies and multiple app instances, especially when containerized.
  • Slow onboarding of new staff, who may take months to become comfortable enough to make a non-trivial change to the codebase, and veteran team members becoming a bottleneck as reviewers because too many developers are waiting for their input.
  • New use cases and problems are not easily addressed with existing tools, and software updates are being put off, indicating you are dependent on outdated technology.

Full post here, 6 mins read

Break a monolith to microservices - best practices and design principles


  • Figure out how to segregate the data storage according to the constituent microservices, using a CQRS (command and query responsibility segregation) architecture so that data is not shared between microservices and is accessed only via APIs.
  • Break down the migration into steps, applying domain-driven design, rather than overhauling all repositories, deployment, monitoring, and other complex tasks at once. First, build new capabilities as microservices, then break down the monolith, starting with transforming any known pain points and troublesome gaps.
  • Allocate dedicated teams to every microservice to scale linearly and efficiently, as each team will be familiar with the nuances of its own service. Recognize this is as much a cultural shift as an operational one.
  • Pair the right technology with the right microservice for maintainability, fault tolerance, scalability, economy, and ease of deployment, and choose languages based on the team’s existing skillset.
  • Use ‘build and release’ automation to independently deploy each microservice.
  • Use a REST API so you need not install additional software or libraries and can handle multiple types of calls and data formats.
  • Isolate runtime processes with distributed computing - containerization, event architectures, HTTP management approaches, service meshes, circuit breakers, etc.
  • Distinguish between dedicated and on-demand resources, moving between them to reduce response time and deliver a superior customer experience; also reduce dependency on open-source tools.
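
The CQRS split described in the first bullet can be sketched in miniature: the write side owns its store exclusively, and the read side builds its own model from events rather than sharing data. This is an illustrative sketch, not code from the post; all names (`OrderCommandService`, `OrderQueryService`, the in-memory event bus) are hypothetical stand-ins for real services and infrastructure.

```python
from dataclasses import dataclass, field

@dataclass
class OrderWriteStore:
    """Write-side store owned exclusively by the order microservice."""
    orders: dict = field(default_factory=dict)

class OrderCommandService:
    """Handles commands (writes) and publishes events for the read side."""
    def __init__(self, store, event_bus):
        self.store = store
        self.event_bus = event_bus

    def place_order(self, order_id, item):
        self.store.orders[order_id] = item
        # Other services never touch this store directly; they react to events.
        self.event_bus.append(("order_placed", order_id, item))

class OrderQueryService:
    """Maintains its own denormalized read model, built only from events."""
    def __init__(self):
        self.read_model = {}

    def apply(self, event):
        kind, order_id, item = event
        if kind == "order_placed":
            self.read_model[order_id] = {"item": item, "status": "placed"}

    def get_order(self, order_id):
        return self.read_model.get(order_id)

# Wiring: commands go to the write side; the read side consumes the bus.
bus = []
commands = OrderCommandService(OrderWriteStore(), bus)
queries = OrderQueryService()
commands.place_order("o-1", "keyboard")
for ev in bus:
    queries.apply(ev)
```

In a real system the list standing in for the event bus would be a broker such as Kafka or RabbitMQ, and each side would have its own database, but the boundary is the same: no shared tables, only events and APIs.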

Full post here, 8 mins read

Handling distributed transactions in the microservices world


  • In the microservices context, a distributed transaction is one that spans multiple services, called in sequence, to complete a single transaction.
  • The ACID (atomicity, consistency, isolation, durability) test is challenging for distributed transactions across microservices: atomicity implies the transaction should complete or fail in its entirety, so you need to be able to roll back the whole sequence if a microservice later in the sequence returns a failure. Also, when handling concurrent requests, an object from one microservice may be persisted to the DB even as a second service reads that same object, which challenges both consistency and isolation.
  • One solution is a two-phase commit. This method splits transactions into a prepare and a commit phase, with a transaction coordinator to maintain the lifecycle of the transaction: first, all microservices involved will prepare for a commit and notify the coordinator when ready. Then the coordinator issues either a commit or rollback command to all the microservices.
  • It guarantees atomicity, allows read/write isolation (no changes to objects until the commit), and makes a synchronous call to notify the client of success or failure. However, it is a slow method and also locks database rows which can become a bottleneck and can allow two transactions to reach a deadlock.
  • Another solution is to use asynchronous local transactions for related microservices, which communicate through an event bus, guided by a separate choreographer system that listens for success and failures from the bus and chases a rollback up the sequence with a ‘compensating transaction’. This makes each microservice atomic for its transaction, hence the operation is faster, no database locks are needed and the system is highly scalable. However, it lacks read isolation. With many microservices, this is also harder to debug and maintain.
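
The compensating-transaction idea in the last bullet can be condensed into a single-process sketch: each local transaction registers an undo step, and a failure triggers the compensations in reverse order. This is an assumption-laden toy (the post describes asynchronous local transactions coordinated by a choreographer over an event bus; here the orchestration is a plain in-process loop, and the step names are invented):

```python
class Saga:
    """Runs local transactions in sequence; on failure, compensates in reverse."""
    def __init__(self):
        self.steps = []  # (action, compensation) pairs

    def add_step(self, action, compensation):
        self.steps.append((action, compensation))

    def execute(self):
        completed = []
        for action, compensation in self.steps:
            try:
                action()
                completed.append(compensation)
            except Exception:
                # Roll back the already-committed local transactions, newest first.
                for comp in reversed(completed):
                    comp()
                return False
        return True

# Illustrative order flow: reserve stock, then charge payment.
state = {"stock": 1, "charged": False}

def reserve_stock():
    state["stock"] -= 1

def release_stock():          # compensating transaction for reserve_stock
    state["stock"] += 1

def charge_payment():
    raise RuntimeError("payment declined")  # simulate a downstream failure

def refund_payment():         # compensating transaction for charge_payment
    state["charged"] = False

saga = Saga()
saga.add_step(reserve_stock, release_stock)
saga.add_step(charge_payment, refund_payment)
ok = saga.execute()
```

Note the trade-off the post describes: each step commits locally (no database locks, fast, scalable), but between the failure and the compensation another service could read the reserved stock, i.e. there is no read isolation.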

Full post here, 7 mins read

Breaking down a monolith into microservices - an integration journey


  • Before transitioning, identify the biggest pain points and boundaries in the monolithic codebase and decouple them into separate services. Rather than the size of code chunks, focus on ensuring these services can handle their business logic within the boundaries.
  • Split developers into two teams: one that continues to work on the old monolith, which is still running and even growing, and another that works on the new codebase.
  • Avoid too much decoupling as a first step; you can always break services down further later. Enable logging across the board for observation and monitoring.
  • Enforce security between microservices with mutual TLS, to restrict access by unauthorized clients even within the architecture, and an OAuth2-based security service.
  • For external clients, use an API gateway for authentication and authorization, and firewalls and/or tokens based on the type of client.
  • Secure any middleware you use as most come without credentials or a default credential. Automate security testing in your microservices deployment procedure.
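
The mutual-TLS bullet above boils down to one setting on the server side: besides presenting its own certificate, the service must also demand and verify the client's. A minimal sketch using Python's standard `ssl` module (the file paths are placeholders, not from the post):

```python
import ssl

def mtls_server_context(certfile, keyfile, client_ca):
    """Build a server TLS context that also *requires* a trusted client cert (mTLS)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)  # this service's own identity
    ctx.load_verify_locations(cafile=client_ca)              # CA that signs peer services
    ctx.verify_mode = ssl.CERT_REQUIRED                      # reject anonymous clients
    return ctx
```

With `verify_mode = ssl.CERT_REQUIRED`, the handshake fails for any caller that cannot present a certificate signed by the internal CA, which is what keeps unauthorized clients out even inside the network perimeter. In practice a service mesh (e.g. Istio or Linkerd) often handles this transparently instead of application code.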

Full post here, 5 mins read

Adopting microservices at Netflix: lessons for architectural design


  • Create a separate data store for each microservice and let the responsible team choose the DB that best suits the service. To keep different DBs in sync and consistent, add a master data management tool to find and fix inconsistencies in the background.
  • Use the immutable infrastructure principle to keep all code in a given microservice at a similar level of maturity and stability. So, if you need to add or rewrite code for a service, it is best to create a new microservice, iterate and test it until bug-free and efficient, and then merge back once it is as stable as the original.
  • You want introducing a new microservice, file, or function to be easy, not dangerous. Do a separate build for each microservice, such that it can pull in component files from the repository at the appropriate revision level. This requires careful checking before decommissioning old versions in the codebase, as different microservices may pull similar files at different revision levels.
  • Treat servers, especially those running customer-facing code, as stateless and interchangeable members of a group for easy scaling. Avoid ‘snowflake’ systems where you depend on individual servers for specialized functions.
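
The "master data management tool" in the first bullet is essentially a background job that diffs per-service stores and repairs drift. A deliberately toy sketch, with plain dicts standing in for each service's database and the field names invented for illustration:

```python
def reconcile(authoritative, replica):
    """Toy master-data-management pass: find and fix drift between two
    per-service stores (plain dicts stand in for each service's database)."""
    fixed = []
    for key, value in authoritative.items():
        if replica.get(key) != value:
            replica[key] = value  # repair the inconsistency in the background
            fixed.append(key)
    # Remove records the authoritative store no longer knows about.
    for key in list(replica):
        if key not in authoritative:
            del replica[key]
            fixed.append(key)
    return fixed

billing = {"user-1": "alice@example.com", "user-2": "bob@example.com"}
shipping = {"user-1": "alice@old.example.com", "user-3": "stale@example.com"}
changed = reconcile(billing, shipping)
```

Real reconciliation is harder (no single store may be authoritative for every field, and fixes must themselves be eventually consistent), but the shape is the same: compare in the background, repair, and report.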

Full post here, 7 mins read