From Monolith to Microservices without changing one line of code, thanks to the power of Inversion of Control (IoC)

In this article, we will explore how to transition from a monolithic architecture to a microservices architecture without refactoring any existing code. We will leverage the power of the Onion/Clean architecture to achieve this goal.

Prerequisites

In the past I have written extensively about the Onion/Clean architecture, so if you are not familiar with it, I recommend reading those articles first.

1. Start with a Monolith and the Onion/Clean Architecture

To begin, we will start with a monolithic application that follows the Onion/Clean architecture principles. This application will have a well-defined separation of concerns, with distinct layers for domain models, domain services, application services, and infrastructure.

When working on a greenfield project, you should always implement a monolith first, even if you plan to transition to microservices later. It is very hard to predict the right service boundaries upfront.

A new project carries major risks because there are many unknowns, both in the technical implementation and in the business requirements. A microservices architecture adds its own complexity on top of that: inter-service communication, data consistency, and deployment strategies. As an engineer you want to reduce your exposure to risk as much as possible. Starting with a monolith allows you to focus on building the core functionality of your application without the added complexity of managing multiple services from the outset.

Note: Service boundaries are the boundaries that define the scope of a microservice. They determine what functionality and data a microservice is responsible for.

In my case, I started working on a project that I knew I would eventually want to split into microservices, so I began with a monolith that followed the Onion/Clean architecture principles and kept the future service boundaries in mind throughout the design. In practice, this meant designing the domain models and services so that they would be easy to extract into separate services later on. Here are the rules I followed while designing the monolith:

  • Each boundary uses its own database schema (if using a relational database).

  • No database transactions, joins or foreign keys that involve multiple schemas (boundaries).

  • Calls from one boundary to another are always HTTP calls, even though in the monolith direct in-process calls would be possible. To enforce this, we don’t use URLs like http://localhost:8080/api/xxxx. We use service URLs like http://service-name:8080/api and a reverse proxy to route the requests to the correct service. In the monolith all services run in the same process, so the reverse proxy routes every request to that single process.

How does the reverse proxy work? In local development and in the monolith deployment, we use a reverse proxy (like Nginx or Traefik) that routes all service URLs (e.g., http://auth-service:8080/api, http://cms-service:8080/api) to the same monolith process. The service names are just DNS aliases that all resolve to the same host. When we transition to microservices, we simply update the reverse proxy configuration (or use Kubernetes service discovery) so that each service name resolves to its own dedicated container. The application code remains unchanged; only the routing configuration, which lives outside the application, changes.
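To make this concrete, here is a minimal sketch of what a call from one boundary to another might look like. The TENANT_SERVICE_URL constant and the getTenantById helper are hypothetical, not taken from the project; the point is that the caller only knows the service URL, never whether the target runs in the same process:

// Hypothetical client inside the auth boundary calling the tenant boundary.
// The base URL uses the service name rather than localhost, so the same code
// works in the monolith (the proxy routes back to this very process) and in
// the microservices deployment (DNS resolves to a dedicated container).
const TENANT_SERVICE_URL =
    process.env.TENANT_SERVICE_URL ?? "http://tenant-service:8080/api";

export interface TenantDto {
    id: string;
    name: string;
}

export async function getTenantById(tenantId: string): Promise<TenantDto> {
    const response = await fetch(`${TENANT_SERVICE_URL}/tenants/${tenantId}`);
    if (!response.ok) {
        throw new Error(`Tenant service responded with ${response.status}`);
    }
    return (await response.json()) as TenantDto;
}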

These rules ensured that the monolith was designed in a way that would make it easy to extract the boundaries into separate services later on.

The application I’m sharing as an example is a CMS with asset management and multi-tenant capabilities. The directory structure of the monolith looked like this:

src
├── app-services // These are not aware of infrastructure details
│   ├── auth
│   ├── cms
│   ├── dam
│   ├── email
│   ├── logging
│   └── tenant
├── domain-model
│   ├── auth
│   ├── cms
│   ├── dam
│   └── tenant
├── index.ts // Monolith composition root
└── infrastructure // These are aware of infrastructure details
    ├── blob
    ├── db
    │   ├── repositories // db queries implementations
    │   │   ├── auth
    │   │   ├── cms 
    │   │   ├── dam
    │   │   └── tenant
    │   └── db.ts
    ├── email
    ├── env
    ├── http
    │   ├── controllers // http layer implementations
    │   │   ├── auth
    │   │   ├── cms
    │   │   ├── dam
    │   │   └── tenant
    │   ├── middleware
    │   └── server.ts
    ├── ioc
    │   ├── index.ts
    │   └── modules // IoC modules for each boundary
    │       ├── asset-management-ioc-module.ts
    │       ├── auth-ioc-module.ts
    │       ├── infrastructure-ioc-module.ts
    │       ├── template-management-ioc-module.ts
    │       ├── content-management-ioc-module.ts
    │       └── tenant-ioc-module.ts
    ├── logging
    └── secrets

As you can see, the directory structure is organized as a monolith: there are no separate root directories for each microservice. The Onion architecture splits the application into layers, and each layer has its own responsibility. The infrastructure layer is responsible for implementation details such as database access, email sending, and logging. The application services layer is responsible for the business logic of the application. The domain model layer is responsible for the domain entities and value objects.

Note: If you want to learn more about the Onion/Clean architecture, I recommend reading my previous articles linked in the prerequisites section.

The layers remain decoupled from each other at design and compile time. At runtime, however, the inversion of control (IoC) container resolves the dependencies and wires everything together. To achieve this, the IoC container needs to be aware of the links between interfaces and implementations across all layers. These links are known as bindings.

It is important to highlight the Inversion of Control (IoC) principle here. The IoC principle states that the control of the flow of the application should be inverted: you stop deciding “when” and “how” an object gets its dependencies. Instead, something external (in our case, the IoC container) gives (injects) the dependencies to your object. The IoC container is responsible for resolving the dependencies and wiring everything together at runtime.
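As a minimal, framework-free illustration (the class names here are hypothetical and the send signature is an assumption, not code from the project), compare a class that constructs its own dependency with one that receives it:

export interface EmailService {
    send(to: string, body: string): Promise<void>;
}

class SmtpEmailService implements EmailService {
    constructor(private readonly smtpUrl: string) {}
    async send(to: string, body: string): Promise<void> {
        // ...connect to this.smtpUrl and deliver the message
    }
}

// Without IoC: the class decides how and when its dependency is created,
// and is forever coupled to this one concrete implementation.
class SignupServiceWithoutIoC {
    private readonly emailService = new SmtpEmailService("smtp://mail.example");
}

// With IoC: the class only declares what it needs; something external
// (the IoC container) picks the implementation and injects it.
export class SignupService {
    constructor(private readonly emailService: EmailService) {}

    async signup(email: string): Promise<void> {
        await this.emailService.send(email, "Welcome!");
    }
}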

With InversifyJS (my IoC container of choice for TypeScript projects) we can organize these bindings into IoC modules. Each module is responsible for binding the interfaces to their implementations for a specific boundary or layer. The following is an example of an IoC module for some infrastructure concerns:

// ...

export const infrastructureIocModule = new ContainerModule((options) => {
    const { bind } = options;
    // Singleton that manages Azure Key Vault connections
    bind<SecretsManager>(SecretsManagerSymbol)
        .to(SecretsManagerImplementation)
        .inSingletonScope();

    // Singleton that manages Cosmos DB connections
    bind<DatabaseConnectionManager>(DatabaseConnectionManagerSymbol)
        .to(DatabaseConnectionManagerImplementation)
        .inSingletonScope();

    bind<AppSecrets>(SecretsSymbol)
        .toDynamicValue(async (context) => {
            const secretsManager =
                await context.getAsync<SecretsManager>(SecretsManagerSymbol);
            await secretsManager.initialize();
            return secretsManager.secrets;
        })
        .inSingletonScope();

    bind<UnitOfWork>(UnitOfWorkSymbol)
        .to(UnitOfWorkImplementation)
        .inSingletonScope();

    bind<DbClient>(DbClientSymbol)
        .toDynamicValue(async (context) => {
            const databaseConnectionManager =
                await context.getAsync<DatabaseConnectionManager>(
                    DatabaseConnectionManagerSymbol,
                );
            return await databaseConnectionManager.getDbClient();
        })
        .inRequestScope();

    bind<EmailService>(EmailServiceSymbol)
        .to(EmailServiceImplementation)
        .inRequestScope();

    bind<Logger>(LoggerSymbol).to(LoggerImplementation).inSingletonScope();

    bind<BlobStorage>(BlobStorageSymbol).to(BlobStorageImplementation);
});

This module binds various infrastructure services, such as the secrets manager, database connection manager, email service, and logger. Because these services are shared by multiple boundaries, it makes sense to group them in a separate infrastructure IoC module. You can think of it as a platform module: it provides the services that every boundary relies on.

Then we have an IoC module for each boundary. The following is an example of an IoC module for the authentication & authorization boundary:

// ...

export const authIocModule = new ContainerModule((options) => {
    const { bind } = options;
    // Middleware
    bind<ExpressMiddleware>(AuthorizeMiddlewareSymbol).to(AuthorizeMiddleware);
    bind<ExpressMiddleware>(AuthenticateMiddleware).toSelf();

    // Controllers
    bind(AuthController).toSelf().inSingletonScope();

    // Repositories
    bind<UserRepository>(UserRepositorySymbol)
        .to(UserRepositoryImplementation)
        .inRequestScope();

    bind<VerifyRepository>(VerifyRepositorySymbol)
        .to(VerifyRepositoryImplementation)
        .inRequestScope();

    bind<ResetRepository>(ResetRepositorySymbol)
        .to(ResetRepositoryImplementation)
        .inRequestScope();

    // Services
    bind<AuthService>(AuthServiceSymbol)
        .to(AuthServiceImplementation)
        .inRequestScope();

    bind<PasswordHashingService>(PasswordHashingServiceSymbol)
        .to(PasswordHashingServiceImplementation)
        .inRequestScope();

    bind<AuthTokenService>(AuthTokenServiceSymbol)
        .to(AuthTokenServiceImplementation)
        .inRequestScope();

    bind<TwoFactorAppService>(TwoFactorAppServiceSymbol)
        .to(TwoFactorAppServiceImplementation)
        .inRequestScope();

    bind<TOTPService>(TOTPServiceSymbol)
        .to(TOTPServiceImplementation)
        .inRequestScope();
});

In the monolith, we only start one application server that handles all incoming requests for all boundaries. We can achieve this by using a single IoC container that loads all the IoC modules for all boundaries:

// ...

export function createContainer() {
    const container = new Container();
    container.load(
        infrastructureIocModule,
        authIocModule,
        tenantIocModule,
        assetManagementIocModule,
        templateManagementIocModule,
        contentManagementIocModule,
    );
    return container;
}

In your application there should be a single point at which the layers are “composed” together. This is known as the composition root. In our case, the composition root is the IoC container. In the monolith, we create a single IoC container, which means that we have one composition root for the entire application.

Finally, we run the monolith application by creating a server that uses the container:

import { createAppServer } from "./infrastructure/http/server";
import { createContainer } from "./infrastructure/ioc";
import "reflect-metadata";
import "dotenv/config";

const port = process.env.API_PORT || 3001;

export const defaultOnReady = () =>
    console.log(`Server started on http://localhost:${port}`);

export async function main(onReady: () => void) {
    const container = createContainer();
    const app = await createAppServer(container);
    app.listen(port, () => {
        onReady();
    });
}

(async () => {
    await main(defaultOnReady);
})();

At this point, we have a fully functional monolith with well-defined boundaries. The key insight is that each boundary is encapsulated in its own IoC module, and all modules are composed together in a single container. Now we’re ready to see how we can split this monolith into microservices without changing any of the existing code.

2. Transform into microservices by using one composition root per boundary

After working for an extended period of time on the monolith, we will learn more about the service boundaries and how they should be defined. At some point, we will be ready to split the monolith into microservices. The great news is that because we have followed the Onion/Clean architecture principles and have encapsulated each boundary in its own IoC module, we can easily extract each boundary into its own microservice without changing major parts of the existing code.

First we need to create a new composition root for each microservice. Each composition root will create its own IoC container and load only the IoC modules that are relevant for that specific microservice. We can create a helper function that creates a microservice given a configuration object:

export interface ServiceConfig {
    port: number;
    name: string;
    iocModules: ContainerModule[];
}

export async function createMicroService(config: ServiceConfig) {
    const { port, name, iocModules } = config;
    const container = new Container();
    container.load(...iocModules);
    const app = await createAppServer(container, port);
    app.listen(port, () => {
        console.log(`[${name}] Server started on http://localhost:${port}`);
    });
}

Now we can create a new entry point for each microservice. Each entry point will use the createMicroService function to create a microservice with its own IoC container and relevant IoC modules. For example, here is the entry point for the authentication & authorization microservice:

// api/src/infrastructure/http/microservices/auth/index.ts
await createMicroService({
    port: 8080,
    iocModules: [
        infrastructureIocModule,
        authIocModule
    ],
    name: "auth",
});

And here is the entry point for the content management microservice:

// api/src/infrastructure/http/microservices/cms/index.ts
await createMicroService({
    port: 8080,
    iocModules: [
        infrastructureIocModule,
        templateManagementIocModule,
        contentManagementIocModule
    ],
    name: "cms",
});

We then use Kubernetes to deploy each microservice as a separate deployment. Each deployment will run its own instance of the microservice, and we can use Kubernetes services to expose each microservice to the outside world.

3. Manage most of the microservices complexity from the CI/CD layer, not the application code

The main idea here is that we should move most of the complexity of managing multiple microservices to the CI/CD layer. Each microservice has its own entry point, and we can use our CI/CD pipeline to build, test, and deploy each microservice independently. Most of the code remains unchanged, as we have not modified any of the existing business logic or domain models. The only changes we have made are in the composition roots for each microservice.

The key insight here is that we use the same codebase for all microservices. We don’t create separate repositories or duplicate code. The main goal is to continue to develop the application in a way that feels like working on a monolith as much as possible.

Most of the microservices complexities are pushed out to the CI/CD layer, where you should leverage Kubernetes heavily to manage the deployments, scaling, and service discovery.

To achieve this, we use a single Dockerfile with different build arguments to specify which entry point to use. The key optimization is that each microservice image only includes the code relevant to that service:

FROM node:20-alpine
WORKDIR /app
COPY . .
ARG SERVICE_NAME=monolith
RUN node scripts/prune-services.js $SERVICE_NAME
RUN npm ci && npm run build
ARG SERVICE_ENTRY_POINT=dist/index.js
ENV ENTRY_POINT=$SERVICE_ENTRY_POINT
CMD ["sh", "-c", "node $ENTRY_POINT"]

The prune-services.js script removes directories not relevant to the target service. For example, when building the auth service, it removes app-services/cms, domain-model/dam, infrastructure/http/controllers/tenant, etc.—keeping only auth-related code and shared infrastructure.
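A minimal sketch of such a script could look like the following. The SERVICE_MANIFESTS map and the directory lists are illustrative assumptions, not the project's real configuration, and the real file is plain JavaScript; it is written here in TypeScript for consistency with the rest of the article:

// scripts/prune-services.ts (hypothetical sketch of the pruning step)
// Deletes every boundary directory that the target service does not need,
// so each image only contains service-specific code plus the shared layers.
import { rmSync } from "node:fs";
import { join } from "node:path";

// Assumed mapping from service name to the boundaries it keeps.
const SERVICE_MANIFESTS: Record<string, string[]> = {
    monolith: ["auth", "cms", "dam", "tenant"], // keep everything
    auth: ["auth"],
    cms: ["cms", "tenant"],
    dam: ["dam", "tenant"],
};

const ALL_BOUNDARIES = ["auth", "cms", "dam", "tenant"];
const LAYERS = [
    "src/app-services",
    "src/domain-model",
    "src/infrastructure/db/repositories",
    "src/infrastructure/http/controllers",
];

const serviceName = process.argv[2] ?? "monolith";
const keep = new Set(SERVICE_MANIFESTS[serviceName] ?? ALL_BOUNDARIES);

for (const layer of LAYERS) {
    for (const boundary of ALL_BOUNDARIES) {
        if (!keep.has(boundary)) {
            // force: true makes this a no-op if the directory does not exist
            rmSync(join(layer, boundary), { recursive: true, force: true });
        }
    }
}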

Then in our CI/CD pipeline, we build separate images for each microservice by passing different entry points:

# Build auth microservice
docker build \
  --build-arg SERVICE_NAME=auth \
  --build-arg SERVICE_ENTRY_POINT=dist/infrastructure/http/microservices/auth/index.js \
  -t auth-service .

# Build cms microservice
docker build \
  --build-arg SERVICE_NAME=cms \
  --build-arg SERVICE_ENTRY_POINT=dist/infrastructure/http/microservices/cms/index.js \
  -t cms-service .

You can use image digests to decide whether a microservice needs to be redeployed: if the code for a specific microservice has not changed, you can skip the deployment for that microservice. Since each image only contains service-specific code after pruning, the resulting image digest changes only when relevant code changes (provided your builds are otherwise reproducible). Compare the new image digest against the one currently in your container registry using skopeo:

REGISTRY=myregistry.azurecr.io

# Get digest of newly built image (after pushing to registry)
NEW_DIGEST=$(skopeo inspect docker://$REGISTRY/auth-service:$COMMIT_SHA | jq -r '.Digest')

# Get digest of currently deployed image (tagged as 'latest' or 'production')
DEPLOYED_DIGEST=$(skopeo inspect docker://$REGISTRY/auth-service:latest | jq -r '.Digest')

# Only deploy if the digests differ
if [ "$NEW_DIGEST" != "$DEPLOYED_DIGEST" ]; then
  # Tag the new image as latest
  skopeo copy docker://$REGISTRY/auth-service:$COMMIT_SHA docker://$REGISTRY/auth-service:latest
  # Update the deployment
  kubectl set image deployment/auth-service auth-service=$REGISTRY/auth-service:$COMMIT_SHA
fi

Since each service build is independent, you can run all builds in parallel to speed up the CI/CD pipeline.

Conclusion

The Onion/Clean architecture is powerful because it allows you to build applications that are easy to maintain and extend over time. Your application becomes a modular plugin system where each component can be swapped out independently.

For example, in this particular application, we migrated from Cosmos DB to PostgreSQL a few months after starting the project. Because we had followed the Onion/Clean architecture principles, we were able to swap out the database implementation layer (infrastructure/db/repositories) without touching any code outside that layer. We simply created new repository implementations, wrapped them in a new IoC module for PostgreSQL, and updated the composition root to use the new module.
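In IoC terms, the swap is just a different set of bindings. Here is a minimal sketch, with hypothetical class names mirroring the auth module shown earlier:

// Hypothetical PostgreSQL module: same symbols and interfaces as before,
// different implementations bound behind them.
export const postgresIocModule = new ContainerModule((options) => {
    const { bind } = options;
    bind<UserRepository>(UserRepositorySymbol)
        .to(PostgresUserRepository) // previously a Cosmos DB implementation
        .inRequestScope();
    // ...the remaining repository bindings follow the same pattern
});

// In the composition root, load postgresIocModule instead of the old module;
// nothing else in the application is aware of the change.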

The composition root is a very powerful concept because we delay the decision of how to compose the application until runtime. This allows us to easily transition from a monolithic architecture to a microservices architecture without changing any of the existing code (if you designed the monolith with this goal in mind from the start).
