Dependency Injection in JS/TS – Part 1

Photo by Vadim Sherbakov on Unsplash


Intro

Throughout my programming journey, of all the things I’ve learned, a few stand out for having radically changed not only the way I program but also the way I think about programming in general.

Dependency Injection is one of those.

Dependency Injection, AKA DI, is a powerful technique with many benefits. One of the most evident (though not the only one) is that it helps us test things that would be very difficult or impossible to test otherwise.

On top of that, once you understand it, it is easy to apply in practice.

But here’s the tricky part: even though Dependency Injection is a somewhat simple technique, in my experience, it is not that easy to grasp, which I believe is the reason why not many people are using it.

This is something I can tell from experience. Having read a handful of resources on dependency injection, I only came to understand it when a friend and mentor of mine, who was very experienced with the technique, explained it to me.

In this series, we’ll understand what dependency injection is, and get to know some of its most important techniques and applications.

To make interfaces explicit, examples are written in TypeScript, which works well with Dependency Injection.

So let’s start with a very concrete example.

Fundamentals

Picture this:

You’re writing a function that generates random numbers between 0 and some positive number:

// randomNumber.ts

export const randomNumber = (max: number): number => {
  return Math.floor(Math.random() * (max + 1));
};

How would you write unit tests for this function?

As randomNumber depends on Math.random, whose outputs are not deterministic and are outside our control, there is no easy way to do it. (Its outputs are pseudo-random, but from our perspective there is no way to predict which number it is going to generate.)

Whenever we want callers of a function to have control over something that happens inside of it, we parametrize that thing:

// randomNumber.ts

export type RandomGenerator = () => number;

export const randomNumber = (
  randomGenerator: RandomGenerator,
  max: number
): number => {
  return Math.floor(randomGenerator() * (max + 1));
};

Now, instead of having Math.random hardcoded inside randomNumber, we can pass any randomGenerator we want (including Math.random itself), as long as it adheres to the correct interface.

This allows us to pass a mocked version of Math.random that we do control, and now we’re able to write unit tests for randomNumber:

// randomNumber.test.ts

import { randomNumber } from "./randomNumber";

describe("General case", () => {
  it("Produces numbers within the established range", () => {
    const randomGeneratorMock = jest
      .fn()
      .mockReturnValueOnce(0.13445)
      .mockReturnValueOnce(0.542)
      .mockReturnValueOnce(0.889);

    expect(randomNumber(randomGeneratorMock, 10)).toBe(1);
    expect(randomNumber(randomGeneratorMock, 10)).toBe(5);
    expect(randomNumber(randomGeneratorMock, 10)).toBe(9);
  });
});

What we did showcases the main concept behind dependency injection:

Dependency injection is, in its essence, about parametrizing things previously hardcoded in functions/classes, so we can control these functions/classes to a greater extent.

I want to emphasize the word “control” here, because, in the end, this is the main goal of using dependency injection.

Moreover, it is precisely by achieving a greater degree of control that we end up with all the other benefits commonly attributed to dependency injection. Less coupling, testability, reusability, and maintainability are some of them. But this is not the whole story, so let’s go back to our example.

Now we’re able to unit test randomNumber; however, in solving one problem, we created another. To understand it, imagine we now need a function that generates a list of random numbers within a predetermined range. To do so, we’ll use randomNumber, of course:

// randomNumberList.ts

import { randomNumber } from "./randomNumber";

export const randomNumberList = (
  max: number,
  length: number
): Array<number> => {
  return Array(length)
    .fill(null)
    .map(() => randomNumber(Math.random, max));
};

If you’re anything like me, this code probably makes you feel uncomfortable, and for a good reason:

Previously, when Math.random was hardcoded, callers of randomNumber didn’t have to care about providing a randomGenerator, or even be aware of its existence, as it was an implementation detail purposefully encapsulated inside randomNumber.

Now, as randomNumber is “leaking” this implementation detail to callers, they have the burden of providing a randomGenerator themselves, which means they’re more likely to be affected by changes in randomNumber, such as a change to randomGenerator‘s interface.

This is the abstraction vs. flexibility conundrum. The more we abstract away implementation details, the less flexible that piece of code becomes, because whatever we hide behind an abstraction is out of the callers’ reach. Conversely, to make a piece of code more flexible, we have to parametrize things, which moves part of its implementation into its interface.

It is this very issue that prevented me from understanding dependency injection for a long time: even though I read about it time and time again, it always seemed to solve one problem just to create another that is arguably worse.

So, I want you to pay close attention to this next part as it shows this issue is easily solvable:

// randomNumber.ts

export type RandomGenerator = () => number;

// We renamed our previous randomNumber function
// to randomNumberImplementation
export const randomNumberImplementation = (
  randomGenerator: RandomGenerator,
  max: number
): number => {
  return Math.floor(randomGenerator() * (max + 1));
};

// randomNumber now has the randomGenerator
// already injected into it
export const randomNumber = (max: number) =>
  randomNumberImplementation(Math.random, max);
// randomNumberList.ts

import { randomNumber } from "./randomNumber";

export const randomNumberList = (
  max: number,
  length: number
): Array<number> => {
  return Array(length)
    .fill(null)
    .map(() => randomNumber(max));
};

Now, we have the best of both worlds. We are still able to test randomNumber‘s logic by calling randomNumberImplementation with a mocked randomGenerator, while also exposing a “version” of randomNumber that already has a “stock” randomGenerator injected into it, so that consumers don’t have to care about providing it themselves.
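For instance, a quick sketch of a test exercising the implementation function directly could look like this (mirroring the earlier Jest test):

import { randomNumberImplementation } from "./randomNumber";

describe("randomNumberImplementation", () => {
  it("Scales the injected generator's output to the given range", () => {
    // A deterministic stand-in for Math.random
    const randomGeneratorMock = jest.fn().mockReturnValue(0.5);

    // Math.floor(0.5 * 11) === 5
    expect(randomNumberImplementation(randomGeneratorMock, 10)).toBe(5);
  });
});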

I think a good analogy is when we’re shopping for a PC, where we have two options:

  1. Buying a pre-built PC.
  2. Buying all the parts separately and building it ourselves.

If we choose to buy a pre-built PC, we don’t need to know anything about PC building, the only thing we need to know is how to use it, but of course, we are limited by the range of available pre-built PCs.

On the other hand, if we choose to buy all the parts ourselves, we have much more control over how our PC is going to turn out, as we get to choose each part that is going into it. At the same time, knowing how to use it is not enough; we also need some knowledge of how to build it from the parts we chose.

It’s the same thing with our code.

We should interpret this process of injecting dependencies into a function/class to generate a “complete version” of them as a build process. There are parts of our code that shouldn’t have to know how the things they use are built, the only thing they need to know is how to use them. However, for other parts of the code, like tests, we want to have more control over the behavior of these things, but to do so, we need to know how to build them.

So, in summary, dependency injection consists of two things:

  1. Extracting hardcoded dependencies as parameters in functions/classes, so we can have a greater degree of control over them.
  2. Creating a “version” of these functions/classes with their dependencies already injected, so that we can distribute them to those who need to use them but don’t need to know how they’re built.

These are the core concepts behind dependency injection.

Terminology

Before we move on, we need to establish some terminology to communicate more efficiently.

In Dependency Injection’s lingo, we have two important terms: services (or dependencies) and clients.

Services (or Dependencies) are variables, objects, classes, functions, or pretty much any language constructs that provide some sort of functionality to those who consume/depend on them.

Clients are functions or classes that might or might not consume some services.

Note that the classification of whether something is a service or a client depends on the context. That means something could be, at the same time, a service in one context and a client in another. In our example, randomNumber is a service from randomNumberList‘s point of view, and a client from randomGenerator‘s point of view.

So, from now on, instead of saying:

“Dependency injection, in its essence, is about parametrizing things that were previously hardcoded in functions/classes, so that we can control these functions/classes to a greater extent”.

We’re going to say:

“Dependency injection, in its essence, is about parametrizing services that were previously hardcoded in clients, so that we can control these clients to a greater extent”.

Build vs Use

Earlier, we talked about the difference between building and using a service. However, as the service’s dependencies and its ordinary parameters are all mixed together, our code does not reflect this distinction very clearly.

To change that, we’re going to use a function (which in this context is called a factory function) whose sole purpose is to build our service, by receiving the service’s dependencies as parameters:

// randomNumber.ts

export type RandomGenerator = () => number;

export const makeRandomNumber =
  (randomGenerator: RandomGenerator) =>
  (max: number): number => {
    return Math.floor(randomGenerator() * (max + 1));
  };

// Quick Note:
// The above construct is called a Higher-Order Function (HOF),
// because it returns another function.
//
// If it feels weird because of the arrow function notation,
// it is equivalent to this (shown only for illustration —
// keep a single declaration in the actual file):

export function makeRandomNumber(randomGenerator: RandomGenerator) {
  return function randomNumber(max: number): number {
    return Math.floor(randomGenerator() * (max + 1));
  };
}

export const randomNumber = makeRandomNumber(Math.random);

Previously, randomNumber‘s dependencies (randomGenerator) were mixed with its ordinary parameters (max), that is, the parameters randomNumber‘s clients have to pass to the “built randomNumber version”.

Now, we’re using a factory function (makeRandomNumber) to gather these dependencies and explicitly separate randomNumber‘s construction from its usage, which is also reflected in the functions’ names.

Aside from the gain in conceptual clarity, this separation will prove to be very useful soon, as it is the basis of some of the techniques surrounding DI.
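To make the build vs. use split concrete, here’s a minimal sketch of a test that builds its own randomNumber with a controlled generator and then uses it:

import { makeRandomNumber } from "./randomNumber";

it("Uses the injected random generator", () => {
  const randomGeneratorMock = jest.fn().mockReturnValue(0.25);

  // Build our own "version" of randomNumber for the test...
  const randomNumber = makeRandomNumber(randomGeneratorMock);

  // ...and then use it: Math.floor(0.25 * 11) === 2
  expect(randomNumber(10)).toBe(2);
});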

Varying Implementations

Imagine that our fictional application has grown and now we have the requirement for our randomNumber to be cryptographically secure, that is, it should be really hard to guess which number it’s going to produce next, based on the previous numbers it has produced.

Thankfully, we found a library (also fictional) called secureRandomNumber, which exposes a function with the same name that has the same interface as randomNumber and does precisely what we need.

This means that we can use secureRandomNumber as a drop-in replacement for randomNumber, as it also generates a random number between 0 and some upper bound that it takes as a parameter, with the upside that it does so in a cryptographically secure way.

So, in practice, in every single place that we called randomNumber, we can just call secureRandomNumber instead.

// Before
const someNumber = randomNumber(10);
// After
const someNumber = secureRandomNumber(10);

But there’s a catch: secureRandomNumber takes considerably more time to run than randomNumber, so we want to keep using randomNumber in non-production environments and only use secureRandomNumber in production (by the way, I’m not advocating this).

This means that when NODE_ENV !== "production", we want all modules that use randomNumber to keep using it, but when NODE_ENV === "production", they will use secureRandomNumber instead.

One thing we could do is to change the implementation of randomNumber based on which environment we’re on:

// randomNumber.ts

import { secureRandomNumber } from "secureRandomNumber";

export const makeRandomNumber =
  (
    // From now on, we'll use the
    // RandomGenerator type inline instead
    // of having it defined as a named type,
    // for brevity
    randomGenerator: () => number,
    secureRandomNumber: (max: number) => number
  ) =>
  (max: number) => {
    if (process.env.NODE_ENV !== "production") {
      return Math.floor(randomGenerator() * (max + 1));
    }

    return secureRandomNumber(max);
  };

export const randomNumber = makeRandomNumber(Math.random, secureRandomNumber);

There are lots of problems with this approach, but right now I want to draw your attention to just one:

makeRandomNumber is doing too many things at once.

At first, makeRandomNumber‘s responsibility was to create a function that generates a random number, and now it still does that, but it also selects which algorithm is going to be used to generate the random number.

We even had to change its interface to accommodate that requirement, which by itself would break all its unit tests.

So, instead of doing this, let’s consider another approach, one that leverages dependency injection.

First, we will keep the different implementations of randomNumber separate, which in this case will be done using different files for each implementation.

As secureRandomNumber comes from a library, we only need to create a new file for our former randomNumber implementation, which is now going to be called fastRandomNumber.

// fastRandomNumber.ts
// This file holds our original
// randomNumber implementation

export const makeFastRandomNumber =
  (randomGenerator: () => number) => (max: number) => {
    return Math.floor(randomGenerator() * (max + 1));
  };

export const fastRandomNumber = makeFastRandomNumber(Math.random);

Then, in the randomNumber.ts file, we only select which implementation is going to be used for the randomNumber service.

// randomNumber.ts
// This file only exports
// the selected random number function

import { fastRandomNumber } from "./fastRandomNumber";
import { secureRandomNumber } from "secureRandomNumber";

export const randomNumber =
  process.env.NODE_ENV !== "production" ? fastRandomNumber : secureRandomNumber;

Much cleaner right? And now we won’t be breaking any unit tests.

Also, let’s take a look at randomNumberList, which uses randomNumber:

// randomNumberList.ts

import { randomNumber } from "./randomNumber";

export const makeRandomNumberList =
  (randomNumber: (max: number) => number) =>
  (max: number, length: number): Array<number> => {
    return Array(length)
      .fill(null)
      .map(() => randomNumber(max));
  };

// As we currently only have a single randomNumberList
// implementation, we can keep both its implementation
// and service creation in the same file
export const randomNumberList = makeRandomNumberList(randomNumber);

As we can see above, consumers of randomNumber can remain completely unaware of which is the actual implementation they are using for it, as they all import it from the same place, and all implementations adhere to the same interface.

In general, whenever we need to use different implementations of a certain service, it’s better to use the strategy pattern. Instead of having the service itself behave differently according to some parameter, we create different implementations of it and then select the appropriate one to pass to the client.

Mock Implementations In Development

Another interesting usage of this technique is when we want to mock external systems (like external APIs) not only while testing but during development as well.

Suppose we have a function that fetches users and is being used in lots of places in our application:

// apiFetchUser.ts
export const makeApiFetchUser = (apiBaseUrl: string) => async () => {
  const response = await fetch(`${apiBaseUrl}/users`);
  const data = await response.json();

  return data;
};

// Using window fetch
export const apiFetchUser = makeApiFetchUser(process.env.API_BASE_URL);
// fetchUser.ts

// This is where the parts of the application that
// use fetchUser import it from.
import { apiFetchUser } from "./apiFetchUser";

export const fetchUser = apiFetchUser;

Now, if we want a specific mocked implementation, either because the API is still not ready to be used, or because we want a specific output to develop some feature or reproduce some bug, we can replace the original implementation with a mocked one:

// inMemoryFetchUser.ts

// In this case, as inMemoryFetchUser
// has no dependencies, we don't need a
// factory function
export const inMemoryFetchUser = () => {
  return Promise.resolve([
    {
      id: "1",
      name: "John",
    },
    {
      id: "2",
      name: "Fred",
    },
  ]);
};
// fetchUser.ts

// This is where parts of the application that
// use fetchUser import it from.
import { apiFetchUser } from "./apiFetchUser";
import { inMemoryFetchUser } from "./inMemoryFetchUser";

// We comment out the line where we were
// using the original implementation
// export const fetchUser = apiFetchUser;

// And export the mocked implementation instead
export const fetchUser = inMemoryFetchUser;

Configuration Values

Configuration values are dependencies too. Suppose we have a function that polls for data at a fixed interval:

export const poll = (callback: (data: Data) => void): void => {
  const intervalInMs = 5000;

  setInterval(() => {
    // Data fetching logic

    callback(data);
  }, intervalInMs);
};

We’re polling every 5 seconds, but this value is hardcoded, which means that anytime we want to change it we need to alter the code, and then, in most cases, open a PR, get it reviewed, approved, and merged.

Also, we might want to have different intervals for different environments.

Maybe, we want this value to be higher in production to avoid straining the servers, but somewhat shorter in development to give us quicker feedback and even shorter in local development.

So, to achieve the flexibility we need, we extract this value as an environment variable:

export const poll = (callback: (data: Data) => void): void => {
  setInterval(() => {
    // Data fetching logic

    callback(data);
  }, Number(process.env.POLLING_TIME_INTERVAL_IN_MS));
};

Now we want to do some integration testing with this function and see how it behaves with a specific time interval, so we create a .env.test where we set POLLING_TIME_INTERVAL_IN_MS to the value we want and then make sure we’re loading environment variables from .env.test.

Then, we end up with two problems:

  • What if we want to use different time intervals for different tests?
  • We have to make sure the .env.test file is always up to date with the other environment variables.

These problems arise because poll knows where the time interval comes from. But in the end, it doesn’t care where it comes from; it just needs a value for the time interval. So why not extract it as a dependency?

export const makePoll =
  (pollingTimeIntervalInMs: number) =>
  (callback: (data: Data) => void): void => {
    setInterval(() => {
      // Data fetching logic

      callback(data);
    }, pollingTimeIntervalInMs);
  };

export const poll = makePoll(Number(process.env.POLLING_TIME_INTERVAL_IN_MS));

Now, when testing, we can inject any pollingTimeIntervalInMs we want and we don’t even need a .env.test anymore.
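For example (a minimal sketch using Jest fake timers, and assuming makePoll lives in a poll.ts file), a test can now inject whatever interval it wants:

// poll.test.ts — sketch
import { makePoll } from "./poll";

it("Schedules polling with the injected interval", () => {
  jest.useFakeTimers();
  const setIntervalSpy = jest.spyOn(global, "setInterval");

  // Any value we like, no .env.test needed
  const poll = makePoll(100);
  poll(jest.fn());

  expect(setIntervalSpy).toHaveBeenCalledWith(expect.any(Function), 100);

  setIntervalSpy.mockRestore();
  jest.useRealTimers();
});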

Also, another benefit: in the first implementation, where we had process.env.POLLING_TIME_INTERVAL_IN_MS buried deep inside poll, it was not evident that poll relied on an environment variable; to discover that, we had to look at its implementation.

In our refactored example, we moved the dependency to makePoll‘s interface, making the dependency relation explicit.

The moral of the story is: treat configuration values as any other dependency and extract them as parameters so the services that use them don’t need to know where they’re coming from. This facilitates testing and allows us to change where we’re pulling these values from (like reading a file or calling an API asynchronously) without changing the services’ implementation.
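As a sketch of that last point, we could switch from an environment variable to a (hypothetical) config.json file without touching makePoll at all:

// A sketch of the wiring, assuming a config.json with a
// pollingTimeIntervalInMs field exists
import { readFileSync } from "fs";
import { makePoll } from "./poll";

const config = JSON.parse(readFileSync("./config.json", "utf-8"));

export const poll = makePoll(config.pollingTimeIntervalInMs);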

Dependency Injection With Classes

So far, we’ve shown how to use dependency injection when our clients and services are functions, but it’s also possible to do so with classes:

// FastRandomNumber.ts

export class FastRandomNumber {
  constructor(randomGenerator: () => number) {
    this.randomGenerator = randomGenerator;
  }

  public generate(max: number): number {
    return Math.floor(this.randomGenerator() * (max + 1));
  }

  private randomGenerator: () => number;
}
// randomNumber.ts

// Suppose that now SecureRandomNumber is a class
import { SecureRandomNumber } from "secureRandomNumber";

import { FastRandomNumber } from "./FastRandomNumber";

export interface RandomNumber {
  generate: (max: number) => number;
}

export const randomNumber: RandomNumber =
  process.env.NODE_ENV !== "production"
    ? new FastRandomNumber(Math.random)
    : new SecureRandomNumber();
// RandomNumberListImpl.ts

import { RandomNumber } from "./randomNumber";

export class RandomNumberListImpl {
  constructor(randomNumber: RandomNumber) {
    this.randomNumber = randomNumber;
  }

  public generate(max: number, length: number): Array<number> {
    return Array(length)
      .fill(null)
      .map(() => this.randomNumber.generate(max));
  }

  private randomNumber: RandomNumber;
}
// randomNumberList.ts
import { RandomNumberListImpl } from "./RandomNumberListImpl";
import { randomNumber } from "./randomNumber";

export const randomNumberList = new RandomNumberListImpl(randomNumber);

When using functions, our dependencies are captured in the closure through the use of higher order functions.

When using classes, our dependencies get stored in private variables.
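As with functions, a test can hand the class a test double that satisfies the RandomNumber interface — a minimal sketch:

// RandomNumberListImpl.test.ts — sketch
import { RandomNumberListImpl } from "./RandomNumberListImpl";

describe("RandomNumberListImpl", () => {
  it("Generates a list of the requested length", () => {
    // A stub implementing the RandomNumber interface
    const randomNumberStub = { generate: jest.fn().mockReturnValue(7) };

    const randomNumberList = new RandomNumberListImpl(randomNumberStub);

    expect(randomNumberList.generate(10, 3)).toEqual([7, 7, 7]);
  });
});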

Deciding What to Extract as a Dependency

When we use dependency injection extensively throughout our application, we eventually begin to wonder which things we should extract as dependencies and which we should leave hardcoded.

Let me start by telling you that there is no “one size fits all” rule for this.

As we get used to using dependency injection, we might be tempted to adopt a “let’s extract everything” posture, but this doesn’t work, as there is almost no limit to how “generic” we can make our code.

For example, let’s say we have the following function:

export const foo = (value: number): number => {
  if (value % 2 === 0) {
    return value / 2;
  }

  return 3 * value + 1;
};

We could make it more generic by extracting its mathematical operators:

type Dependencies = {
  remainder: (dividend: number, divisor: number) => number;
  divide: (dividend: number, divisor: number) => number;
  multiply: (first: number, second: number) => number;
  add: (first: number, second: number) => number;
};

export const makeFoo =
  ({ remainder, divide, multiply, add }: Dependencies) =>
  (value: number): number => {
    if (remainder(value, 2) === 0) {
      return divide(value, 2);
    }

    return add(multiply(3, value), 1);
  };

We could make it even more generic by extracting the equality operator and the conditional:

type Dependencies = {
  remainder: (dividend: number, divisor: number) => number;
  divide: (dividend: number, divisor: number) => number;
  multiply: (first: number, second: number) => number;
  add: (first: number, second: number) => number;
  isEqual: <T>(first: T, second: T) => boolean;
  ifThenElse: <T, U>(condition: boolean, then: () => T, elseFun: () => U) => T | U;
};

export const makeFoo =
  ({ remainder, divide, multiply, add, isEqual, ifThenElse }: Dependencies) =>
  (value: number): number => {
    return ifThenElse(
      isEqual(remainder(value, 2), 0),
      () => divide(value, 2),
      () => add(multiply(3, value), 1)
    );
  };

We could make it more generic still, by abstracting the “function application” operation:

type Dependencies = {
  apply: <Args extends Array<unknown>, Return>(
    fun: (...args: Args) => Return,
    ...args: Args
  ) => Return;
  remainder: (dividend: number, divisor: number) => number;
  divide: (dividend: number, divisor: number) => number;
  multiply: (first: number, second: number) => number;
  add: (first: number, second: number) => number;
  isEqual: <T>(first: T, second: T) => boolean;
  ifThenElse: <T, U>(condition: boolean, then: () => T, elseFun: () => U) => T | U;
};

export const makeFoo =
  ({ apply, remainder, divide, multiply, add, isEqual, ifThenElse }: Dependencies) =>
  (value: number): number => {
    return apply(
      ifThenElse,
      apply(isEqual, apply(remainder, value, 2), 0),
      () => apply(divide, value, 2),
      () => apply(add, apply(multiply, 3, value), 1)
    );
  };

As you can see, this doesn’t add much value to our code, so going with a “let’s extract everything” approach is not a good idea in most cases.

Even though we do not have some simple rule that tells us which things should or should not be extracted as dependencies, we do have some heuristics to help us.

In general, we might want to extract:

  • Configuration values
  • Services from other units/modules
  • Global variables
  • Services that have a non-deterministic behavior (like random number generators, uuid generators)
  • Services that interface directly or indirectly with things that are outside your program’s memory space (APIs, databases, file systems, console/stdin/stdout)
  • Services that rely on the specificities of some environment (like code that relies on browser/node-specific variables, e.g. window, __dirname)
  • Services that have side-effects
  • Services that already have or are expected to have multiple implementations, including implementations that only exist for testing or temporary purposes

Things we might not want to extract:

  • Services that live on the same module/unit (though there might be occasions where this is useful)
  • Primitive language constructs (arithmetical/logical operators, conditionals)

Let me reiterate, these are guidelines/heuristics, not rules written in stone.

The biggest part of our job is not to blindly follow rules but to think, weigh the tradeoffs of different solutions, and pick the most appropriate one.

Also, keep in mind that there is no need to decide exactly which things get extracted as dependencies upfront because it is easy to extract a previously hardcoded dependency.

Composition Root (AKA Container)

So far, for each service we have been using, there’s a file that creates the service and then exports it (and that might or might not contain the service’s implementation itself); that’s the file from which all clients that use the service import it.

Now, I want to show you a different approach, where, instead of having a separate file for the creation of each service, we’ll have a single file where we centralize the creation of all services.

The reason we do that is two-fold:

  • To separate concerns: so the concern of defining the implementation of a service is separated from the concern of building it.
  • To enable us to do some things more easily (like integration testing) and other things that wouldn’t be possible otherwise (like dealing with cyclic dependencies and building services asynchronously).

To see this process in practice, let’s go back to our first example, where we were generating random numbers:

// randomNumber.ts

// Previously, this is the file where we
// selected the appropriate randomNumber implementation,
// and then exported it as the randomNumber service.

// Now, the only thing we'll do here is define
// the RandomNumber interface, to which its implementations
// will adhere.
export type RandomNumber = (max: number) => number;
// fastRandomNumber.ts

import { RandomNumber } from "./randomNumber";

export const makeFastRandomNumber =
  (randomGenerator: () => number): RandomNumber =>
  (max: number): number => {
    return Math.floor(randomGenerator() * (max + 1));
  };
// randomNumberList.ts
import { RandomNumber } from "./randomNumber";

export const makeRandomNumberList =
  (randomNumber: RandomNumber) =>
  (max: number, length: number): Array<number> => {
    return Array(length)
      .fill(null)
      .map(() => randomNumber(max));
  };

// Notice that we are not creating the randomNumberList
// service here anymore

Then, we’ll have a single file that will take care of “plugging” all dependencies together:

// container.ts

import { secureRandomNumber } from "secureRandomNumber";
import { makeFastRandomNumber } from "./fastRandomNumber";
import { makeRandomNumberList } from "./randomNumberList";

const randomGenerator = Math.random;
const fastRandomNumber = makeFastRandomNumber(randomGenerator);
const randomNumber =
  process.env.NODE_ENV === "production" ? secureRandomNumber : fastRandomNumber;
const randomNumberList = makeRandomNumberList(randomNumber);

export const container = {
  randomNumber,
  randomNumberList,
};

export type Container = typeof container;

This file where we centralize the creation of all services is called composition root or container.

Structurally speaking, there’s a crucial difference between having service creation happen in a single place and having it happen all over the application.

When each service is created in its own file, and especially when this file also contains the service’s implementation, services know where their dependencies come from because they import them directly.

For example, previously in our examples, randomNumber imported its implementations directly from fastRandomNumber.ts and the secureRandomNumber lib, and randomNumberList, which also resided in the same file as its implementation, imported randomNumber directly from randomNumber.ts.

Now, when services are created in a single place, this is the only place that knows about all services/dependencies, and where they’re coming from.

As you can see in our current example, the only things that files (aside from the composition root) import from one another are types/interfaces, which makes them as decoupled as they can get.

In JS, this might not seem like a huge benefit: it’s an interpreted language, so we don’t get the gains this structure brings in compiled languages with static/dynamic linking, such as shorter compilation times and loading code as plugins. Still, it opens the door for some important techniques surrounding DI.

Going back to the container itself: to compose services, there are certain constraints we must be mindful of regarding the order in which we create them.

When we consider the services’ inter-dependency relation, that is, who depends on whom, they form a dependency graph, where an arrow from one service to another represents the “depends on” relation.
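For our random number example, the graph would look roughly like this (a ──► b meaning “a depends on b”):

randomNumberList ──► randomNumber
randomNumber     ──► fastRandomNumber (or secureRandomNumber, in production)
fastRandomNumber ──► randomGenerator (Math.random)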

Services must be created in what we call a reverse topological order, that is, first we must create all the dependencies that have no dependencies themselves, then we proceed to the ones that only depend on the already created ones, and so on until there is no service left to be created.

When each service was created in its file, we didn’t have to worry about the order in which we created them, because the “import system” took care of that for us.

After we create all the services, they are ready to be used, which begs the question: how do we use them?

Whenever we have a composition root, we’re splitting our application into two phases:

  1. A boot phase
  2. A run phase

The boot phase is like a “runtime build” phase, where we assemble our application by creating all the services and plugging them together.

Then, at the run phase, we start the application using the services from the container.

Going back to the PC analogy, it’s as if each service were a PC part that was manufactured in isolation, and then there’s an assembly line (which corresponds to the composition root) where we assemble all the individual parts into a PC.

It’s only when all parts are composed together that we can turn on the PC.

So, when our container is ready to be used, we use it in the application’s entry point.

Depending on the kind of application we’re dealing with and, especially, the framework we might be using, what’s considered the application’s entry point may vary, but in the simplest case, it’d be the index.ts that is called when the application starts:

// index.ts
import { container } from "./container";

const main = () => {
  // Do stuff with container
  const randomNumberList = container.randomNumberList;

  // Reads max and length from command line
  const max = Number(process.argv[2]);
  const length = Number(process.argv[3]);
  console.log(randomNumberList(max, length));

  return 0;
};

main();

Of course, the idea is to have only the bare minimum logic necessary to kickstart the application in our entry point, so that all the complexity gets isolated inside services.

Interlude

Congratulations! If you’ve made it this far, it means that you already understand what dependency injection is, how to use it in practice, and some of its most common use cases.

From this point forward, we’ll talk about how to make working with dependency injection more manageable and explore some additional techniques and use cases.

Automatic DI Containers

This approach, where we compose our dependencies ourselves in the composition root, is called manual DI or pure DI. But as the number of dependencies grows, so does the complexity of maintaining the composition root and making sure we’re creating dependencies in the right order.

To solve this problem we have automatic DI containers, which take care of automatically creating all dependencies in the right order for us.

DIY Automatic DI Container

There are already off-the-shelf libraries that provide automatic DI containers, but before we look at that, I’d like to show you how we could build an automatic DI container ourselves.

// randomNumber.ts

export type RandomNumber = (max: number) => number;
// fastRandomNumber.ts

import { RandomNumber } from "./randomNumber";

// To be able to construct dependencies
// automatically, we first need to
// start using named arguments to pass
// dependencies to services, that is,
// we'll wrap dependencies in an object

type Dependencies = {
  randomGenerator: () => number;
};

export const makeFastRandomNumber =
  ({ randomGenerator }: Dependencies) =>
  (max: number): number => {
    return Math.floor(randomGenerator() * (max + 1));
  };
// randomNumberList.ts
import { RandomNumber } from "./randomNumber";

type Dependencies = {
  randomNumber: RandomNumber;
};

export const makeRandomNumberList =
  ({ randomNumber }: Dependencies) =>
  (max: number, length: number): Array<number> => {
    return Array(length)
      .fill(null)
      .map(() => randomNumber(max));
  };
// container.ts
import { makeFastRandomNumber } from "./fastRandomNumber";
import { makeRandomNumberList } from "./randomNumberList";
import { secureRandomNumber } from "secureRandomNumber";

// We first "declare" what our services are
// and their respective factories
const dependenciesFactories = {
  randomNumber:
    process.env.NODE_ENV !== "production"
      ? makeFastRandomNumber
      : //For this to work,
        // we'll need to wrap this in a factory
        () => secureRandomNumber,

  randomNumberList: makeRandomNumberList,
  randomGenerator: () => Math.random,
};

type DependenciesFactories = typeof dependenciesFactories;

// Some type magic to type the container
export type Container = {
  [Key in keyof DependenciesFactories]: ReturnType<DependenciesFactories[Key]>;
};

export const container = {} as Container;

Object.entries(dependenciesFactories).forEach(([dependencyName, factory]) => {
  // This is why we need to wrap our dependencies in
  // an object, because then we're able to pass
  // the entire container to factories, and
  // even though we're passing both more dependencies
  // than needed and dependencies that, at this point
  // in time, might still be undefined, it doesn't matter.
  return Object.defineProperty(container, dependencyName, {
    // We're using a getter here to avoid
    // executing the factory right away, which
    // would break due to some dependencies
    // being undefined by the time the factory
    // is executed.
    // This way, factories are only
    // called when the whole container
    // is already set up, and then, accessing
    // some service triggers the recursive creation
    // of all its dependencies
    get: () => factory(container),
  });
});

This DIY automatic DI container construction is somewhat convoluted, but worry not: I only included it in this article to give you an idea of how automatic DI containers work under the hood. You don’t need to fully understand it to use the technique, especially because there are libraries that take care of that for us.

The main point here is that we don’t have to worry about the order in which we create our dependencies anymore, as our automatic DI container takes care of everything. The only thing we need to do is register each dependency with its factory.
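Here’s a quick sketch of what using the DIY container looks like: accessing a property on it triggers the lazy, recursive construction of everything that service needs.

// index.ts — usage sketch
import { container } from "./container";

// Reading this property calls makeRandomNumberList(container);
// inside the factory, reading container.randomNumber then triggers
// the creation of randomNumber (and, in turn, its randomGenerator).
const randomNumberList = container.randomNumberList;

console.log(randomNumberList(10, 5));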

Now, let me show you a production-grade automatic DI container.

Awilix

There are a few libraries out there that provide us with an automatic DI container, and in this post, we’ll use Awilix, which is my go-to DI container, given how powerful and easy to use it is.

// randomNumber.ts
export type RandomNumber = (max: number) => number;
// fastRandomNumber.ts
type Dependencies = {
  randomGenerator: () => number;
};

export const makeFastRandomNumber =
  ({ randomGenerator }: Dependencies) =>
  (max: number): number => {
    return Math.floor(randomGenerator() * (max + 1));
  };
// randomNumberList.ts
import { RandomNumber } from "./randomNumber";

type Dependencies = {
  randomNumber: RandomNumber;
};

export const makeRandomNumberList =
  ({ randomNumber }: Dependencies) =>
  (max: number, length: number): Array<number> => {
    return Array(length)
      .fill(null)
      .map(() => randomNumber(max));
  };

// For services where we expect to have
// a single implementation, we can also
// export this type which is going to be useful
// for other services
export type RandomNumberList = ReturnType<typeof makeRandomNumberList>;
// container.ts
import {
  asValue,
  asFunction,
  asClass,
  InjectionMode,
  createContainer as createContainerBuilder,
} from "awilix";
import { secureRandomNumber } from "secureRandomNumber";
import { makeFastRandomNumber } from "./fastRandomNumber";
import { makeRandomNumberList } from "./randomNumberList";
import { AwilixUtils } from "./utils/awilix";

// Here we'll define our dependencies
// and wrap them with the appropriate resolver
// so that they can be instantiated
// automatically by Awilix.
//
// Resolvers are used to tell Awilix
// how a given dependency must be resolved,
// whether it is created using a factory function,
// in which case we use the asFunction resolver,
// or whether it is an instance of a class, in which case
// the asClass is the correct resolver, or even
// whether it is a value that is ready to be used as is,
// that is, one that doesn't need to be constructed,
// in which case we use the asValue resolver.
export const dependenciesResolvers = {
  randomGenerator: asValue(Math.random),

  // This .singleton() means that we'll be using
  // a single instance of this dependency
  // throughout the whole application, which
  // is how we've been using dependency injection
  // so far.
  // There are other possible LIFETIMES,
  // but this goes out of the scope of this
  // article
  randomNumber:
    process.env.NODE_ENV === "production"
      ? asValue(secureRandomNumber)
      : asFunction(makeFastRandomNumber).singleton(),
  randomNumberList: asFunction(makeRandomNumberList),
};

// Everything below here is boilerplate

// This is the awilix container builder,
// where we register the dependencies resolvers and which
// will take care of actually plugging everything
// together
const containerBuilder = createContainerBuilder({
  injectionMode: InjectionMode.PROXY,
});

containerBuilder.register(dependenciesResolvers);

// The actual container that contains all
// our instantiated dependencies
export const container = containerBuilder.cradle as AwilixUtils.Container<
  typeof dependenciesResolvers
>;
// utils/awilix.ts

import { BuildResolver, DisposableResolver, Resolver } from "awilix";

export namespace AwilixUtils {
  // Some type magic to make sure our container
  // is typed correctly
  type ExtractResolverType<T> = T extends Resolver<infer U>
    ? U
    : T extends BuildResolver<infer U>
    ? U
    : T extends DisposableResolver<infer U>
    ? U
    : never;

  export type Container<Resolvers> = {
    [Key in keyof Resolvers]: ExtractResolverType<Resolvers[Key]>;
  };
}

Let’s take a more in-depth look at Awilix:

Ultimately, we want to end up with a container that has all our services instantiated. To do that, we need to tell Awilix two things:

  1. The dependencies we have.
  2. How to create them.

This is done by registering dependencies and their resolvers in the containerBuilder, where each “kind” of dependency must be wrapped with the appropriate resolver.

If we’re injecting something directly, that is, that doesn’t need any kind of construction/creation, we use the asValue resolver.

If what we’re injecting is created using a factory function (which we’ve been using so far), we then use the asFunction resolver.

Lastly, if our dependency is created using a class, we use the asClass resolver.

Awilix uses JS Proxy to do its magic, so whenever we access some service in the container, it intercepts the access and recursively creates all the dependencies needed to build the service we’re accessing.

One of the things I really like about Awilix is how unintrusive it is. To use it in our application, we didn’t have to change anything in the code. The only place that changed was the container itself.

Everything else remains the same: our tests, our services’ implementations, and how we call the container in our application’s entry point.

Cyclic Dependencies

Whenever a service depends directly or indirectly on itself, we have what’s called a circular or cyclic dependency, which is what happens, for instance, with mutually recursive functions.

// Without dependency injection

// a.ts
import { b } from "./b";

export const a = (value: number): void => {
  console.log("a", value);

  if (value === 0) {
    return;
  }

  b(value - 1);
};
// b.ts
import { a } from "./a";

export const b = (value: number): void => {
  console.log("b", value);

  if (value === 0) {
    return;
  }

  a(value - 1);
};
// index.ts
import { a } from "./a";

a(2);

// Logs:
// a 2
// b 1
// a 0

In this illustrative example, a depends on b and b also depends on a, which means that a depends on itself indirectly.

So, if we tried to use DI with these functions without a DI container (manual or automatic), it wouldn’t work:

// a.ts
import { makeA } from "./aImpl";
import { b } from "./b";

export type A = (value: number) => void;

export const a = makeA({ b });
// b.ts
import { makeB } from "./bImpl";
import { a } from "./a";

export type B = (val: number) => void;

export const b = makeB({ a });
// aImpl.ts
import { A } from "./a";
import { B } from "./b";

type Dependencies = {
  b: B;
};

export const makeA =
  ({ b }: Dependencies): A =>
  (value: number) => {
    console.log("a", value);

    if (value === 0) {
      return;
    }

    b(value - 1);
  };
// bImpl.ts
import { A } from "./a";
import { B } from "./b";

type Dependencies = {
  a: A;
};

export const makeB =
  ({ a }: Dependencies): B =>
  (value: number) => {
    console.log("b", value);
    if (value === 0) {
      return;
    }

    a(value - 1);
  };
// index.ts
import { a } from "./a";

a(2);
// Logs:
// a 2
// b 1
// Error: a is not a function

Here is what happens when we execute index.ts:

  1. The interpreter encounters the import { a } from "./a" statement, so it “goes” to a.ts.
  2. The interpreter encounters the import { b } from "./b" statement, so it “goes” to b.ts.
  3. The interpreter encounters the import { a } from "./a" statement, but it knows it has already “been” to a.ts, so it ignores this statement and keeps parsing b.ts.
  4. It creates b by injecting a, but notice that we haven’t reached the point where a is created yet, so the a that is passed to b is actually undefined.
  5. After parsing b.ts, it “goes” back to a.ts and creates a, injecting b into it, which has already been created.
  6. Finally, after parsing a.ts and b.ts, it “goes” back to index.ts and calls a, which works fine; then a calls b, which also works fine; but then, when b calls a, as its a reference is undefined, everything blows up.

I won’t delve too deeply into this subject because it would take us off our track, so if you wish to read more about it, I recommend this post, which deals specifically with how to handle cyclic dependencies when doing dependency injection.

However, I will show you how we solve this problem using Awilix.

As Awilix creates dependencies automatically, we don’t need to change anything in the container itself; the only thing we need to do is the following:

// aImpl.ts
import { A } from "./a";
import { B } from "./b";

type Dependencies = {
  b: B;
};

// Notice we're not destructuring our dependencies
// in the function's signature anymore
export const makeA =
  (deps: Dependencies): A =>
  (value: number) => {
    // We now must destructure them in the
    // resulting function
    const { b } = deps;

    console.log("a", value);

    if (value === 0) {
      return;
    }

    b(value - 1);
  };

// bImpl.ts
import { A } from "./a";
import { B } from "./b";

type Dependencies = {
  a: A;
};

export const makeB =
  (deps: Dependencies): B =>
  (value: number) => {
    const { a } = deps;

    console.log("b", value);
    if (value === 0) {
      return;
    }

    a(value - 1);
  };

And voila, we can safely use services that have cyclic dependencies.
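For reference, here’s a sketch of what the container for a and b could look like. The registration itself needs nothing special: the cradle proxy resolves deps.a and deps.b lazily, at call time, which is what breaks the cycle.

// container.ts — sketch
import {
  asFunction,
  createContainer as createContainerBuilder,
  InjectionMode,
} from "awilix";
import { makeA } from "./aImpl";
import { makeB } from "./bImpl";

const containerBuilder = createContainerBuilder({
  injectionMode: InjectionMode.PROXY,
});

containerBuilder.register({
  a: asFunction(makeA).singleton(),
  b: asFunction(makeB).singleton(),
});

// `deps.b` inside a (and `deps.a` inside b) is only read at call time,
// through the cradle proxy, so neither service needs the other to
// already exist while it is being built.
containerBuilder.cradle.a(2);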

Writing Code: Top Down vs Bottom Up Approach

Without dependency injection, all we can do is write code in a bottom-up manner. We start by writing modules that have no dependencies, then write other modules that depend on the ones we have already written, and so on, all the way up to the “surface” modules.

Also, all our tests will be integration tests (which is not a bad thing by itself) due to the lack of mocking capabilities.

But what if we wanted to do the opposite? What if we wanted to start by developing modules that are closer to the “surface”, and then go all the way down to the bottom?

With dependency injection, we can easily do that, as I’m going to show you.

Suppose we’re developing a command-line to-do app (I know, very creative, right?), and instead of starting by thinking about how we’re going to store the todos and how all the logic is going to work, we will start with the user interface.

So, the first thing we’ll develop is a run function that is the application’s entry point:

type Dependencies = {
  // Instead of reading directly from stdin
  // and writing to stdout, we'll depend on the
  // read and write abstractions, which gives us
  // greater flexibility and allows us to mock them
  read: () => Promise<string>;
  write: (data: string) => Promise<void>;
  showTodos: () => Promise<void>;
  toggleTodo: () => Promise<void>;
  addTodo: () => Promise<void>;
  editTodo: () => Promise<void>;
  deleteTodo: () => Promise<void>;
  invalidCommand: () => Promise<void>;
};

export const makeRun =
  ({
    read,
    write,
    showTodos,
    toggleTodo,
    addTodo,
    editTodo,
    deleteTodo,
    invalidCommand,
  }: Dependencies) =>
  async () => {
    await write(
      `Welcome to the To Do App!

    Commands:
    "show" - Show todos.
    "toggle" - Toggle todo.
    "add" - Add todo.
    "edit" - Edit todo.
    "delete" - Delete todo.
    `
    );

    while (true) {
      const command = (await read()).trim();

      switch (command) {
        case Command.Show:
          showTodos();
          break;

        case Command.Toggle:
          toggleTodo();
          break;

        case Command.Add:
          addTodo();
          break;

        case Command.Edit:
          editTodo();
          break;

        case Command.Delete:
          deleteTodo();
          break;

        case Command.Quit:
          return 0;

        default:
          invalidCommand();
      }
    }
  };

export const enum Command {
  Show = "show",
  Toggle = "toggle",
  Add = "add",
  Edit = "edit",
  Delete = "delete",
  Quit = "quit",
}

export type Run = ReturnType<typeof makeRun>;

The run function writes the welcome text to the terminal, then enters a loop where it reads the commands issued by the user and invokes the corresponding routine.

As read, write, and all the invoked routines are extracted as parameters, we don’t need to care about their implementation right now; the only thing we need to know is what we expect them to do, and the “how” can be deferred.

Notice that at this point we can’t run the application because we didn’t write the implementations the run function will use, so how can we be sure that we’re coding the “right thing”?

The answer is tests: by writing unit tests for the run function we can exercise it without running the application, as we can mock all the missing dependencies.

//run.test.ts

// Assuming makeRun and Command are exported from a run.ts file
import { Command, makeRun } from "./run";

// Here we have a sample test
// I won't include others for the sake of
// brevity, but we could easily exercise other
// execution paths by simulating different user
// inputs with the mocked read function.
describe("When initializing", () => {
  it("Displays the correct message", async () => {
    const read = jest.fn().mockReturnValueOnce(Promise.resolve(Command.Quit));
    const write = jest.fn();
    const showTodos = jest.fn();
    const toggleTodo = jest.fn();
    const addTodo = jest.fn();
    const editTodo = jest.fn();
    const deleteTodo = jest.fn();
    const invalidCommand = jest.fn();

    const run = makeRun({
      read,
      write,
      showTodos,
      toggleTodo,
      addTodo,
      editTodo,
      deleteTodo,
      invalidCommand,
    });

    await run();

    expect(write).toHaveBeenCalledTimes(1);
    expect(write).toHaveBeenNthCalledWith(
      1,
      `Welcome to the To Do App!

    Commands:
    "show" - Show todos.
    "toggle" - Toggle todo.
    "add" - Add todo.
    "edit" - Edit todo.
    "delete" - Delete todo.
    `
    );
  });
});

Then, once we’re satisfied with our run implementation and tests, we can move forward to its direct dependencies, for instance, the showTodos routine:

// todo.ts

// Even though we might not need to have our dependencies
// implementations at a certain point in time, we might need
// their interface, which is the case here
export type Todo = {
  id: string;
  text: string;
  status: TodoStatus;
};

export const enum TodoStatus {
  Complete = "Complete",
  Incomplete = "Incomplete",
}
// showTodos.ts
import { Todo, TodoStatus } from "./todo";

type Dependencies = {
  write: (data: string) => Promise<void>;
};

export const makeShowTodos =
  ({ write }: Dependencies) =>
  async (todos: Array<Todo>) => {
    if (todos.length === 0) {
      write("No todos yet!");
      return;
    }

    const formattedTodos = todos.reduce((string, todo, index) => {
      const formattedStatus =
        todo.status === TodoStatus.Complete ? "[x]" : "[ ]";
      const todoNumber = index + 1;
      const formattedTodo = `${todoNumber}. ${formattedStatus} ${todo.text}\n`;

      return string + formattedTodo;
    }, "");

    write(formattedTodos);
  };

export type ShowTodos = ReturnType<typeof makeShowTodos>;
// showTodos.test.ts
import { makeShowTodos } from "./showTodos";
import { Todo, TodoStatus } from "./todo";

describe("When there are NO todos", () => {
  it("Displays the appropriate message", async () => {
    const write = jest.fn();
    const showTodos = makeShowTodos({
      write,
    });

    await showTodos([]);

    expect(write).toHaveBeenCalledTimes(1);
    expect(write).toHaveBeenNthCalledWith(1, "No todos yet!");
  });
});

describe("When there are todos", () => {
  it("Displays todos formatted correctly", async () => {
    const write = jest.fn();
    const todos: Array<Todo> = [
      {
        id: "234",
        status: TodoStatus.Complete,
        text: "Walk dog",
      },
      {
        id: "323345",
        status: TodoStatus.Incomplete,
        text: "Wash dishes",
      },
    ];
    const showTodos = makeShowTodos({
      write,
    });

    await showTodos(todos);

    expect(write).toHaveBeenCalledTimes(1);
    expect(write).toHaveBeenNthCalledWith(
      1,
      "1. [x] Walk dog\n2. [ ] Wash dishes\n"
    );
  });
});

Then we proceed until all the dependencies are effectively implemented.

The nice thing about this approach is that we can defer thinking about the inner workings of our application and focus on its observable behavior.

One last consideration is that the top-down and the bottom-up approaches sit at the extremes of a “line”, and there’s a whole gradient between them. We could start by writing some “surface” modules, then write some “core” modules, until they eventually meet in the “middle”.

That is the power dependency injection gives us: by making modules very loosely coupled, we can program to interfaces instead of implementations, which gives us a great deal of flexibility in how we write our software.

Final Considerations

In this post, we saw what dependency injection is, how to implement it, and some of its applications.

You probably noticed that throughout the post the implementations we used for dependency injection got gradually more sophisticated as we had to deal with more complex problems.

Based on that, I’d like to give one piece of advice, especially if you’re new to dependency injection: you don’t need to start with the most sophisticated approach. I’d suggest you start with the simplest implementation that serves your needs, and then, you can gradually move to a more sophisticated implementation as your needs grow.

In the next post in this series, we’ll talk about more techniques surrounding dependency injection, like how DI makes integration testing a breeze and how to systematically deal with containers whose creation is complex (e.g. needs to be created asynchronously).

Further Reading

Dependency Injection Principles, Practices and Patterns

Dependency Injection in NodeJS – by Rising Stack

Dependency Injection in NodeJS – by Jeff Hansen (Awilix Author)

Dependency Injection Vantagens, Desafios e Approaches – by Talysson Oliveira (Portuguese)

Six approaches to dependency injection – by Scott Wlaschin
