Scalable Frontend #1 — Architecture Fundamentals

How can your frontend benefit from software architecture?

This post is part of the Scalable Frontend series; you can find the other parts here: “#2 — Common Patterns” and “#3 — The State Layer”.

When we talk about scalability in software development, we usually mean one of two things: the performance of the application or the maintainability of its codebase over time. You can have both, but focusing on good maintainability makes it easier to tweak the performance without affecting the rest of the application. Even more so on the frontend, where we have an important difference from the backend: the local state.

In this series of posts, we’re going to talk about how to develop and maintain a scalable frontend application using approaches tested in real projects. Most of our examples will use React and Redux, but we’ll often compare them with other tech stacks to show how you can achieve the same results. Let’s begin the series by talking about architecture, the most important part of your software.

What is software architecture?

What is architecture anyway? It seems pretentious to say that architecture is the most important part of your software, but bear with me.

Architecture is how you make the units of your software interact with each other so that the most important decisions are highlighted and the secondary decisions and implementation details are postponed. Designing the architecture of a piece of software means separating the actual application from its supporting technologies. Your actual application doesn’t know about databases, AJAX requests, or the GUI; instead, it’s composed of use cases and domain units representing the concepts covered by your software, regardless of which actors execute the use cases or where the data is persisted.

There’s also something important to talk about regarding architecture: it doesn’t mean file organization, and it’s not how you name files and folders.

Layers in frontend development

One way to separate what is important from what is secondary is by using layers, each with a different and specific set of responsibilities. A common approach in a layer-based architecture is to separate it into four layers: application, domain, infrastructure, and input. These four layers are better explained in another post, NodeJS and Good Practices. We recommend you read the first part of the post about them before continuing. You don’t have to read the second part since it’s specific to NodeJS.

The domain and application layers aren’t that different between the frontend and the backend since they’re technology agnostic, but we can’t say the same about the input and the infrastructure layers. In a web browser, it’s common to have a single actor in the input layer, the view, so we can even call it the view layer. Also, the frontend doesn’t have access to a database or a queue engine, so we won’t find these in our frontend infrastructure layer. What we will find, instead, are abstractions that encapsulate AJAX requests, browser cookies, the LocalStorage, or even units that interact with WebSocket servers. The main difference is only what is being abstracted, so you can even have frontend and backend repositories with exactly the same interface but a different technology underneath. Can you see how awesome a good abstraction can be?
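
To make that concrete, here’s a hedged sketch of what such an abstraction could look like: a thin wrapper around fetch exposing a small get/post interface, similar to the api unit imported by the code examples later in this post (the error handling is illustrative):

// infra/api: a minimal sketch of an infrastructure abstraction over AJAX requests.
// Anything that consumes it only sees get/post, never fetch itself.
export default {
  async get(path) {
    const response = await fetch(path);
    if (!response.ok) throw new Error(`GET ${path} failed with status ${response.status}`);
    return response.json();
  },

  async post(path, body) {
    const response = await fetch(path, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body)
    });
    if (!response.ok) throw new Error(`POST ${path} failed with status ${response.status}`);
    return response.json();
  }
};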

It doesn’t matter whether you’re using React, Vue, Angular, or any other tool to create your view. What’s important is to follow the input layer rule of not having any logic and to delegate the input parameters to the next layer. Regarding a frontend layer-based architecture, there’s another important rule: to keep the input/view layer always in sync with the local state, you should follow the one-way data-flow. Does this term sound familiar? We can enforce it by adding a fifth layer specifically for that: the state, also known as the store.

The State layer

When following the one-way data-flow, we never change or mutate the data received by a view directly inside the view. Instead, we dispatch what we call “actions” from the views. It goes like this: an action sends a message to the source of the data, the source updates itself, and then the view is re-rendered with the new data. Notice that the view never writes to the store directly; every change goes through an action, so if two sub-views use the same data, you can dispatch the action from either of them and both will be re-rendered with the new data. It may seem that I’m talking specifically about React and Redux, but that’s not the case; you can achieve the same results with almost every modern frontend framework or library, like React + context API, Vue + Vuex, Angular + NGXS, or even Ember using the data-down action-up approach (a.k.a. DDAU). You can even do it with jQuery, using its event system to send actions up!
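
To illustrate the flow without tying it to any library, here’s a minimal sketch of a store with dispatch and subscribe (the names are illustrative, not a real API): the views dispatch an action, the source of the data updates itself, and every subscribed view is re-rendered with the new data.

// A bare-bones one-way data-flow: dispatch -> update state -> notify every view.
const createStore = (reducer, initialState) => {
  let state = initialState;
  const listeners = [];

  return {
    getState: () => state,
    subscribe: (listener) => listeners.push(listener),
    dispatch: (action) => {
      state = reducer(state, action);                   // the source of the data updates itself...
      listeners.forEach((listener) => listener(state)); // ...and every subscribed view re-renders
    }
  };
};

const store = createStore(
  (state, action) =>
    action.type === 'USER_RENAMED' ? { ...state, name: action.name } : state,
  { name: 'Alice' }
);

// Two sub-views consuming the same data: a single dispatch re-renders both.
store.subscribe((state) => console.log('header renders', state.name));
store.subscribe((state) => console.log('profile renders', state.name));

store.dispatch({ type: 'USER_RENAMED', name: 'Bob' });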

This layer is responsible for managing the local and constantly-changing state of your frontend, like the data that’s been fetched from the backend, temporary data created in the frontend and not yet persisted, or transient info like the status of a request. In case you’re wondering, that’s the layer where the actions and their handlers responsible for updating the state live.

Even though it’s common to see codebases with business rules and use case definitions directly inside actions, if you read the description of the other layers carefully, you’ll see that we already have a place for our use cases and business rules, and it’s not the state layer. Does that mean our actions are now use cases? No! So how should we treat them?

Let’s think for a moment… we said that actions are not use cases and that we already have a layer for our use cases. The views should dispatch actions, which take the info coming from the view, hand it to the use cases, dispatch new actions based on the response, and finally update the state — which updates the view and closes the one-way data-flow. Don’t the actions sound like controllers now? Aren’t they a place to take params from the view, delegate to the use case, and respond based on the result of that use case? That’s exactly how you should treat them. No complex logic or direct AJAX calls should go in there; those are the responsibilities of other layers. The state layer should know only how to manage the local state, and that’s all.
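
Here’s a hedged sketch of what that looks like with redux-thunk, assuming the createUser use case is available to the action through a container (how it gets injected is covered in the Dependency injection section below; the file path is illustrative):

// state/user/actions: the action behaves like a controller. It takes the params
// from the view, delegates to the use case, and dispatches based on the result.
export const createUser = (userData) => async (dispatch, getState, container) => {
  dispatch({ type: 'CREATE_USER_REQUEST' });

  try {
    const user = await container.createUser(userData); // delegate to the application layer
    dispatch({ type: 'CREATE_USER_SUCCESS', user });
  } catch (error) {
    dispatch({ type: 'CREATE_USER_FAILURE', error: error.message });
  }
};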

There’s another important factor in play. Since the state layer manages the local state consumed by the view layer, you’ll notice that these two are coupled in some way. There will be some data in the state layer that exists only for the view, like a boolean flag that says whether a request is still pending so that the view can display a spinner, and that’s totally OK. Don’t beat yourself up over that; you don’t need to overgeneralize the state layer.
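
For example, a reducer handling the actions sketched above could keep a creating flag purely so the view can show a spinner (a hedged sketch in Redux style; the shape of the state is illustrative):

// state/user/reducer: view-only data, like the `creating` flag, is fine here.
const initialState = { user: null, creating: false };

export default (state = initialState, action) => {
  switch (action.type) {
    case 'CREATE_USER_REQUEST':
      return { ...state, creating: true };
    case 'CREATE_USER_SUCCESS':
      return { ...state, creating: false, user: action.user };
    case 'CREATE_USER_FAILURE':
      return { ...state, creating: false };
    default:
      return state;
  }
};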

Dependency injection

OK, layers are cool, but how do they talk to each other? How do we make a layer rely on another without coupling them? Is it possible to test all the possible outputs of an action without executing the use case it delegates to? Is it possible to test a use case without triggering an AJAX call? For sure it is, and we can do this with dependency injection.

Dependency injection is a technique in which a unit receives the dependencies it would otherwise be coupled to as parameters when it’s created. Examples include receiving the dependencies of a class in its constructor, or using React/Redux to connect a component to the store and inject the required data and actions as props. The theory isn’t complicated, right? The practice shouldn’t be either, so let’s use a React/Redux application as an example.
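
As a quick, hedged sketch of that second example, connecting a component injects the data and the actions it needs as props (the component and paths here are illustrative):

// A sketch of injecting state and actions into a view with react-redux.
import { connect } from 'react-redux';
import { createUser } from './state/user/actions'; // the action sketched earlier

// The view only knows about its props, not about the store itself.
const UserForm = ({ creating, createUser }) =>
  creating ? 'Rendering a spinner...' : 'Rendering the form; submitting calls createUser(userData)';

const mapStateToProps = (state) => ({ creating: state.user.creating });
const mapDispatchToProps = { createUser };

export default connect(mapStateToProps, mapDispatchToProps)(UserForm);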

We just said that using React/Redux’s connect is a way to achieve dependency injection between the view and the state layer and that’s as straightforward as it gets. But we also said before that the actions delegate the business logic to the use cases, so how do we inject the use cases (application layer) into the actions (state layer)?

Let’s imagine for a second that you have an object that contains a method for each use case of your application. This object is commonly known as a dependency container. Yeah, it may seem weird, as if it won’t scale well, but that doesn’t mean the implementations of the use cases live inside this object. These are just methods that delegate to the use cases, which are defined somewhere else. It’s way better to have a single object with all the use cases of your application than to have them spread throughout your codebase, where they’re really difficult to find. With this object at hand, all we need to do is inject it into the actions and let each of them decide which use case will be triggered, right?

If you’re using redux-thunk, it’s really simple to achieve this with the withExtraArgument method, which allows you to inject the container into every thunk action as the third parameter, after getState. The approach is just as easy if you’re using redux-saga, where we pass the container as the second parameter of the run method. If you’re using Ember or Angular, the built-in dependency injection machinery should suffice.
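
With redux-thunk, the wiring could look like the hedged sketch below (the paths are illustrative; the redux-saga equivalent would be sagaMiddleware.run(rootSaga, container)):

// Injecting the dependency container into every thunk action.
import { createStore, applyMiddleware } from 'redux';
import thunk from 'redux-thunk';
import rootReducer from './state/rootReducer'; // illustrative path
import * as container from './container';      // the container we build below

const store = createStore(
  rootReducer,
  applyMiddleware(thunk.withExtraArgument(container))
);

// Every thunk action now receives the container as its third argument:
// (userData) => (dispatch, getState, container) => { ... }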

Doing that decouples the actions from the use cases because you won’t need to import the use cases manually in each file where you define an action. Moreover, testing the action separately from the use case is now pretty simple: just inject a fake use case implementation that behaves exactly the way you want. Do you want to test which action will be dispatched if the use case fails? Inject a mock use case that always fails, then test how the action responds to that. No need to think about how the actual use case works.
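
A hedged sketch of such a test, using Jest-style assertions and the createUser action sketched earlier:

// Testing the action in isolation by injecting a fake, always-failing use case.
import { createUser } from './state/user/actions';

it('dispatches CREATE_USER_FAILURE when the use case fails', async () => {
  const dispatch = jest.fn();
  const getState = () => ({});
  const fakeContainer = {
    createUser: async () => { throw new Error('boom'); }
  };

  await createUser({ name: 'Alice' })(dispatch, getState, fakeContainer);

  expect(dispatch).toHaveBeenCalledWith({
    type: 'CREATE_USER_FAILURE',
    error: 'boom'
  });
});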

Great, we have the state layer injected into the view layer, and the application layer injected into the state layer. What about the rest? How do we inject dependencies into the use cases to build the dependency container? That’s an important question and there are a lot of ways to do it. First of all, don’t forget to check if the framework you’re using has dependency injection built-in, like Angular or Ember. If it does, you shouldn’t build your own. If it doesn’t, you can do it in two ways: either manually or with a little help from a package.

Doing it manually should be straightforward:

  • Define your units as classes or closures,

  • Instantiate the ones that have no dependencies first,

  • Instantiate the units that depend on those, passing the already-created units as parameters,

  • Repeat until you have all the use cases instantiated,

  • Export them.

Too abstract? Take a look at a few code examples: the dependency container itself, followed by the createUser use case and the userRepository it delegates to (each file is marked with a comment):

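// The dependency container, assembled by hand (put it wherever makes sense, e.g. container.js).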
import api from './infra/api'; // has no dependencies
import { validateUser } from './domain/user'; // has no dependencies
import makeUserRepository from './infra/user/userRepository';
import makeArticleRepository from './infra/article/articleRepository';
import makeCreateUser from './app/user/createUser';
import makeGetArticle from './app/article/getArticle';

const userRepository = makeUserRepository({
  api
});

const articleRepository = makeArticleRepository({
  api
});

const createUser = makeCreateUser({
  userRepository,
  validateUser
});

const getArticle = makeGetArticle({
  userRepository,
  articleRepository
});

export {
  createUser,
  getArticle
};
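
// app/user/createUser.js: the createUser use case factory, receiving its dependencies as a parameter.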
export default ({ validateUser, userRepository }) => async (userData) => {
  if(!validateUser(userData)) {
    throw new Error('Invalid user');
  }

  return userRepository.add(userData);
};
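
// infra/user/userRepository.js: the user repository, delegating persistence to the api abstraction.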
export default ({ api }) => ({
  async add(userData) {
    const user = await api.post('/users', userData);

    return user;
  }
});

You’ll notice that the important parts, the use cases, are instantiated at the end of the container file and are the only objects being exported, because they’re the ones that will be injected into the actions. The rest of your code doesn’t need to know how the repository is created or how it works; that’s just a technical detail. For the use case, it doesn’t matter whether the repository sends an AJAX request or persists something in the LocalStorage; it’s not the use case’s responsibility to know that. If you want to use the LocalStorage while your API is still in development and then switch to real calls over the wire later, you won’t need to change the use case as long as the code that talks to the API follows the same interface as the code that talks to the LocalStorage.
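
As a hedged sketch, a LocalStorage-backed version of the repository only needs to expose the same add method (the storage key and id generation here are illustrative):

// infra/user/localStorageUserRepository: same interface as the AJAX-backed repository above.
export default ({ storage = window.localStorage } = {}) => ({
  async add(userData) {
    const users = JSON.parse(storage.getItem('users') || '[]');
    const user = { ...userData, id: users.length + 1 };

    storage.setItem('users', JSON.stringify([...users, user]));

    return user;
  }
});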

You can get pretty far doing the injection manually as described above, even if you have dozens of use cases, repositories, services, and so on. If it gets too messy to build all your dependencies though, you can always use a dependency injection package, as long as it doesn’t increase the coupling.

A rule of thumb to check whether a DI package is good enough: moving from the manual approach to the library shouldn’t require touching anything beyond the container code. If it does, the package is too intrusive and you should choose a different one. If you really want to use a package, we recommend Awilix. It’s pretty simple to use, and moving off the manual approach will only require touching the container file. There’s a very good series about how and why to use it, written by the author of the package.
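
For reference, here’s a hedged sketch of the same container built with Awilix, assuming its default PROXY injection mode, where each factory receives the cradle and can destructure its dependencies from it:

// The same wiring as the manual container, now registered with Awilix.
import { createContainer, asFunction, asValue } from 'awilix';
import api from './infra/api';
import { validateUser } from './domain/user';
import makeUserRepository from './infra/user/userRepository';
import makeArticleRepository from './infra/article/articleRepository';
import makeCreateUser from './app/user/createUser';
import makeGetArticle from './app/article/getArticle';

const container = createContainer();

container.register({
  api: asValue(api),
  validateUser: asValue(validateUser),
  userRepository: asFunction(makeUserRepository),
  articleRepository: asFunction(makeArticleRepository),
  createUser: asFunction(makeCreateUser),
  getArticle: asFunction(makeGetArticle)
});

// container.cradle.createUser resolves the use case and its whole dependency tree on demand.
export default container.cradle;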

Coming next

OK, we’ve talked about architecture and how to connect the layers in a nice way! In the next post, we’re going to show some real code and common patterns for the layers we just spoke about, with the exception of the state layer, which will get a post of its own. Take some time to absorb these concepts; they’re going to be useful when we go into detail about these patterns, and everything will make more sense. See you there!

Written by Talysson de Oliveira and Iago Dahlem Lorensini.
