When we work on big projects, a common path is to split them into separate codebases to make them easier to manage. A modern and fairly obvious approach to consider is a modular front-end architecture. Starting that from scratch nowadays might be simple, but what about when you have multiple teams working together on not-so-new applications already running in production? How do you implement it when the piece you need to share can’t just be one more URL on the infrastructure side?
Well, everything comes with a cost, and today I’ll show you a very simple approach to consuming a React application inside another one. I know it sounds weird, but keep going, I promise it will make sense in a bit.
The use case
The other day, I met my PM to discuss the task requirements, and basically we wanted to "export" a piece of our experience to other teams. Let me give you some context: we had a Hub of products, and sometimes it made sense to show some info about one product inside another product’s surface. It’s like going to the cashier and grabbing some chocolates on the way. The decision we made was to create a widget as an NPM package in a Lerna monorepo, since the component would be just a small set of cards. So far so good: we built the component, published the first version, and distributed it to the other teams.
The problem
Things started going wrong when we had to update it. Updating a package means a new build and deployment for each team using the component; that’s what we call build-time integration. There’s nothing wrong with this approach, and it’s perfectly fine when you don’t need frequent updates. But that was not our case: we did need them, and not all teams were able to bump the version and redeploy their project on time. We ended up with a terrible gap in user experience, because a user landing on surface A would see different content from surfaces B and C. There was no guarantee that all teams were on the same version, which caused production issues, out-of-date code, and hard debugging whenever something broke.
A word about microfrontends
So, first things first: let’s keep in mind that what we wanted to solve was the dependency on other teams to update our component. To solve that, we would have to make this component independent, in a way that lets us ship updates over the air. If you thought "okay, just turn it into a microfrontend and that’s it", I have to tell you that using an NPM package, an iframe, or any other fancy way already is a microfrontend approach. My understanding is that we should differentiate them as build-time vs runtime integration.
How did we work around that?
So let’s talk about why this wasn’t so trivial for us. I had some rules I couldn’t break while implementing it. We were already causing enough pain to the other teams, so we wanted this process to be as smooth as possible for them and to require the minimum amount of changes on their end.
The first thing you find on Google when researching this kind of thing is Webpack Module Federation. That approach is great, but it was harder and ultimately unfeasible for our case, since it requires Webpack-specific changes and deeper configuration on the consumer side. As I said above, we wanted this to be transparent for them.
So we had an NPM package, built using Create React App (CRA). In theory, we just needed to take the compiled file, expose it behind a URL, and inject it into our consumers’ pages. YES! In theory…
When you create a new React application using CRA, this is what the index.js file looks like:
import React from "react";
import ReactDOM from "react-dom/client";
import App from "./App";

const root = ReactDOM.createRoot(document.getElementById("root"));
root.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>
);
When you run npm run build, it generates a production build with all the assets, the HTML, and a bundle.js that, in theory, could be injected as a script anywhere. To make this work, I had to customize the component’s index.js so it doesn’t immediately look for a root node in the DOM:
import React from "react";
import ReactDOM from "react-dom/client";
import App from "./App";

// Mount the widget into whatever node the consumer provides
const mountShareableComponent = () => {
  const element = document.getElementById("shareable-component");
  const root = ReactDOM.createRoot(element);
  root.render(
    <React.StrictMode>
      <App />
    </React.StrictMode>
  );
};

// Expose the mount function globally so the consumer can call it
window.mountShareableComponent = mountShareableComponent;
We just wrapped the render call in a function and exposed it as a global on the window object.
Then, to render the shareable piece in our container:
import { useEffect } from "react";
import useScript from "./useScript"; // custom hook that injects the script tag (see the sketch below)

function App() {
  const status = useScript("http://localhost:3000/static/js/bundle.js");

  // Once the remote bundle has loaded, call the globally exposed mount function
  useEffect(() => {
    if (status === "ready") window.mountShareableComponent();
  }, [status]);

  return (
    <div className="App">
      <h1>MFE DEMO</h1>
      <div id="shareable-component" />
    </div>
  );
}
That’s it! The consumer just needs to inject our bundle file into their DOM and declare a div with our unique identifier where they want the component to show up.
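A quick note on useScript: it’s not something React ships, just a small custom hook (there are popular recipes for it around, usehooks.com has one, for example). A minimal sketch of such a hook could look like this:

import { useEffect, useState } from "react";

// Minimal sketch: inject a <script> tag for the given source and report
// its loading status ("idle" | "loading" | "ready" | "error").
function useScript(src) {
  const [status, setStatus] = useState(src ? "loading" : "idle");

  useEffect(() => {
    if (!src) {
      setStatus("idle");
      return;
    }

    setStatus("loading");
    const script = document.createElement("script");
    script.src = src;
    script.async = true;

    const onLoad = () => setStatus("ready");
    const onError = () => setStatus("error");
    script.addEventListener("load", onLoad);
    script.addEventListener("error", onError);

    document.body.appendChild(script);

    return () => {
      script.removeEventListener("load", onLoad);
      script.removeEventListener("error", onError);
    };
  }, [src]);

  return status;
}

export default useScript;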
Trade-offs
The approach above is pretty easy to implement and does not require deep configuration on the consumer’s end, which is great for our case. However, since the widget is an isolated application, it ships its own copy of React and its dependencies, so it’s heavier and will impact the page load time. There are a few ways to solve that, or at least minimize the impact.
You could build your microfrontend using Preact, which would give you a much smaller bundle.
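If you go the Preact route, a common way to do it without rewriting the component is aliasing React to preact/compat at build time. This is just a sketch and assumes you can touch the Webpack config (for example via CRACO or an ejected CRA setup):

// webpack.config.js (sketch): resolve React imports to Preact's compat layer
module.exports = {
  // ...the rest of your config
  resolve: {
    alias: {
      react: "preact/compat",
      "react-dom": "preact/compat",
      "react/jsx-runtime": "preact/jsx-runtime",
    },
  },
};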
Another way to minimize it is to use Webpack externals; then we assume that React is somehow already loaded on the parent application (if it’s a React app too).
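For reference, here’s a sketch of what that externals config could look like. It tells Webpack not to bundle react and react-dom and to read them from globals on the host page instead:

// webpack.config.js (sketch): don't bundle React, expect it as a global on the host page
module.exports = {
  // ...the rest of your config
  externals: {
    react: "React",
    "react-dom": "ReactDOM",
  },
};

Keep in mind this only works if the host actually exposes those globals (for example by loading React from a CDN), so it’s something to agree on with the consumer teams.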
The browser caching issue
Alright, this is a real problem. If we’re fetching an asset, the browser will cache it, so even if we deploy a new version of the component, the consumer would still get out-of-date content. To solve that, we can rely on another file generated by the React build. When we run npm run build, along with the build files we get an asset-manifest.json that includes the paths to the assets, like:
{
  "files": {
    "main.js": "/static/js/main.cb7d57d4.js"
  }
}
Note that the bundled file name contains a hash, and this hash changes on every build. So, before adding the script tag to the consumer’s DOM, we fetch this asset-manifest file to get the current file name. If it hasn’t changed, we’re safe to use the cached file. Otherwise, if the file name differs from the cached one, the browser will fetch the new file. (We can provide a hook for consumers so they don’t need to implement any of this logic on their side.)
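Here’s a rough sketch of what such a hook could look like. The hook name, the baseUrl parameter, and the reuse of the useScript hook from earlier are my own choices for illustration, not part of the original setup:

import { useEffect, useState } from "react";
import useScript from "./useScript"; // the hook sketched earlier; the path is up to you

// Sketch: resolve the current hashed bundle name through asset-manifest.json,
// load it, and mount the widget once the script is ready.
function useShareableComponent(baseUrl) {
  const [src, setSrc] = useState(null);

  useEffect(() => {
    // "no-cache" asks the browser to revalidate the manifest with the server,
    // so we always learn about a freshly deployed bundle
    fetch(`${baseUrl}/asset-manifest.json`, { cache: "no-cache" })
      .then((response) => response.json())
      .then((manifest) => setSrc(`${baseUrl}${manifest.files["main.js"]}`))
      .catch(() => setSrc(null));
  }, [baseUrl]);

  // useScript stays "idle" until src is known, then injects the script tag
  const status = useScript(src);

  useEffect(() => {
    if (status === "ready") window.mountShareableComponent();
  }, [status]);

  return status;
}

export default useShareableComponent;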
Communication between the two sides
We all know we could use JS custom events as a pub/sub strategy to exchange data between the two sides, or even to trigger some action from the parent. But we can take a simpler approach and pass props that the microfrontend knows about, as if it were a regular React component (in the end, it is one, right?). To do that, we can change our mount function a little bit:
import React from "react";
import ReactDOM from "react-dom/client";
import App from "./App";

const mountShareableComponent = () => {
  const element = document.getElementById("shareable-component");
  const root = ReactDOM.createRoot(element);

  // Read the "props" the consumer declared as data-* attributes on the mount node
  const props = {
    surface: element.dataset.surface,
  };

  root.render(
    <React.StrictMode>
      <App {...props} />
    </React.StrictMode>
  );
};

window.mountShareableComponent = mountShareableComponent;
Then, from the container:
return (
  <div className="App">
    <h1>MFE DEMO</h1>
    <div id="shareable-component" data-surface="home" />
  </div>
);
Let’s say we want to change the component style depending on the passed surface:
const surfaceColors = {
  home: "#282c34",
  sales: "#018878",
};

function App({ surface }) {
  return (
    <div className="App">
      <header style={{ backgroundColor: surfaceColors[surface] }}>
        ...
      </header>
    </div>
  );
}
This is useful when the microfrontend needs some initial data from the host, for example to decide what to render or how to style itself.
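And if you ever do need the custom-events approach mentioned earlier (say, to let the widget notify the parent that something happened), a minimal sketch could look like this; the event name and payload here are made up for illustration:

// Inside the microfrontend: tell the host something happened
window.dispatchEvent(
  new CustomEvent("shareable-component:product-selected", {
    detail: { productId: 42 },
  })
);

// On the host side: subscribe wherever it makes sense (e.g. during bootstrap)
window.addEventListener("shareable-component:product-selected", (event) => {
  console.log("User picked product", event.detail.productId);
});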
Conclusion
Well, that’s it. If you have worked in a similar environment with similar problems, where turning a piece of your app into a microfrontend with a fancier approach might be tough, you can take advantage of this simple alternative. If you want to dive deeper and play around with it, I also pushed my code to a GitHub repo.